Background

SIV Overview

SIV holders become lawful permanent residents upon admission to the United States under one of three special visa programs. The first, created in 2006, is for certain Afghans and Iraqis who have worked directly with U.S. Armed Forces, or under chief of mission authority, for at least one year as translators or interpreters. It is currently capped at 50 visas (excluding spouses and children) per year and is a permanent program. The other two SIV programs, for certain Iraqis and Afghans who worked for or on behalf of the U.S. government and as a consequence have experienced or are experiencing an ongoing serious threat, have been allocated larger numbers of visas but are temporary in nature and require legislation to extend them. The SIV program for Iraqis who worked for or on behalf of the U.S. government stopped accepting new applications after fiscal year 2014. The SIV program for Afghans who worked for or on behalf of the U.S. government continued to accept new applications as of November 2017 and, most recently, was allocated additional visas in December 2017. For all of these SIV programs, prospective special immigrants must complete multiple steps as required by the particular program to which they are applying, such as providing (1) a letter of recommendation from their direct U.S. citizen employment supervisor, (2) a statement describing the threats the applicant has received as a result of his or her U.S. government employment, and (3) forms and documents for all family members applying for visas. Additionally, applicants must have an in-person interview with a consular officer and have fingerprints taken at a U.S. embassy or consular office, among other steps in the process. The Iraqi and Afghan SIV application process has been subject to criticism due in part to the length of time it has taken some applications to be processed. Legislation was enacted to require State and the Department of Homeland Security to complete SIV applications within a specified period of time and to report on the efficiency of the application process.

SIV Resettlement in the United States and Resettlement Assistance

Afghan and Iraqi special immigrants are treated like refugees for purposes of federal public assistance, including receipt of resettlement assistance. Over time, SIV holders have accounted for an increasing percentage of the total number of individuals receiving resettlement assistance in the United States. SIV holders accounted for about 1 percent of the total number of individuals who received resettlement assistance upon arrival in fiscal year 2008 (the first year they were eligible for this assistance), 13 percent in fiscal year 2016, and about 26 percent in fiscal year 2017, a year in which total refugee arrivals declined (see fig. 1). During the application process overseas, SIV holders may elect to receive resettlement assistance upon arrival in the United States. If they indicate on their visa application that they have a tie in the United States and would like to be placed nearby, in most cases PRM will do so. SIV holders are then served by the local resettlement agencies in that area. (This is also true for refugees who indicate they have U.S. ties.) Most SIV holders travel to the United States in the same way as refugees, with travel booked by the International Organization for Migration (an intergovernmental organization). In these cases, resettlement agencies know of SIV holders' arrival in advance.
However, some SIV holders elect to book their own travel to the United States for various reasons, such as when they may be in immediate danger in their home countries. These SIV holders must contact a local resettlement agency as soon as possible after they arrive, generally within 30 days of arrival, to receive initial resettlement assistance through the R&P program. SIV holders who arrange their own travel and elect to receive resettlement assistance after they arrive in the United States are often known by resettlement agencies as walk-in SIV holders. Various federal programs provide resettlement assistance for which SIV holders are eligible (see table 1). The R&P program provides initial resettlement assistance for the first 30 to 90 days and is administered through agreements PRM has with the nine national resettlement agencies and their network of local resettlement agencies. The R&P cooperative agreement outlines what resettlement agencies must do for a newly arrived individual or family, including picking people up at the airport; providing initial housing, furniture, food, and clothing; helping children enroll in school or adults enroll in language programs; and developing a resettlement plan, which focuses on early employment for employable adults. Under the R&P agreement, PRM provides a fixed per capita grant to national resettlement agencies for individuals served ($2,075 in fiscal year 2017), of which a specified amount must be given in cash or spent directly on each individual served through the R&P program ($925 in fiscal year 2017). These grant amounts and standards are the same nationally. ORR's programs generally provide short-term assistance after the initial resettlement period. Several of ORR's key refugee assistance programs, such as cash and medical assistance and social services, are administered through grants to refugee coordinators (or their equivalent) in each state. These coordinators are, in many cases, located in state agencies (e.g., departments of social services), but in some cases in private organizations. At the local level, service providers may be county social services offices, local affiliates of the nine national resettlement agencies, or other community service providers. In contrast, ORR's Matching Grant program is administered through the national resettlement agencies and not the state refugee coordinators. This program provides cash assistance, employment services, and case management for up to 6 months. In some cases a household may receive Temporary Assistance for Needy Families (TANF) or Medicaid instead of refugee cash and medical assistance, depending on state eligibility rules and the characteristics of the household. In addition, because SIV holders, like refugees, are eligible to receive public benefits, they may also be eligible for other types of assistance, such as food assistance from the Supplemental Nutrition Assistance Program.

Available Data Provide Limited Information on SIV Holders' Short-Term Outcomes and No Information on Long-Term Outcomes

Limited data from PRM from fiscal year 2011 through part of fiscal year 2017 showed that most principal SIV holders were unemployed and relied on cash assistance for income 90 days after arrival in the United States. Available data from ORR for one of its programs, the Matching Grant program, provide slightly longer-term information but cover only a portion of SIV holders and are not representative.
ORR's Matching Grant data for fiscal year 2016 (the only year available) showed that most SIV holders were employed 6 months after arrival and no longer reliant on cash assistance. Although ORR regularly surveys the general refugee population up to 5 years after resettlement in order to examine their longer-term outcomes, it has never surveyed SIV holders for such information.

Limited State Department Data Collected 90 Days After Arrival Showed the Majority of Principal SIV Holders Were Unemployed

About 60 percent of all principal SIV holders participating in the R&P program who arrived in the United States from fiscal year 2011 through the first quarter of fiscal year 2017 were unemployed 90 days after arriving, according to data that PRM collects from resettlement agencies on R&P recipients. Based on our analyses of these data, principal SIV holders from Iraq tended to be unemployed at somewhat higher rates than those from Afghanistan. With respect to English speaking skills, the majority who reported their level of spoken English as "good" were unemployed, though they had considerably higher employment rates at 90 days than those reporting English levels of "some" or "none." In contrast, employment rates were relatively comparable among principal SIV holders with different levels of education, although those with postsecondary education had somewhat lower employment than those at the secondary level (see fig. 2). Additionally, almost all SIV households relied on cash assistance at 90 days in order to cover expenses such as housing costs. Even among households that had earnings from employment, most also relied on some form of cash assistance, according to our analysis of the R&P data. Of those SIV households receiving earnings from employment, 89 percent were also receiving income through the Refugee Cash Assistance, Matching Grant, or TANF programs, which is slightly higher than the rate of the overall refugee population receiving one of these types of cash assistance (82 percent). These were the most common types of cash assistance that SIV households received, with slightly less than a third also relying on personal assets (see fig. 3). SIV holders also received non-cash assistance and services within 90 days of arrival, based on our analysis of the R&P data. For example, nearly all SIV households received food assistance, the most common type of non-cash assistance. Other common forms of assistance included employment services and case management, which were provided to both principal SIV holders and spouses (mostly wives) at comparable rates (see fig. 4). To a lesser degree, principal SIV holders and spouses also received health services and access to English as a Second Language (ESL) courses, among other types of assistance.

HHS Data on About One-Third of SIV Holders Show Most Are Employed 6 Months After Arrival, but Federal Agencies Do Not Collect Longer-Term Data

The most recent data from the Matching Grant program, which is one cash assistance program in which selected SIV holders might participate during and after their initial 90 days in the United States, and the one such ORR program for which SIV outcomes could be identified, showed that the majority of SIV program participants were employed and no longer relying on cash assistance at the end of the 180-day (6-month) benefit period.
Specifically, about two-thirds of SIV holders in the Matching Grant program were employed at 180 days, a rate slightly lower than the rate for all Matching Grant participants, which include refugees, asylees, and other specified groups, according to data for fiscal year 2016 (see fig. 5). However, SIV holder participants had slightly higher rates of full-time employment and a slightly higher average wage of about $12 per hour compared with all Matching Grant participants. The relatively low wages may reflect, among other contributing factors, the general need for Matching Grant participants to accept the first available employment opportunity, including entry-level jobs, as a requirement of the program, as well as the length of the program, which ends at 180 days. About 80 percent of participating SIV households, as well as of all participating households in the Matching Grant program in fiscal year 2016, were considered "self-sufficient," defined by the program as having sufficient earnings to cover basic expenses without the need for cash assistance. About one-third of SIV households overall participate in the Matching Grant program. Findings on SIV holders participating in this program are not representative of all SIV holders, given program design elements. For instance, the Matching Grant program has limited enrollment slots, and resettlement agencies may have an incentive to select more "employable" candidates. In contrast, Refugee Cash Assistance and TANF, the other main cash assistance programs in which SIV holders may participate, generally serve all eligible clients based on income and other eligibility requirements. Additionally, unlike Refugee Cash Assistance or TANF, the benefit amount for the Matching Grant program is generally not reduced or terminated based on earnings, which may create additional incentives to find work and potentially increase the likelihood of employment at 90 days for Matching Grant participants. Our analysis of PRM's data from the R&P program shows that principal SIV holders participating in the Matching Grant program have a higher employment rate 90 days after arrival than those receiving cash assistance from ORR's Refugee Cash Assistance program or state TANF programs. Additionally, although Matching Grant data provide some additional information beyond what is collected for the R&P program, the data still provide relatively limited insight into individuals' employment and other outcomes. First, the Matching Grant data are collected 6 months after arrival, which is only a few months beyond the 90-day reporting period for the R&P program. The focus on 6-month outcomes aligns with the Matching Grant program's goal of immediate self-sufficiency and employment before the end of cash assistance; however, the short timeframe precludes any understanding of participants' progress in job security, wage growth, or career advancement over the longer term. Second, while the Matching Grant data do include information on full-time or part-time employment status and average wage, information not captured in the R&P data, they do not provide information on type of employment, career or wage progression, or the amount by which earnings exceed expenses for those households considered self-sufficient. Moreover, ORR's guidelines for the Matching Grant program encourage resettlement agencies to work with participants with specialized, advanced skills or vocations who have been placed in entry-level work to obtain job upgrades or recertification as appropriate.
However, ORR does not collect any information on the extent that this occurs or results in positive employment outcomes, such as wage increases. While ORR’s program data focus on short-term self-sufficiency, ORR regularly gathers information on the longer-term outcomes of the general refugee population through its Annual Survey of Refugees. ORR conducts its Annual Survey of Refugees to comply with a statutory reporting requirement. It also uses its annual survey to provide Congress and the public information as to whether refugees are successfully resettling in the United States through its programs, in line with the agency’s overall mission to link the populations it serves to the right resources to help them become successfully assimilated members of American society over the longer term. The survey provides information on a sample of refugees each year after resettlement in the United States, up to 5 years. It reports on a range of outcomes, including wage progression, educational attainment, home ownership, and the receipt of public assistance (including non-cash assistance), among other things. Although ORR has typically surveyed the refugee population overall, it has in previous years used its annual survey to conduct supplements on special populations, including Iraqi refugees, Hmong refugees, and the Lost Boys of Sudan. These populations were selected based on ORR leadership’s policy priorities and their inclusion in the survey, through the use of oversampling techniques, was cost-neutral, according to ORR officials. ORR, however, has never used its Annual Survey of Refugees to examine long-term outcomes for SIV holders. HHS, in October 2017, awarded a research contract focused on redesigning its Annual Survey of Refugees, the first such redesign since 1993. The goal of this effort is to better understand medium- to long-term resettlement outcomes for refugees and related populations through improved data collection, but the contract does not mention examining the outcomes of any special populations, such as SIV holders. Agency officials stated that ORR plans to explore potential costs and benefits of including special populations (such as SIV holders) in its survey redesign efforts. However, at the time of our review, ORR officials did not yet know whether such an effort would be cost neutral, as with other prior efforts examining special populations; and if not, whether they could obtain long-term outcome information about SIV holders through future surveys or in other ways. Standards for Internal Control state that management needs quality information to make decisions and achieve its objectives. Accordingly, one of ORR’s policy objectives is to improve data collection in order to make data-driven decisions to better support the populations it serves. Similarly, a primary goal of HHS’ redesign of the Annual Survey of Refugees is to maximize the effectiveness of ORR’s policies and programs in promoting successful integration for its populations. Without longer-term data or other in-depth research, neither ORR nor policymakers have information as to whether SIV holders have progressed beyond the immediate goal of basic self-sufficiency toward improved economic security and cultural integration over the longer term. 
Reported Challenges Include the Capacity of Resettlement Agencies in Certain Locations, Barriers to Skilled Employment, and Housing

SIV holders faced a variety of challenges while resettling in the United States, according to representatives of 13 local resettlement agencies we interviewed and SIV holders who participated in 11 focus groups. Among local resettlement agencies, the two in Northern Virginia reported significant challenges with their capacity to assist the large numbers of SIV holders in the area, while agencies in other locations we visited reported fewer capacity challenges. SIV holders also experienced challenges finding skilled employment, which did not align with their expectations of resettlement in the United States. Securing affordable and suitable housing and the assimilation of female spouses to U.S. culture were also reported as challenges. Officials we interviewed from some resettlement agencies reported taking steps to address some of these issues.

Large Numbers of SIV Holders Created Capacity Challenges for Resettlement Agencies in Northern Virginia

Of the 13 local resettlement agencies in three states at which we interviewed officials, officials from the 2 agencies in Northern Virginia reported the greatest impact from high numbers of SIV holders, which created capacity challenges both at the local resettlement agencies and in the community. The number of SIV holders in the Northern Virginia area increased more than tenfold since fiscal year 2013 and almost doubled from fiscal year 2015 through fiscal year 2016, according to data provided by Virginia's state refugee coordinator. Officials from one of the two local resettlement agencies in Northern Virginia reported that SIV holders also increased as a percentage of their total caseload in recent years and now make up almost 90 percent. In addition to the large numbers of SIV holders scheduled to arrive at local resettlement agencies, many also arrived as walk-ins, which meant the agencies could not predict how many individuals they would need to assist at a given time, according to Virginia's state refugee coordinator. Both of the Northern Virginia local resettlement agencies reported challenges related to capacity. Staff from one agency said that a case manager would normally have three to four families a month to resettle but now might regularly be dealing with five families in a week and, in an extreme case, 70 families in a month. The large influx created great challenges in finding affordable housing for SIV holders, according to staff from the two agencies, especially because the area has some of the highest housing costs in the state (see fig. 6). Additionally, officials from the agencies and Virginia's state refugee coordinator reported that the influx caused significant delays in getting SIV holders needed social services, such as health screenings for children, which in turn resulted in school enrollment delays. Due to the significant increase in SIV holder arrivals, two national resettlement agencies opened temporary offices in the area with PRM's approval and encouragement. SIV holders may have originally been drawn to Northern Virginia by the hope of finding work at nearby federal government offices, according to officials from one national resettlement agency and one local resettlement agency. Local resettlement agency staff added that over time, SIV holders may have moved to the area to be near an established community of SIV holders.
According to PRM data, 83 percent of SIV holder cases in Virginia reported having U.S. ties, although 66 percent of these were ties to friends (not relatives). In all three focus groups conducted in Northern Virginia, SIV holders reported that their U.S. ties were sometimes distant friends or acquaintances who were helpful in the resettlement process, including by providing transportation and helping them navigate life in the United States. Officials from local resettlement agencies in other areas we visited reported fewer capacity challenges. In Sacramento, officials from the three local resettlement agencies and a local service provider reported that they faced some capacity challenges, as their local area had among the highest numbers of SIV holder arrivals in the United States, according to our analysis of PRM data. However, officials we interviewed in Sacramento reported that, so far, they have been able to find ways to manage service provision and address the high caseloads. For instance, to address rising housing costs and difficulties securing affordable housing, officials from one local resettlement agency reported that they started securing housing farther from the central SIV holder community, although this was not always preferred by the SIV holders they resettled. Officials from Sacramento County's health department said that, to address backlogs for health screenings caused by increased SIV holder arrivals, they increased the number of full-time staff. In addition, Sacramento, when compared with Northern Virginia, had more local resettlement agencies to manage arrivals (four versus two), which may have helped local agencies address capacity challenges. In the Dallas/Fort Worth area, officials we interviewed from all six local resettlement agencies reported no significant capacity challenges with respect to resettling SIV holders. These six agencies had fewer SIV holder arrivals, and SIV holders represented a smaller percentage of their total caseload than at other sites we visited. Generally, securing affordable housing that meets requirements was not reported as a major challenge, although housing prices were rising in Dallas, according to local resettlement agency staff and the Dallas/Fort Worth regional designee.

SIV Holders Experienced Barriers to High-Skilled Employment, Which Did Not Align with Their Expectations of Resettlement

According to officials from national and local resettlement agencies, officials from advocacy groups, and SIV holder participants in all 8 focus groups conducted with principal SIV holders, principal SIV holders faced challenges obtaining employment in their previous fields or employment that matched their skill level. These challenges occurred even though they had worked for the U.S. government, tended to have completed secondary education or more, and reported good levels of spoken English. Several factors may account for these challenges, some of which may also apply to skilled refugees or immigrants who are not SIV holders. These include:

Limited opportunities for federal employment in the United States: SIV holders had limited opportunities for federal employment because most positions required U.S. citizenship as well as background investigations or security clearances that are available only to citizens, as we reported in 2010.
In 6 of the 8 focus groups we conducted with principal SIV holders, some participants said that they expected to be able to get jobs similar to the ones they had in Afghanistan or Iraq, such as with the federal government, because they had previously worked for U.S. organizations. Based on the surveys they completed at the end of our focus groups, principal SIV holders reported that they had held a range of jobs in Afghanistan and Iraq, including interpreter, information technology worker, security guard, project manager, and engineer. In one of our focus groups conducted in Northern Virginia, some participants expressed frustration with being ineligible for security clearances for federal employment in the United States because they had been able to obtain clearances to work in Afghanistan, and they now had to wait 5 years to apply for U.S. citizenship, which is required for a U.S. security clearance.

SIV holders' previous work may not help with U.S. employment: Some officials we interviewed from advocacy groups and local resettlement agencies said that while principal SIV holders' ability to speak English with a high level of proficiency enabled them to work for the U.S. government overseas, they may not always have the writing skills needed for professional work in the United States. Officials from a career development organization that works directly with highly skilled immigrants, including SIV holders, to help them re-enter their fields in the United States said that SIV holders may sometimes be hindered in re-entering their original professional fields because, during the time they worked as interpreters or translators or in other positions for the U.S. government, they may not have been actively employed in their original fields.

Barriers to foreign degree and credential recognition: While SIV holders and others may be able to get their foreign degrees or other credentials assessed for U.S. equivalency, these processes can be costly or time consuming, according to officials we interviewed from one national and two local resettlement agencies. Staff from two national resettlement agencies said that degree recognition could be particularly challenging for Afghan SIV holders because the nature of the conflict in Afghanistan made it harder for evaluators to connect with universities there. Other research we reviewed identified the complexities of the licensing process and of available career paths as challenges for highly skilled and educated immigrants in the United States in general.

Officials we interviewed from about half of the local resettlement agencies said that because principal SIV holders were often unable to find employment in their prior professions, many took "survival" or low-skilled jobs in order to cover basic expenses. Officials from local resettlement agencies, as well as participants in our focus groups, reported that common jobs for principal SIV holders included driving for ride-sharing services like Uber and Lyft; airport work, such as luggage handling and food service; security guard positions; low-level information technology work, such as cell phone assembly or temporary technician jobs; and warehouse work, such as inventory or stocking. One principal SIV holder we spoke to in our focus groups said he had worked as a civil engineer for 6 years in Afghanistan but was assembling cell phones in the United States, which was disappointing for him given his years of experience and education.
In almost all of our focus groups with principal SIV holders, participants expressed frustration about the barriers to re-entering their professional fields and the need to take low-skilled jobs. These employment-related challenges did not align with the expectations of principal SIV holders, who thought that their education and prior work experience with the U.S. government would enable them to find skilled work, according to many national and local resettlement agency officials we interviewed and SIV holders who participated in our focus groups. All 3 state refugee coordinators, representatives of 7 of 9 national resettlement agencies, and representatives of 10 of 13 local resettlement agencies we spoke to said that SIV holders tend to have high, unrealistic expectations about employment or about life in general after they arrive. As one principal SIV holder from one of our focus groups in California stated: “I thought I would not need to worry about anything in the U.S. for years and they will take care of me and my family because I worked for their government.” SIV holders in our focus groups also expected more assistance in obtaining high-skilled employment than they generally received. In all 8 of our focus groups conducted with principal SIV holders, some participants expected more assistance getting back into their fields of interest, but said that local resettlement agencies did not always have the technical skills or resources needed to assist them. Similarly, in 4 of the 8 focus groups with principal SIV holders, some participants reported that they expected to receive sufficient government assistance to cover expenses while they adapted to life in the United States, spent time getting retrained or recertified, or searched for employment. Because of these high expectations, the reality of starting over was frustrating or shocking, and made the initial resettlement process challenging, according to both staff from local resettlement agencies and SIV holders from our focus groups. Officials from a number of national and local resettlement agencies said that SIV holders’ expectations tended to be higher than other clients they served, such as refugees. Officials we interviewed from a number of national and local resettlement agencies agreed that they would have liked to do more for SIV holders, given their sacrifice in working for the U.S. government, but that they treat all of their clients in the resettlement program the same, in accordance with PRM’s cooperative agreements. Staff from one national resettlement agency and one local resettlement agency agreed that while they would like to assist SIV holders and other highly-skilled clients to obtain better or more skilled jobs, they did not have the resources or capacity to provide a significant amount of specialized help over a longer term. False expectations about resettlement may have come through word of mouth or other sources, according to resettlement agency staff and SIV holders we interviewed. Some local resettlement agency staff said SIV holders’ high expectations may be due in part to inaccurate information from the SIV holder community through social media or word of mouth. Staff from one local resettlement agency reported that managing SIV holders’ high expectations was time-consuming for staff because there was a “mountain of misinformation” within the community. Principal SIV holders may have also received false hope from their overseas U.S. military colleagues, who may not understand the challenges of resettlement. 
For example, one principal SIV holder we spoke to in our focus groups said that his American co-workers in Afghanistan told him it would be easy to find a good job in the United States because of his skills, but he said finding employment in his previous field was challenging, and he now works in a warehouse packing department. Officials we interviewed described several ways in which resettlement agencies and other organizations have sought to help SIV holders and other skilled clients with career development. These include:

Career development programs: Officials we interviewed at local resettlement agencies in Texas and Virginia said they used ORR funding to support career development programs for SIV holders and other clients. For example, officials from Catholic Charities Dallas said they used ORR's Refugee Social Services funds to offer clients training and certifications in technical occupations, such as clinical nurse or forklift operator. Officials we interviewed from other organizations said they also relied on programming or funds provided under the Workforce Innovation and Opportunity Act (WIOA) for career development programs that could serve SIV holders. For example, officials from the International Rescue Committee's national office said that some of their local offices used WIOA's American Job Center system to help SIV holders and other skilled clients with good English skills access training opportunities or other job search resources. Officials from the Sacramento Employment Training Agency told us they recently used WIOA and other funding to launch an English Language Learner Workforce Navigator pilot that will emphasize assisting SIV holders and refugees because of the large populations of these groups in Sacramento County. The program aims to provide participants with additional entry points to employment and training opportunities, as well as case management and supportive services.

Virginia's STEP Program

The Virginia Refugee Resettlement Program Manual states that the STEP program provides highly skilled participants with specialized services that include professional assessments and assistance in accessing training, certifications, and courses related to prior careers. STEP participants are selected based on an employment assessment of all participants enrolled in Virginia's refugee social service employment program, which is available to those who have had a refugee-eligible status for less than 5 years and are over age 16. Many STEP beneficiaries in Northern Virginia are special immigrant visa (SIV) holders, according to the Virginia State Refugee Coordinator. The STEP program is funded through the Office of Refugee Resettlement's Refugee Social Services and Targeted Assistance funds, and services are provided by local resettlement agencies.

California Law on In-state Tuition for SIV Holders and Refugees

In October 2017, California enacted Assembly Bill 343, which provides certain special immigrant visa (SIV) holders and refugees who are admitted to the United States and settle in California with in-state tuition at California Community Colleges for the minimum time necessary to become a resident. (Students generally need to live in California for more than one year and meet other requirements to qualify for in-state tuition.) The legislature's finding, as stated in the bill, was that access to institutions of higher education will ensure that SIV holders are "able to pursue their educational goals and rebuild and improve their lives and the lives of their families."

Volunteers and referrals for career development: Officials from Catholic Charities Fort Worth, for example, said they recruited retirees who were former professionals to voluntarily work one-on-one with clients on job readiness skills, such as interviewing, resume writing, and general career planning. Officials we interviewed from several national and local resettlement agencies or county service providers also reported that they sometimes refer clients to outside organizations with career development programming for highly skilled immigrants, such as Upwardly Global (see sidebar).

Upwardly Global

Upwardly Global officials describe their work as eliminating employment barriers for special immigrant visa (SIV) holders, immigrants, and refugees who were professionals in their home countries. They work to help these newcomers re-enter their career fields after moving to the United States, according to staff we interviewed and other information. The organization offers career development programming, including training on the U.S. job search, specialized training opportunities, and recertification services. It provides these services to job seekers in person at physical locations (Chicago, New York, San Francisco, and Silver Spring, Maryland), as well as virtually through online services, training modules, or other job resources. Since 2009, the organization has placed 69 individuals with SIVs (of 236 served) into new employment with an average annual salary of about $54,000 at placement, according to data from Upwardly Global. SIV holders were most commonly placed in jobs in technology, engineering, or finance and accounting, according to staff we interviewed.

Housing Issues and Integration of Female Spouses Were Other Challenges

Housing

While housing challenges were common among both SIV holders and refugees, SIV holders tended to have high expectations, according to staff from some local resettlement agencies. Officials from national and local resettlement agencies, as well as SIV holders from our focus groups, described several housing-related challenges:

Local resettlement agencies faced barriers to securing housing: SIV holders, like refugees, lack rental or credit histories and Social Security numbers when they arrive in the United States, which limits the housing options available to the local resettlement agencies that must secure their housing. Local resettlement agency staff said that they had built relationships with landlords who were willing to forgo these requirements; accordingly, some staff reported that SIV holders and refugees were often housed in certain apartment complexes.

SIV holders in our focus groups expected better housing: In 10 of 11 focus groups we conducted, SIV holders reported that the apartments they lived in were sometimes not of high quality, had infestation problems, or raised safety concerns. The SIV holders in our focus groups who had problems with infestation or other issues said that they reported them to the landlord or local resettlement agency and the issues were generally addressed, but not always to their satisfaction. Additionally, according to staff from national and local resettlement agencies, as well as SIV holders in 5 of our 11 focus groups, SIV holders often expected better housing or to be placed in certain locations near the main SIV holder community; however, this was not always possible due to the limited availability of affordable housing. SIV holders in some of our focus groups also reported that they could not afford to move to nicer apartments.
Affordable housing was limited: Housing affordability was also cited as a major challenge, especially by local resettlement agency staff and SIV holder participants in 5 of our focus groups in Northern Virginia and Oakland, California. In Alameda County, where the city of Oakland is located, and in the city of Alexandria, where most SIV holders from our 3 focus groups in Northern Virginia lived, the median rental cost for a one-bedroom apartment in 2016 was about $1,400, according to U.S. Census Bureau data. In Sacramento and Dallas, rising housing costs were cited as growing challenges by staff from some local resettlement agencies and by SIV holders in 3 of our 4 focus groups in those cities. While there are no national guidelines for affordability, officials from one national resettlement agency said that their general rule is to find housing that a family could afford on their expected income while having extra for other expenses.

Some groups we spoke with used strategies to help address housing challenges. For example, Catholic Charities Dallas had a dedicated housing specialist whose primary job was to find and place clients into suitable housing and whose work included conducting outreach to new apartment complexes to ensure that they knew of the agency and the benefits of renting to SIV holder and refugee clients. Officials from Catholic Charities of the East Bay in Oakland described their church sponsorship program, in which a local church is matched with a family to help subsidize rent and support the family in other areas, often for 6 months or more. Also, officials from one advocacy and service organization, No One Left Behind, said they assisted local resettlement agencies with finding housing for SIV holders and had established agreements with local resettlement agencies in some cities, including Rochester, New York, and Pittsburgh, Pennsylvania, to secure housing and provide furnishings for all the SIV holder families they resettled.

Integration of Female Spouses

Officials we interviewed from all 9 national resettlement agencies and 12 of 13 local resettlement agencies reported that female SIV spouses experienced specific barriers to assimilation. These include:

Female SIV spouses experienced cultural adjustment challenges: Officials from national and local resettlement agencies reported that the gap between male principal SIV holders and their spouses in terms of English proficiency, education, work experience, or exposure to American culture could be large and created challenges for women's integration, especially for Afghan women, a few officials noted. Accordingly, male principal SIV holders may be able to integrate more quickly, while female SIV spouses may be less likely to participate in programs, may struggle to integrate, or may feel isolated, according to officials from national and local resettlement agencies. Officials noted that this gap tended to be larger than the gap between refugee husbands and wives, who may be more evenly matched. Our analysis of PRM data confirmed that differences in education and spoken English levels were larger between principal SIV holders and their spouses than between refugee principals and their spouses. According to our analysis of PRM data on SIV spouses, 42 percent reported speaking no English, with those from Afghanistan much less likely to speak any English than those from Iraq. Afghan SIV spouses were also about one-third as likely as Iraqi SIV spouses to have reported completing postsecondary education, based on available data.
In contrast, in our focus groups, some female SIV spouses and some female principal SIV holders had prior work experience and high levels of education. For example, about one-third of the female SIV spouses in our focus groups (9 of 27) reported on their participant surveys that they had prior work experience in their home countries, including as teachers and journalists.

Lack of childcare and limited transportation options: Officials we interviewed from local resettlement agencies and SIV spouses in two of our focus groups said that barriers related to childcare and transportation made it challenging for female SIV spouses to leave the house for classes or employment. For example, in one of our Sacramento focus groups, several female SIV spouses reported that they wanted to take English classes and find work, but the cost of childcare and the lack of public transportation, including school buses for their children, were prohibitive. National and local resettlement agency officials also reported that female SIV spouses may take longer to assimilate and may feel isolated because of families' expectations that female spouses stay home. Officials from one national resettlement agency said that prior to arrival, many SIV holders and their families lived comfortably on one income, and therefore female spouses were often not initially willing to work, which strained finances and made self-sufficiency difficult. In all three of our focus groups with female SIV spouses, participants said that they would like to work but needed to wait until their children were older or needed to learn English first.

Officials we interviewed from several resettlement agencies described their efforts to address some of the challenges related to the integration of female spouses. They include:

Engaged SIV women independently of their spouses: Staff from two local resettlement agencies reported providing intake for men and women separately to ensure that women had a connection to resettlement agency staff independent of their husbands. Other agencies reported that they started making sure that an interpreter was provided for the female spouse, rather than having her husband act as an interpreter, so that everyone received the same information and that information was not filtered through the husband. Staff we spoke to at one local resettlement agency acknowledged that their employment services had previously focused primarily on the male clients in each household, but that they had since created a separate curriculum for women to ensure that all adult clients received job readiness training.

Mitigated barriers that kept female SIV spouses from attending English classes and working: To address childcare and transportation barriers, staff we spoke to at three local resettlement agencies said they offered English language classes at apartment complexes with many SIV holder families, with childcare provided. Several local resettlement agencies also used volunteers to provide in-home English classes and mentoring for SIV women. Officials from two local resettlement agencies said they provided women's empowerment programming to overcome isolation and other issues. For example, officials from International Rescue Committee Dallas told us that they offered a women's empowerment class that met twice a week to discuss varying topics, including public transit, job readiness, and sewing.
Officials from Opening Doors Sacramento, an affiliate of Church World Service, told us that they assist women who are special immigrant visa (SIV) holders or refugees in converting their homes into home-based childcare centers. Opening Doors uses funds from the Office of Refugee Resettlement's microfinance grant and partners with a local social service agency to help the women develop a business plan and get licensed. As of April 2017, over 50 women had received their licenses through this program, many of them from Afghanistan, according to officials from Opening Doors.

State's PRM has taken several steps to address the capacity challenges reported by resettlement agencies in Northern Virginia. First, in May 2017, PRM placed limitations on SIV holders' resettlement in that area in response to concerns raised by local resettlement agencies and the state refugee coordinator, and in consultation with national resettlement agencies, advocacy groups, and ORR. The policy generally restricts SIV holders from being placed in Northern Virginia unless they have close family ties there. Second, in June 2017, PRM issued another new policy that gives SIV holders more resettlement options. Under this new policy, SIV holders can choose to be placed in one of 25 cities without having a U.S. tie (see table 2). This option did not exist previously, as SIV holders, like refugees, were typically placed near a specified U.S. tie or in a location primarily determined by resettlement agencies. According to PRM officials, by providing a choice to SIV holders, they aimed to increase the likelihood of successful resettlement in these alternative areas and to mitigate secondary migration (when people leave their initial placement to move to desired locations). PRM officials said that they considered various factors in developing the list of 25 cities, including the presence of existing SIV communities, sufficient capacity among local resettlement agencies to resettle new arrivals, and housing availability and employment opportunities based on information from local resettlement agencies. In finalizing its list of cities, PRM also sought input from national resettlement agencies, advocacy organizations, and ORR. To inform SIV holders about resettlement prior to arrival and to better manage their expectations, PRM has developed informational materials specifically for SIV holders. All individuals served through the R&P program must receive cultural orientation training once they arrive in the United States, according to R&P guidelines, and many refugees also take this training overseas. In contrast, SIV holders generally do not take overseas cultural orientation training because they typically receive their visas in locations where there are no facilities to provide such training. PRM officials said that providing special cultural orientation training sessions for SIV holders, such as at the U.S. embassy in Kabul, would be logistically difficult and could create additional security risks for SIV holders. In lieu of overseas cultural orientation training, PRM provides a Dari-translated version of its manual on U.S. resettlement, Welcome to the United States: A Guidebook for Refugees, for distribution by the U.S. embassy in Kabul.
It has also developed several other types of informational materials specifically for SIV holders, including documents such as "19 Things You Need to Know About Resettling in the United States" and "Frequently Asked Questions (FAQs) About Resettlement Benefits for Iraqi and Afghan Recipients of Special Immigrant Visas," as well as short videos aimed specifically at SIV holders (see sidebar). SIV holders can access informational materials on State's Refugee Processing Center website, and links to this website are included at the end of emails from PRM staff when communicating with SIV holders, according to PRM officials. Additionally, PRM officials noted that they have also worked with advocacy groups that may be communicating with SIV holders while overseas to disseminate information, such as on the challenges of resettling in high cost-of-living areas. Officials said that their efforts to inform SIV holders about resettlement before they come to the United States have been ongoing for several years. However, officials we interviewed from many national and local resettlement agencies, as well as those from some state refugee coordinator offices and advocacy groups, said that State could do more to inform SIV holders about resettlement while they are still overseas, given their often false expectations about resettlement. For instance, officials from a number of these entities said that PRM's informational materials for SIV holders are general and lack specific details or more in-depth information on issues such as housing affordability, employment, or the type of government assistance they will or are likely to receive. This type of information could give SIV holders a better sense of what to expect when they resettle in the United States, according to officials. Based on our review, we found that while the materials discuss resettlement challenges generally, such as difficulties associated with relocating in certain high-cost areas or the likelihood that SIV holders will need to take an entry-level job instead of one in their professional field, they do not contain specific details, examples, or links to specific information. For example, the materials do not provide information on area housing costs in popular resettlement areas or on common jobs or average wages among SIV holders (or refugees). They provide minimal information on the amounts people may receive in government assistance or the extent to which they can expect assistance with such things as longer-term training or education. PRM's new list of 25 cities, for instance, includes a link to each city's municipal government website, but such websites are unlikely to provide easy access to information, such as area housing costs, that could help inform people's resettlement choices. PRM officials stated that they are wary of providing specific details because these may vary for SIV families depending on the state where they reside, the assistance programs in which they participate, their particular household situation, or other factors. Such differences can be a source of misinformation among those in the SIV community, according to PRM officials, as well as some resettlement agencies we interviewed. Accordingly, officials noted that they would not want to be in a position of having to defend information that may be inaccurate or not applicable to SIV holders.
Officials we interviewed from two resettlement agencies also noted that it could be challenging to provide specific details, such as on government benefit amounts, as these may vary greatly across households. Yet officials we interviewed from other resettlement agencies and advocacy groups noted that illustrative details, examples, or more in-depth discussion of key issues would give SIV holders a better understanding of what they may experience and inform their decision-making. Providing web links to relevant information or additional information from official sources may also help SIV holders gather information from more credible sources and counter some of the misinformation they may receive through word of mouth, according to a state refugee coordinator and officials at two local resettlement agencies we interviewed. Similarly, participants in 5 of our 11 focus groups said that getting additional cultural orientation or more information about life in the United States, such as from State, would have been useful. Some said they did not always get an accurate picture of resettlement from their U.S. ties. One principal SIV holder we spoke to said that getting additional information about resettlement while still overseas would have been useful for SIV holders, since it can be difficult to learn all of this information once they have arrived in the United States, as they are in "culture shock" and "overwhelmed" by all they have to do. In contrast, participants in three focus groups said that access to more resettlement information overseas would not have been useful because people's primary focus at that time is simply getting their visa and leaving the country. In addition to the lack of specificity in the information provided to prospective SIV holders, some of State's efforts to disseminate existing information are also incomplete. For instance, we learned of some instances of miscommunication between PRM and Consular Affairs regarding information provided to SIV holders at embassies. While PRM officials told us they understood that the embassies in Kabul and Baghdad provided SIV holders with hard copies of Welcome to the United States and played the informational videos for SIV holders on a loop, officials from Consular Affairs told us that the Baghdad embassy no longer provided hard copies of the guide due to costs and that neither embassy played the videos due to space and other issues. Officials we interviewed from a few resettlement agencies and advocacy groups suggested that there may be additional opportunities for State to disseminate information, such as making the "19 Things to Know" document available at more touch points. Links to such SIV-specific informational documents are directly available on State's Refugee Processing Center website and through the form SIV holders complete to elect to receive resettlement benefits. However, they are not directly accessible on State's Consular Affairs websites that describe the steps to apply for an SIV. Further, these SIV-specific documents are also not offered at embassies or mailed to SIV holders in their visa packages, according to Consular Affairs officials. Moreover, in several of our focus groups, some participants stated that they did not remember receiving any or much information on resettlement in the United States while in their home country, including information aimed specifically at SIV holders.
Federal internal controls state that management should externally communicate necessary quality information to achieve objectives, considering audience, nature of information, availability of information, and costs in doing so. Because State’s current information to SIV holders overseas is general and the agency may miss opportunities to disseminate or otherwise make individuals aware of the information, SIV holders may be hampered in their ability to make well-informed decisions on where to resettle in the United States, as well as in their ability to prepare and adapt to potential challenges as quickly as possible upon arrival. ORR’s New Grant Provides More Targeted Assistance on Working with Skilled Immigrants Although ORR does not provide specific support or assistance for SIV holders, ORR’s funding and technical assistance for refugees and other eligible clients can be used to support programming for highly skilled clients, including SIV holders. For example, states can use Refugee Social Services and Targeted Assistance Grant funds to develop specialized programs aimed at higher skilled immigrants, if they choose. Among our selected states, Virginia used these funds to support its career development program. ORR also uses a technical assistance provider, Higher, to provide support related to employment and self- sufficiency. Higher makes various employment resources available that resettlement agencies or other service providers can use, including those that can help serve highly skilled clients, such as webinars or postings on educational or career development opportunities. Higher has also developed online training modules, recertification guides, and other resources that refugees, SIV holders, or other clients can directly access through its website, in addition to posting links to other providers’ services, such as those from Upwardly Global, which are directly accessible by clients. In addition, in June 2017, ORR posted a new $3 million competitive grant announcement for the Refugee Career Pathways program that aims to address the challenges experienced by highly skilled refugees, SIV holders, or other eligible populations in moving beyond low-skilled work into professional fields with career advancement opportunities (see text box). The grant announcement states that this program will utilize a “career pathways” approach, as defined by WIOA, which is a combination of training, education, and services to help people obtain short-term and long-term career opportunities in specific fields that align with state or regional economic needs. Possible types of assistance that could be provided to participants include case management, training and technical assistance, mentoring, or financial assistance for educational or certification programs. This ORR grant aligns with the desire for more targeted assistance and information for skilled immigrants, such SIV holders, which was expressed by officials we interviewed at a number of national and local resettlement agencies and SIV holders in our focus groups. Goals of Office of Refugee Resettlement’s new Refugee Career Pathways Program “The Refugee Career Pathways (RCP) program is a new program established by the Office of Refugee Resettlement (ORR) to address the obstacles faced by resettled refugees in initiating professional careers in their new communities. While many refugees have previous professional experience in their country of origin, they often lack the degrees, certifications, and knowledge specific to the U.S. 
job environment needed to attain professional employment after resettlement. Even highly-skilled refugees are often required to take low-skilled jobs with little opportunity for advancement or skill development. This in turn limits refugees’ potential to achieve economic self-sufficiency and to benefit their communities by making full use of the skills and experience they bring to their new home. The goal of the RCP program is to support refugees in attaining the knowledge and resources needed to begin a professional career in their new community. Existing job training programs for refugees often focus on supporting initial job placement, which may not be adequate to secure long-term self-sufficiency. The RCP program will assist refugees to begin professional careers that provide not only a salary but also greater job security and the possibility of career advancement.” SIV holders resettle in the United States in most cases to escape endangerment—a result of their work for the U.S. government in Iraq or Afghanistan. After their resettlement, however, no outcome information exists beyond whether SIV holders are minimally self-sufficient within their first 6 months. SIV holders are a small group compared to the larger, general population of refugees. Yet ORR faced and overcame similar constraints in conducting studies on other special populations in the past, such as the Lost Boys of Sudan, responding to the focus and concern of policymakers about those populations at the time. Although ORR could leverage its existing methodologies to examine SIV holders’ longer-term outcomes in further research, similar to what it did for other groups, it has not yet fully explored the feasibility of doing so or other possibilities to obtain information about the SIV holder population. ORR’s new survey redesign efforts, aimed at improving its understanding of the long-term outcomes of refugees and related populations, provide the agency an opportunity to do this. Until then, policymakers have no information as to whether SIV holders—a population of special interest and one with an increasing presence in the federal refugee resettlement programs—are successfully resettling in the United States. While many of the resettlement challenges related to employment, housing, or cultural integration are outside of State’s control, they may be exacerbated by SIV holders’ own high expectations about resettlement. These expectations are often cultivated before they arrive from overseas. State’s efforts to inform SIV holders about resettlement have been ongoing for years and, to some extent, help overcome the logistical difficulties of not being able to provide SIV holders with cultural orientation training before they come. However, the persistent gap between SIV holders’ expectations and their experiences, as described by many of the SIV holders and officials we interviewed from national and local resettlement agencies and advocacy groups, and other stakeholders, suggests that these efforts are falling short. While State has made efforts to disseminate the information through various touchpoints, there are missed opportunities for distribution, such as at embassies. When coupled with the lack of examples or details in State’s informational materials for SIV holders, these missed opportunities may contribute to SIV holders’ ongoing false expectations of resettlement. 
Finding additional ways to deliver information to SIV holders about the realities of resettlement could help them make more informed decisions about where they choose to resettle—decisions which may be predicated on their ability to access additional information about important factors such as employment opportunities or area housing costs. Such information, while not a panacea for the real resettlement challenges SIV holders face, can at least help them make decisions that better align their personal situation with the economic realities of resettlement in the United States. Additional information could also mitigate SIV holders’ surprise and frustration once they arrive, better enable them to quickly orient to their new lives, and help refugee agencies facilitate that transition. Recommendations for Executive Action We are making two recommendations, including one to ORR and one to PRM: 1. The Director of the Office of Refugee Resettlement (ORR) should consider including SIV holders in its Annual Survey of Refugees. (Recommendation 1) 2. The Assistant Secretary of the Bureau of Population, Refugees, and Migration (PRM) should identify and implement additional ways to deliver information to prospective SIV holders about resettlement to assist with adjustment and expectations after arrival in the United States, including providing more detailed or in-depth information on key issues. PRM, working with Consular Affairs as needed, should also identify and address potential gaps in disseminating relevant information to SIV holders, such as at embassies. (Recommendation 2) Agency Comments and Our Evaluation We provided a draft of our report to HHS and State for review and comment. Both agencies agreed with our recommendations. In its response, HHS stated that while it did not believe including SIV holders in the Annual Survey of Refugees was feasible under the current contract due to costs, it would continue to look for cost-effective ways to include SIV holders in its survey redesign efforts and in future contracts. HHS stated that it would also explore ways to capture more information on SIV holders through its administrative program data, including on employment outcomes. State, in its response, said that PRM has developed new guidance for the Refugee Processing Center’s SIV unit regarding the distribution of additional information to SIV holders and that staff from this unit plan to include additional links to cultural orientation information in all their correspondence with SIV applicants. Additionally, State noted that Embassy Baghdad will distribute copies of the Welcome Guide to Iraqi SIV holders and that PRM will work with Consular Affairs to identify other ways to provide information to SIV applicants. HHS and State also provided technical comments, which we incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, Secretaries of Health and Human Services and State, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or larink@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Key contributors to this report are listed in appendix IV. Appendix I: Additional Methodological Details This appendix provides additional information on our methodologies for our analysis of data from the Department of State (State) and on our focus groups with special immigrant visa (SIV) holders. Analysis of State Data We analyzed individual record-level data from State’s Bureau of Population, Refugees, and Migration (PRM) for fiscal year 2011 through the first quarter of fiscal year 2017 (i.e., October 2010 through December 2016) that provide information on recipients of State’s resettlement program, the Reception and Placement (R&P) program. Fiscal year 2011 was the first year of the R&P program’s current reporting requirements, and December 2016 was the most current data available at the time of our review. Overall, this timeframe accounted for about 40,000 individual SIV holders (principal SIV holders and their family members) and 14,000 cases, or households, before we excluded instances of missing data. In our analysis and reported results, we excluded instances of missing data, such as when SIV holders migrated from their initial placements before resettlement agencies could collect 90-day outcome information, or, in the case of employment rates, when principal SIV holders were considered exempt from seeking employment for various reasons. This resulted in about 38,000 individuals and 13,000 cases. The R&P information we examined included data on recipients’ employment status and other household income sources at 90 days after arrival, such as from earnings or common cash assistance programs. Most of the R&P data are provided as “yes” or “no” responses, such as whether an individual is employed or whether the household has income that exceeds expenses. R&P data are collected by national and local resettlement agencies on all individuals served through the R&P program, and reported to PRM at one point in time—90 days after individuals’ arrival in the United States. Per R&P reporting requirements, some data are collected at the case or household level, such as whether the household has sufficient income to meet expenses, while other data, such as employment status, are collected on each individual in a case. Additionally, we reviewed PRM data on recipients’ background characteristics, such as education level and spoken English ability, collected by PRM during the application and screening process prior to an individual’s resettlement in the United States. PRM tracks information on all individuals applying to the U.S. Refugee Admissions Program, including those with SIVs, using its data repository known as the Worldwide Refugee Admissions Processing System. Some of the background information on SIV holders, including education level and spoken English level, is self-reported and provided on SIV application forms. PRM collected both the background and the R&P data in a way that allowed SIV holders to be examined separately from resettled refugees. We also conducted analyses with the same variables for resettled refugees from the same general timeframe. We reviewed the data from PRM for missing data and internal inconsistencies, and interviewed PRM officials knowledgeable about the data to resolve identified issues. We determined that the data were sufficiently reliable for our purposes of reporting employment rates, income sources, and receipt of services at 90 days, as well as broad categories of education and spoken English levels, for SIV holders and, in some cases, refugees. 
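To illustrate the structure of this analysis, the following is a minimal sketch using hypothetical field names (case_id, employed_90d, employment_exempt, outcome_missing) as stand-ins for the R&P data elements described above; it is not the code we used, and the toy records are illustrative only.

# Illustrative sketch only; field names are hypothetical stand-ins for R&P data elements.
import pandas as pd

def employment_rate_at_90_days(records: pd.DataFrame) -> float:
    """Share of principal SIV holders employed at 90 days, excluding records
    with missing outcome information or employment exemptions."""
    analyzable = records[(~records["outcome_missing"]) & (~records["employment_exempt"])]
    return analyzable["employed_90d"].mean()

# Toy data mirroring the yes/no structure of the R&P reporting.
toy = pd.DataFrame({
    "case_id": [1, 2, 3, 4],
    "employed_90d": [True, False, True, False],
    "employment_exempt": [False, False, False, True],
    "outcome_missing": [False, False, True, False],
})
print(employment_rate_at_90_days(toy))  # 0.5: one employed of the two analyzable records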
Focus Groups with SIV Holders In each of our selected states (California, Texas, and Virginia), we conducted three to four focus groups with principal SIV holders and SIV spouses to better understand resettlement factors or challenges from their perspectives. In total, we conducted 11 focus groups and spoke with 86 participants from both Afghanistan and Iraq. Specifically, we conducted eight focus groups with all or mostly principal SIV holders. (Participants in seven of these groups were all male principal SIV holders; participants in one group included four male principal SIV holders and two female spouses.) We also conducted three focus groups with primarily female spouses. (All participants in these three groups were females; however, in two groups, one participant was the principal SIV holder.) To supplement the information we gathered through our focus group discussions, we also distributed short anonymous surveys to participants at the end of each session. Among other basic questions, we asked participants whether they were currently employed and, if so, the type of work they did. We also asked principal SIV holders what type of work they did for the U.S. government, and SIV spouses whether they worked in their home country and the type of work. Almost all participants submitted a survey (84 of 86). However, some participants (particularly SIV spouses) appeared to have difficulty understanding the questions, although we had translation assistance during our focus groups. In our report, we discussed survey findings on principal SIV holders’ prior work for the U.S. government and the prevalence of prior work among SIV spouses. Overall, these responses had few blanks, and the responses themselves seemed to indicate general understanding of the questions. The information gathered from interviews and focus groups from our site visits is not generalizable and is meant to provide illustrative examples. Appendix II: Comments from the Department of Health and Human Services Appendix III: Comments from the Department of State Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact above, Janet Mascia (Assistant Director), Theresa Lo (Analyst-in-Charge), Cristina Norland, and Rachel Pittenger made key contributions to this report. Also contributing to this report were James Bennett, Kathryn Bernet, Pamela Davidson, Holly Dye, Sara Edmondson, Cynthia Grant, Marissa Jones, James Rebbe, and Rosemary Torres Lerma.
Why GAO Did This Study Certain Afghan or Iraqi nationals who worked for the U.S. government and may have experienced serious threats due to this work may qualify for an SIV. An SIV allows them and eligible family members to resettle in the United States, and since 2008 over 60,000 SIV holders (principal holder and family members) have done so. Upon arrival, they are eligible for resettlement assistance from State and HHS. GAO was asked to review SIV holders' resettlement outcomes and challenges. This report examines (1) available data on SIV holders' employment and other outcomes, (2) challenges affecting their resettlement, and (3) federal efforts to help address challenges. GAO analyzed the most recent federal data (State: 2010-2016; and HHS: 2016) on SIV holders' outcomes; interviewed officials from nine national resettlement agencies; and visited three states (CA, TX, and VA) where over half of SIV holders resettled. In these states, GAO interviewed the states' refugee coordinators and, for two local areas with relatively high levels of SIV resettlement, interviewed local resettlement agency officials and conducted focus groups with SIV holders. GAO also reviewed relevant federal laws and policies and interviewed federal officials. What GAO Found Since fiscal year 2011, about 13,000 Afghan and Iraqi nationals (excluding family members) have resettled in the United States under special immigrant visas (SIV), but limited data on their outcomes are available from the Department of State (State) and the Department of Health and Human Services (HHS). State collects data on SIV holders' resettlement outcomes once—90 days after they arrive. GAO's analysis of State's data from October 2010 through December 2016 showed that the majority of principal SIV holders—those who worked for the U.S. government—were unemployed at 90 days, including those reporting high levels of education and spoken English. Separately, HHS collects data on about one-third of resettled SIV holders (those in one HHS grant program). According to HHS's fiscal year 2016 data (the only year available), most of these SIV holders were employed and not receiving cash assistance 6 months after arrival; however, these data are not representative of all SIV holders. GAO did not identify any outcome data for SIV holders beyond 6 months after arrival. HHS annually surveys refugees up to 5 years after arrival, but does not do so for SIV holders. However, it has occasionally used its survey of refugees to analyze selected groups at no additional reported cost. Such analysis could provide valuable information on whether SIV holders have achieved longer-term assimilation, consistent with HHS' mission and program goals. Stakeholders GAO interviewed reported several resettlement challenges, including capacity issues in handling large numbers of SIV holders, difficulties finding skilled employment, and SIV holders' high expectations. Officials from local resettlement agencies in Northern Virginia reported capacity challenges for their agencies and the community due to the large increase of SIV holders. In almost all of GAO's focus groups with principal SIV holders, participants expressed frustration at the need to take low-skilled jobs because they expected that their education and prior work experience would lead to skilled work. State and HHS have taken steps to address some resettlement challenges. 
For example, in 2017 State placed restrictions on where SIV holders could resettle and HHS announced a new grant to support career development programs for SIV holders, refugees, and others. In addition, State provides information to prospective SIV holders about resettlement. However, the information is general, and lacks detail on key issues such as housing affordability, employment, and available government assistance. Providing such specifics could lead to more informed decisions by SIV holders on where to resettle and help them more quickly adapt to potential challenges once in the United States. What GAO Recommends GAO recommends that 1) HHS consider including SIV holders in its annual survey on refugees' longer-term outcomes, and that 2) State provide more detailed information on key issues to prospective SIV holders. Both agencies agreed with our recommendations.
Background WMATA operates the nation’s second largest heavy rail transit system (Metrorail) and fifth largest bus system (Metrobus), accounting for about 1.1 million passenger trips per weekday. Metrorail runs 6 train lines connecting the District of Columbia to various locations in Maryland and Virginia. A portion of the latest addition, the Silver Line, was opened in 2014. WMATA was created in 1967 through an interstate compact—matching legislation passed by the District of Columbia, the state of Maryland, and the Commonwealth of Virginia, and then ratified by Congress—to plan, develop, finance, and operate a regional transportation system in the National Capital area. A board of eight voting directors and eight alternate directors governs WMATA. The directors are appointed by the District of Columbia, Virginia, Maryland, and the federal government, with each appointing two voting and two alternate directors. Operating Revenues WMATA’s operating revenues from rider fares, parking fees, and paid advertisements do not cover its annual costs, so it relies on year-to-year funding commitments from Maryland, Virginia, and the District of Columbia, and various forms of federal funding to cover gaps in its capital and operating budgets. WMATA’s operating budget covers personnel costs and contracted services; in fiscal year 2017 about 75 percent of its $1.8 billion operating budget went to personnel costs. WMATA’s capital budget, which covers short-term maintenance and long-term capital projects, totaled $1.2 billion in fiscal year 2017. In 2018, Maryland, Virginia, and the District of Columbia each passed legislation to provide additional recurring annual funding to WMATA generally for capital purposes, totaling $500 million annually across the 3 jurisdictions. Service Levels and Ridership In recent years, WMATA added new rail service while also experiencing declines in ridership. From fiscal years 2006 through 2017, WMATA increased Metrorail service about 23 percent as measured in total railcar revenue service miles, or the miles traveled when the vehicle is in revenue service; WMATA increased Metrobus service slightly, by about 4 percent. Over this same time, ridership declined—by about 17 percent on Metrorail and 12 percent on Metrobus. (See fig. 1). WMATA attributes this ridership decline to multiple factors, including growth in telecommuting, the expansion of alternative transportation options, and a decline in service quality and reliability. In addition, between June 2016 and June 2017, WMATA completed SafeTrack, a large-scale accelerated maintenance program that suspended service on portions of Metrorail, resulting in delays and additional ridership declines. Workforce and Employee Groups WMATA’s workforce is composed of bus and rail operations staff, as well as managers, administrators, law enforcement, and others. In September 2017, after reducing its workforce by eliminating 6 percent of its 13,000 positions, WMATA reported that it had 12,217 employee positions across 6 different employee groups, of which 11,341 were filled. Most WMATA employees—83 percent—are represented by one of WMATA’s five unions, depending on the employees’ positions. The Amalgamated Transit Union Local 689 is the largest union, representing 67 percent of WMATA employees (see table 1). Each union negotiates its own terms on wages, salaries, hours, working conditions, and pensions or retirement, and generally documents these terms in its collective bargaining agreement. 
Employee and Retiree Benefits WMATA provides a defined benefit pension for almost all of its represented employees and for non-represented employees hired before January 1, 1999. In these pension plans, the benefit a retiree receives is generally based on the retiree’s age and/or years of service and compensation, which may include overtime wages for represented employees. WMATA’s annual contributions to its pension plans are invested in portfolios that include stocks, bonds, and real estate to fund future pension benefits. The Local 689 pension plan is WMATA’s largest, and covered 80 percent of all WMATA pension plan members in fiscal year 2017. Each of the five pension plans is governed by a separate group of trustees responsible for administering the plan. The trustees are composed of a mix of members selected by WMATA and by the respective union or employee group. For example, the trustees for the Local 689 plan include three appointed by WMATA and three by Local 689. WMATA makes payments for four defined benefit retiree health plans. These plans generally cover Local 689 employees, Local 2 employees, Metro Transit Police, and Metro Special Police, in addition to non-represented employees. According to WMATA officials, WMATA’s four retiree health plans are “pay-as-you-go,” meaning WMATA pays for benefits as they become due each year, and funds necessary for future benefits are not accumulated. Increases in WMATA’s Workforce Costs since Fiscal Year 2006 Are Largely Driven by the Cost of Benefits, with Pensions Posing Particular Risk WMATA’s total workforce costs—composed of wages, salaries, and benefits for current and retired employees—increased modestly in inflation-adjusted dollars (on average by about 3 percent annually) from fiscal years 2006 through 2017. This modest increase reflected small increases in wage and salary costs and substantial increases in employee and retiree benefit costs. In particular, WMATA’s required annual contributions to its pension plans increased by an annual average of almost 19 percent and were WMATA’s fastest growing workforce cost component from fiscal years 2006 through 2017. The possibility of further increases in the costs of WMATA’s pension plans poses significant risk to the agency’s financial operations, yet WMATA has not fully assessed these risks. Since 2006, WMATA Wages and Salaries Increased Modestly While Contract Costs More Than Doubled WMATA’s total workforce costs increased by about 3 percent annually on average between fiscal years 2006 and 2017 in inflation-adjusted fiscal year 2017 dollars, with wages and salaries increasing an average 1.1 percent per year, from $645 million in 2006 to $728 million in 2017. These costs grew at a slower rate than the costs of contracted services (7.3 percent annually on average) and employee and retiree benefits (5.6 percent annually on average), as discussed below (see table 2). The total number of employees WMATA budgeted for each year (authorized positions) grew slightly faster than wages and salaries—about 2 percent per year on average—increasing from 10,451 in 2006 to 13,032 in 2017, with similar growth in the number of occupied positions. Wages and salaries increased at a slower rate than WMATA’s workforce in part because, according to WMATA officials, non-union employees did not receive a salary increase for several of these years. 
In contrast, employees represented by one of WMATA’s five unions generally received annual wage and salary increases, as laid out in their collective bargaining agreements. WMATA officials also estimated that since 2008, between about 10 and 14 percent of its annual wage and salary costs were composed of operating overtime. WMATA officials stated that operating overtime is used to fill gaps in schedules or staffing in positions that have high vacancy rates, such as Metro Transit Police. While wage and salary costs increased modestly, the cost of WMATA’s contracted services more than doubled from fiscal years 2006 through 2017. During this time, contracted services costs increased more than 7 percent per year on average, from $123 million in fiscal year 2006 to $267 million in fiscal year 2017. WMATA officials reported large increases during this period in repair and maintenance, custodial services, professional and technical services such as attorneys and management consultants, and WMATA’s MetroAccess contract that provides paratransit door-to-door service for riders unable to use bus or rail. WMATA officials attributed these increases to several factors. First, they stated that paratransit service ridership and the contractor cost per trip have increased. The officials estimated that providing paratransit service currently costs WMATA about $50 per passenger trip. Second, WMATA officials said adding five new Silver Line stations resulted in increases in contract costs because some of the services already provided by contractors, including custodial services and some track work, were extended to the new stations. Third, WMATA officials said they have been using more contractors in recent years to control costs and improve efficiency. For example, they stated they may use contracts to address problems such as a backlog of track inspections because they can procure contractors to complete the work more quickly than they could with current WMATA staff who would have to be pulled away from other duties or new WMATA staff who would have to be hired and trained. WMATA’s Employee and Retiree Benefit Costs Increased Substantially since 2006, but WMATA Has Not Fully Assessed Risks Posed by Its Pension Plans From fiscal years 2006 through 2017, WMATA’s annual costs for its employee and retiree benefits increased substantially in inflation-adjusted fiscal year 2017 dollars. Employee and retiree benefit costs—which include benefits for current employees, such as health care and vacation, and benefits for retired employees such as pensions and health care—increased at an average annual rate of 5.6 percent, from $327 million to $593 million (see table 2 above). These cost increases are reflective of substantial increases in the amount WMATA contributed to its pension plans. These costs increased by an average of 18.9 percent annually, from $25 million in fiscal year 2006 to $168 million in fiscal year 2017. WMATA payments for retiree health benefits increased less dramatically, on average 2.7 percent per year from fiscal years 2008 through 2017 ($39 million to $49 million). (See fig. 2). WMATA officials attributed increases in employee and retiree benefit contributions to multiple factors, including market losses to pension assets incurred after the 2007–2009 financial crisis and an increase in the cost of providing healthcare benefits. Despite paying more for its retiree pension and health plans since 2006, in fiscal year 2017 WMATA had large unfunded retiree health and pension liabilities. 
Unfunded liabilities are the estimated amount of additional assets, beyond any existing plan assets, that would be required to fully fund accrued liabilities of a plan. The assets of WMATA’s pensions largely consist of investments in stocks, bonds, and real estate. Unfunded liabilities are similar to other kinds of debt because they constitute a promise to make a future payment or provide a benefit. According to WMATA’s fiscal year 2017 Comprehensive Annual Financial Report, WMATA’s pension plans were underfunded by $1.1 billion for fiscal year 2017, of which $814 million was attributed to WMATA’s largest pension plan—Local 689. In contrast, WMATA’s four retiree health plans were pay-as-you-go during fiscal years 2006 through 2017, meaning WMATA’s annual plan contributions were benefit payments for retirees each year in that period. Since WMATA did not make contributions to prefund retiree health benefits, funds necessary for future benefits were not accumulated as assets. As a result, the entire accrued liability was an unfunded liability, and WMATA’s four retiree health plans were unfunded by over $1.8 billion in fiscal year 2017. WMATA officials said they have made several changes to reduce unfunded pension and retiree health liabilities through negotiations with WMATA’s unions. For example, in 2014, Local 689 employees began contributing a portion of their compensation (1 percent) to the Local 689 pension plan. This amount increased to 3 percent in 2015. Local 689 employee contributions reported for fiscal year 2017 were about $22 million, which was about 17 percent of the $127.5 million reported for WMATA’s contribution to their pension plan for that year. In addition, according to WMATA’s fiscal year 2017 Comprehensive Annual Financial Report, non-represented and Local 2 employees hired on or after January 1, 1999 are not eligible for the defined benefit pension plan. WMATA also reported that Local 689 and Local 2 employees hired on or after January 1, 2010, Metro Special Police hired after February 25, 2016, and non-represented employees hired after January 1, 2017 are not eligible for retiree health benefits. Most recently, WMATA created a trust to fund WMATA’s retiree health benefits and invested $3 million in the trust. WMATA’s pension plans, due to their relative size and maturity and investment decisions, pose a particular risk to WMATA’s financial operations: Relative size and maturity: The size of WMATA’s pension plans and the overall maturity of the plans’ participants pose a combined financial risk to WMATA. WMATA’s pension plans’ assets and liabilities are large relative to its business operations. For example, in fiscal year 2017, WMATA’s pension assets ($3.6 billion) were about 5 times more, and its pension liabilities ($4.7 billion) about 6.5 times more than its annual wages and salaries ($728 million). Because of their relative size, changes in the value of these assets or liabilities—for example, as a result of underperforming investments or revisions to actuarial assumptions—could significantly affect WMATA’s operations. In addition, WMATA’s pension plans are considered “mature” by actuarial measures, meaning, for example, that they have a high proportion of retirees compared to active members. 
A 2017 WMATA Board of Directors Pension Subcommittee report indicated that if WMATA’s assumed rate of return across all five plans decreased from 7.66 percent to 7 percent, WMATA’s required annual pension contribution would increase $42 million, a 26 percent increase, from 22 percent of wages and salaries ($160.7 million) to about 28 percent of wages and salaries ($203 million). Investment decisions: WMATA’s pension plans assume higher rates of return than state and local pension plans generally do, according to a recent National Association of State Retirement Administrators report. For the 2017 plan year, WMATA’s largest pension plan had an assumed rate of return of 7.85 percent per year, and the weighted average assumed rate of return for WMATA’s five plans combined was 7.66 percent. The average assumed rate of return among the largest state and local government plans was 7.52 percent in 2017, and dropped to a planned 7.36 percent for fiscal year 2018. If WMATA’s pension plan assets return significantly less than assumed, WMATA’s unfunded liabilities will be higher than anticipated, potentially resulting in a spike in required contributions, as occurred in the years following the 2007-2009 financial crisis (see fig. 2 above). WMATA’s pension plans are largely invested in the stock market, which also poses risk. For example, according to a November 2017 report to WMATA’s Board of Directors Pension Subcommittee, 69 percent of WMATA’s plan assets across all five pension plans were invested in the stock market, and only 18 percent in fixed income or cash. Investing in assets such as stocks may increase expected investment returns, but it also increases risk because stock returns are more volatile than investments in high quality bonds that provide a more stable rate of return. In addition, with its mature plans, WMATA faces a shorter time horizon before benefits for its retirees and older workers will become due, leaving less time to recover from investment shortfalls. According to literature on challenges facing U.S. pension plans, plans should take on less risk as they become more mature. This is because investment losses—and corresponding required increases in contributions—can potentially be a high percentage of wage and salary costs, with less time to make adjustments. As described above, WMATA’s pension plans are considered mature, yet they still have a high percentage allocated to risky assets. Although WMATA recently hired a consultant to complete a high-level review of its pensions, it has not fully assessed the risks of its five pension plans to the agency’s financial operations. In 2016 and 2017 WMATA hired a consultant to provide an overview of its five pension plans, including reviewing the plans’ funding strategies and performance. However, the stated purpose of these reports did not include an assessment of risk, and the reports included only limited analysis of the various risks facing WMATA from the plans, for example forecasting WMATA’s pension contributions over the next 10 years, but only under one scenario. In addition, WMATA provided us with analyses conducted by an actuary for each of its five pension plans, which included some limited risk analysis for three of the five pension plans, and no risk analysis for the other two plans, including the Local 689 plan—WMATA’s largest. Neither WMATA nor the trustees for the Local 689 plan have fully assessed the risks of that plan. 
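As a rough check using only the rounded figures cited in this report, the reported unfunded liability and the subcommittee's sensitivity estimate are internally consistent:

\[
\text{unfunded liability} \approx \$4.7\ \text{billion} - \$3.6\ \text{billion} = \$1.1\ \text{billion}
\]
\[
\frac{\$203\ \text{million} - \$160.7\ \text{million}}{\$160.7\ \text{million}} \approx 26\%, \qquad
\frac{\$160.7\ \text{million}}{\$728\ \text{million}} \approx 22\%, \qquad
\frac{\$203\ \text{million}}{\$728\ \text{million}} \approx 28\%
\]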
WMATA’s Office of Internal Compliance has developed a process to periodically assess risks across the agency, known as an Enterprise Risk Management Program, and reported that pension risks could be assessed within this framework. However, WMATA has not yet assessed the fiscal risks from its pension plans within this program. WMATA officials said they are in the process of identifying risks to include in this program for 2019. The internal control standards WMATA follows state that organizations should identify, analyze, and respond to risks related to achieving their objectives. Further, a Society of Actuaries Blue Ribbon Panel reported that it is important for stakeholders—such as trustees, funding entities, plan members, union officials, and, in WMATA’s case, its Board of Directors—to have comprehensive information about the current and expected future financial position of pension plans and the extent of risks facing pension plans. According to the Blue Ribbon Panel, this information should include, among other things, “stress testing,” which projects a plan’s financial outcomes under adverse scenarios. WMATA officials told us that WMATA has not fully assessed pension risks because WMATA’s management does not have control over decisions related to the risks its pension plans take. For example, WMATA officials told us that given that both asset-allocation and investment-return assumptions are the purview of plan trustees who are required to act independently, WMATA has left the decision to determine whether risk analysis is necessary to the individual plans’ trustees. WMATA officials stated that even if they were to identify risks, there are not many actions WMATA management could take to change them because trustees have ultimate control over the plans’ investment decisions. However, the investment risks taken by the pension plans’ trustees ultimately affect the amount that WMATA is required to contribute, and assessing those risks could help WMATA better anticipate its required future pension contributions. Without a comprehensive assessment of these risks, WMATA and its stakeholders—such as its Board of Directors—are limited in their ability to prepare for economic scenarios that could ultimately increase the amount WMATA is required to contribute to its pension plans. In addition, if disappointing market returns were the result of a broader economic downturn, WMATA’s revenues—such as those from local jurisdictions—could decline at the same time as higher pension contributions were required. For example, as noted earlier, if WMATA’s pension plans’ assets of $3.6 billion return significantly less than assumed, WMATA could experience a spike in required contributions, as it did in the years following the 2007–2009 financial crisis. Such a spike would further constrain WMATA’s operating budget, and potentially jeopardize its ability to pay for pension contributions or provide transit service. Moreover, without a comprehensive assessment of these risks under various scenarios, WMATA may lack useful information to develop risk mitigation efforts and to inform its collective bargaining negotiations about pay and benefits. Such information would also be useful to WMATA to inform its Board of Directors, and the jurisdictions that fund WMATA, about the impact that adverse economic scenarios could have on WMATA’s ability to provide future service at anticipated funding levels. 
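The kind of stress testing described above can be illustrated with a simplified projection. The sketch below is hypothetical and is not an actuarial model: the starting assets, liabilities, and the 7.66 percent assumed return are rounded figures cited in this report, while the contribution, benefit payment, and liability growth inputs are illustrative assumptions only.

# Simplified, hypothetical stress test; not an actuarial model.
def project_unfunded(assets, liabilities, realized_return, liability_growth,
                     contribution, benefit_payments, years):
    """Project the unfunded liability under a constant realized return."""
    for _ in range(years):
        # Grow assets at the realized return, then apply net cash flow.
        assets = assets * (1 + realized_return) + contribution - benefit_payments
        # Grow the accrued liability at an assumed rate.
        liabilities = liabilities * (1 + liability_growth)
    return liabilities - assets

BASE = dict(assets=3.6e9, liabilities=4.7e9, liability_growth=0.04,
            contribution=0.17e9, benefit_payments=0.30e9, years=10)

for label, ret in [("assumed return (7.66%)", 0.0766),
                   ("low return (4%)", 0.04),
                   ("flat return (0%)", 0.0)]:
    gap = project_unfunded(realized_return=ret, **BASE)
    print(f"{label}: unfunded liability after 10 years is roughly ${gap / 1e9:.1f} billion")

Under lower realized returns the projected gap widens substantially, which is the pattern that would drive higher required contributions under adverse scenarios.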
WMATA Lacks a Strategic Process to Identify and Address Future Workforce Needs WMATA identifies the staffing levels it needs each year through its annual budgeting process, but does not have a strategic process to identify and address its long-term workforce needs to meet the agency’s goals. For example, in preparing the annual budget request for the Board of Directors, WMATA officials identify the number of staff needed in individual departments the following fiscal year. However, WMATA does not have a process for identifying and addressing agency-wide workforce needs beyond one year or in relation to agency-wide goals, contrary to leading practices. In addition, WMATA has some workforce development programs, including some that are piloted or planned, but these programs are not based on an agency-wide assessment of the skills the agency needs to meet its strategic goals. Instead, WMATA’s workforce development programs are directed to short-term needs such as filling vacancies. WMATA Identifies Short-Term Staffing Levels for Its Annual Budget but Has Not Set a Direction for Its Long-Term Workforce Needs WMATA officials identify staffing levels needed by individual departments annually, in preparation for WMATA’s annual budget. The annual budget, once approved by WMATA’s Board of Directors, sets a ceiling for the number of positions WMATA can employ in the next fiscal year. For example, in fiscal year 2016, WMATA was authorized to fill up to 13,032 positions in fiscal year 2017. WMATA officials told us that each department, such as Rail Services or Bus Services, estimates the number of positions it will need to meet its mission the following fiscal year. According to WMATA officials, this estimation is based in large part on the number of positions allotted to the department in the previous fiscal year. WMATA officials said the budget office assembles this department-level data into WMATA’s agency-wide budget request for the board of directors. WMATA’s recent restructuring of its workforce was also guided by the annual budget process. Beginning in June 2016 in preparation for the fiscal year 2018 budget proposal, WMATA eliminated 800 positions, most of which were vacant. To identify these positions, WMATA’s General Manager directed department heads to help identify any positions that were redundant or obsolete. WMATA officials reported that 637 of the 800 positions eliminated were already vacant, and most of the employees in the 163 occupied positions were reassigned to other existing positions. Ultimately, WMATA terminated 62 employees during this time for an estimated savings of $7.3 million (about $116,000 per employee in salary and benefits). Although WMATA estimates departmental staffing needs annually, WMATA officials said the agency does not have a process for identifying the agency’s long-term workforce needs. Instead, officials said that each department typically completes a 3-year business plan through which it may identify the number of employees needed over that period. However, none of the 8 department business plans that we reviewed for calendar years 2017 through 2019 identified the number of employees needed. Further, WMATA’s Chief Operating Office business plan identified the lack of long-term workforce planning as a risk to the office’s ability to meet its core organizational goals. WMATA’s four organizational goals are creating a safety culture and system, delivering quality service, improving regional mobility, and ensuring financial stability and investing in people. 
According to leading human capital practices we have previously identified, agencies should have a strategic workforce planning process that identifies the workforce, including full-time and part-time employees and contractors, needed to meet the agency’s strategic goals now and in the future. Strategic workforce planning helps an agency align its human capital program with its current and emerging mission and ensures that it will have the workforce it needs to accomplish its goals. According to these leading practices, the first step of strategic workforce planning is for top management to set a strategic direction for the agency’s workforce planning efforts, and to involve employees and other stakeholders in the development and communication of these efforts. WMATA does not have a strategic workforce planning process that would address its workforce needs beyond the next fiscal year because it has not prioritized that effort. WMATA officials told us they were interested in creating a strategic workforce plan, and had made previous plans to do so. Specifically, WMATA’s 2013–2025 Strategic Plan reported that the agency was creating a “Strategic Human Capital Plan” that would have developed long-term workforce planning strategies. However, WMATA officials told us that the Strategic Human Capital Plan was never completed due to other, competing priorities such as filling vacant positions and addressing other workforce issues in the upcoming budget. Without a strategic workforce planning process to establish a long-term direction for its workforce, WMATA does not have a clear plan for how it will acquire, develop, and retain the workforce needed to achieve its strategic goals of creating a safety culture, delivering quality service, improving regional mobility, and ensuring financial stability. Further, without such a process, WMATA lacks reasonable assurance that its short-term annual budget requests for staff, including the recent restructuring, will move the agency toward these strategic goals. WMATA’s Workforce Development Programs Are Not Based on an Agency-wide Assessment of Gaps in Critical Skills and Competencies WMATA officials told us they have some established workforce development programs, and others piloted or planned. For example, WMATA currently has three specialized recruitment programs to identify qualified veterans, Latinos, and persons with disabilities for WMATA positions. WMATA also provides targeted training for employees such as “principles of supervision” for all new supervisors. WMATA officials told us the agency is also developing a “People Strategy,” which will include multiple workforce development programs for certain entry-level workers and managers to improve their skills and help them to advance in the agency. One component of the People Strategy will be to establish a program to identify and train “high-potential” staff for leadership positions. Although WMATA has some limited workforce development programs, these programs are not based on an agency-wide assessment of skill and competency gaps. According to the COSO internal control standards and leading practices we have previously identified, once an organization’s leadership sets a strategic direction for workforce planning efforts, it needs to conduct a “workforce gap analysis”—a data-driven assessment of the critical skills and competencies the agency will need to achieve its current and future goals. Agencies can use different approaches for this analysis. 
One example is using information on retirements and attrition to identify future gaps in staffing or skills. Another is “scenario planning” in which an agency identifies how its activities might change in scope and volume in the next 5 years, and then identifies gaps in skills and competencies needed to fill the likely scenarios, rather than planning to meet the needs of a single view of the future. An agency can then develop strategies that are tailored to address any gaps between the skills and competencies it needs and the ones it already has. WMATA officials reported that they identify workforce gaps by tracking vacancy rates (percentage of budgeted positions that are vacant) and consulting department leaders about employees departing or retiring. However, WMATA officials said they do not monitor trends in agency-wide retirements and had not projected the number of employees eligible to retire in the future—essential components of a data-driven workforce gap analysis. In comparison, officials from four of the five similar transit agencies we interviewed project the percentage of staff who are eligible to retire in the future, with projections ranging from 3 to 10 years out. WMATA officials said the agency has not conducted an agency-wide assessment of its skill and competency needs because it has been more reactive than proactive in response to attrition and retirements and relied on promoting staff to higher-level positions to fill vacancies. For example, until 2017, WMATA had a Superintendent Succession Planning Program, which was designed to prepare bus and rail employees for management roles. WMATA officials reported that this program was initiated in 2009 but is currently on hold as the agency develops its People Strategy. WMATA officials said they plan to implement a different succession planning program, which will offer financial incentives for some managers to transfer knowledge to staff before they retire, as part of the People Strategy. However, without conducting a data-driven assessment of the critical skills and competencies WMATA needs to fill any gaps and achieve its strategic goals, WMATA lacks complete information on where the gaps in its workforce lie, and whether its workforce development programs are addressing those gaps or ultimately moving the agency closer to its strategic goals. WMATA Lacks Some Key Elements of an Effective Performance Management System and Sufficient Controls to Ensure Accurate and Timely Performance Reviews WMATA has implemented two performance management systems to cover its various employee groups, but these systems lack some key elements of an effective performance management system. Specifically, WMATA has linked employee performance to pay for some employees; however, WMATA’s performance management systems do not (1) consistently align employee and agency goals or assign responsibilities, (2) make meaningful distinctions in performance, or (3) consistently use competencies to identify the behaviors individual employees need to contribute to strategic goals. In addition, WMATA does not have sufficient controls to ensure that performance reviews are complete, accurate, and submitted within established timeframes and does not use performance management information to track progress towards strategic goals. 
WMATA’s Performance Management Systems Cover All Employees but Design Lacks Some Key Elements WMATA has implemented two performance management systems that cover all employees: PERFORMetro for non-represented staff and staff represented by Local 2, Fraternal Order of Police, or Local 639; and Performance Conversations for staff represented by Local 689 or Teamsters Local 922. The features of the PERFORMetro and Performance Conversations systems vary somewhat in terms of the frequency of performance reviews, the use of objectives to assess performance, and other characteristics (see table 3). WMATA links pay increases to positive performance for some employees under PERFORMetro, a key element of effective performance management. For example, Metro Special Police must earn a solid performer or better rating to be eligible for salary increases. We have previously noted that high-performing organizations seek to create pay systems that clearly link pay to employee contributions. WMATA does not link pay to performance for employees who fall under Performance Conversations. Pay increases for these employees—who are represented by two of the largest unions at WMATA—are determined by years of service as described in the collective bargaining agreements. WMATA officials said they had considered linking some pay to performance in the past, but had not pursued this since they believe any changes to how pay is awarded would have to be negotiated between WMATA and each respective bargaining unit. Although WMATA has linked individual performance to pay for some employees, the design of WMATA’s performance management systems lacks three additional key elements of an effective performance management system as identified in our prior work and internal control standards followed by WMATA. Those key elements are: aligning employee and agency goals and identifying responsibilities, making meaningful distinctions in performance, and using tailored competencies to define needed skills and behaviors. Aligning employee and agency goals and identifying responsibilities: PERFORMetro is not designed to align individual employee performance with all of WMATA’s strategic goals. While Performance Conversation forms guide supervisors to discuss the employees’ performance in relation to each of WMATA’s four strategic goals, supervisors under PERFORMetro are required to evaluate employees on individual performance objectives that are aligned with three of these goals. Supervisors under PERFORMetro are not required to evaluate employees on a performance objective aligned with WMATA’s fourth strategic goal—improving regional mobility. WMATA officials told us it is up to individual supervisors to determine whether to evaluate an employee on the fourth strategic goal. Of the 50 performance reviews we assessed, we observed one that aligned an employee’s performance objectives with the organizational goal of improving regional mobility. According to leading performance management practices we previously identified, aligning individual performance objectives with organizational goals helps individuals see the connection between their daily activities and the organization’s goals. Without a mechanism in place to do this for PERFORMetro staff, WMATA may not know how these employees are contributing to increasing regional mobility, and employees may not know how they are performing relative to this goal. 
In addition, WMATA has not consistently identified how its performance management systems support its overarching strategic goals or assigned responsibilities for implementing these systems. While WMATA issued a staff memo in April 2016 that identified a goal for Performance Conversations—to ensure that employees understand how their performance supports Metro’s strategic goals—WMATA has not done so for PERFORMetro. In addition, none of the performance management documents we reviewed clearly assigned authority or defined responsibilities for implementing either PERFORMetro or Performance Conversations. According to the COSO internal control standards, setting program goals is a key part of the management process, and program-level goals should cascade from agency-level goals. Additionally, these standards include establishing policies and procedures that effectively document a program’s design, delegation of authorities, and assignments of responsibilities. Making meaningful distinctions: WMATA’s performance management systems are not designed to make meaningful distinctions in performance. According to leading performance management practices, the organization’s leadership should make meaningful distinctions between acceptable and outstanding performance of individuals. However, both of WMATA’s performance management systems lack clear definitions for supervisors and employees to use in assessing performance. For example, WMATA leaves it up to employees and their supervisors to identify and define many of the objectives on which employees under PERFORMetro are evaluated. WMATA officials said this provides supervisors some flexibility to account for the responsibilities of employees in different positions. However, the result is that two employees performing the same functions may be evaluated on different objectives, making it difficult to distinguish their performance. Further, under PERFORMetro supervisors are required to rate employees on each objective as “met,” “did not meet,” and “exceeded,” but WMATA does not provide definitions for these categories for each objective. As a result, two employees rated under PERFORMetro could receive different ratings for comparable performance. In addition, for employees under the Performance Conversations system, WMATA does not require supervisors to rate employee performance. Rather, officials told us that WMATA implemented Performance Conversations as a way to encourage more positive, performance-based interactions between employees and management that expanded beyond discipline. WMATA has a discipline-based program for most employees under Performance Conversations (Local 689 bus and rail operations employees and Local 922 bus operators) that establishes standards of conduct these employees must adhere to, and identifies penalties if they do not. This discipline-based program lays out the penalties for violations of employee standards of conduct such as speeding or failing to stop at a red signal. The penalties for conduct violations range from written warnings, to suspensions, to termination. Using competencies tailored to each position: WMATA’s performance management systems do not consistently use competencies to identify the behaviors individual employees are expected to contribute to strategic goals. Although WMATA has established competencies as part of its PERFORMetro system, these competencies are defined in a uniform manner that does not reflect the varied job responsibilities of its employees. 
Inclusion of such competencies tailored to each position’s responsibilities is a leading practice for an effective performance management system. Competencies, which define the skills and supporting behaviors that individuals are expected to exhibit to carry out their work effectively, can provide a fuller picture of an individual’s performance. WMATA defines four competencies for all employees under PERFORMetro—“focuses on safety,” “serves customers,” “accountability,” and “teamwork.” However, these competencies are defined in the same way for all employees under PERFORMetro and are not based on the job responsibilities of each position. For example, WMATA assesses the performance of individuals performing different job functions—such as administrative staff and police officers—by the same competencies and without consideration for how skills and behaviors vary by job function. As such, some portions of the competency descriptions are not applicable to all employees. For example, all PERFORMetro employees are evaluated on the extent that they wear required personal protective equipment and/or clothing, but this may not apply to someone in accounting or human resources. WMATA officials said they are aware of this, and that supervisors choose which portions of the competency descriptions to apply to their employees. Finally, WMATA officials said they do not include competencies for employees under Performance Conversations because Performance Conversations are intended to promote performance discussions, not to evaluate employee performance. However, without competencies tailored to employees’ positions, supervisors are limited in their ability to assess employee performance. WMATA’s performance management systems lack key elements of an effective performance management system in part because the agency has not established comprehensive policies and procedures, as called for by COSO, for its performance management systems. Instead, the agency relies on piecemeal documents—such as staff memos and training—and individual supervisors to define and carry out performance management. By establishing comprehensive policies and procedures that document key elements, such as defined objectives and rating categories, WMATA would be better positioned to assess staff performance and ensure performance management is consistently implemented across supervisors. Additionally, WMATA would be better positioned to use its performance management systems to move employees toward achieving its strategic goals. Better Controls Could Improve the Completeness, Accuracy, and Timeliness of WMATA’s Employee Performance Reviews We found that, in implementing its most recent performance evaluation cycle, WMATA’s reviews of employee performance were often incomplete, inaccurate, or untimely. First, officials said that WMATA does not routinely collect or retain the forms for its Performance Conversations and that accordingly, WMATA does not know the extent to which these reviews were completed. Second, in our review of a non-generalizable sample of 50 PERFORMetro performance evaluations for fiscal year 2016, we found that WMATA supervisors frequently submitted evaluations that were incomplete or inaccurate, or did not submit them within established timeframes. 
Specifically:
• 25 of the 50 selected files we reviewed were missing either the employee's or supervisor's signature required on the initial expectations-setting portion of the form; 3 of those 25 files were also missing a required signature on the final review portion of the evaluation form, which provides assurance that the performance evaluation was completed.
• 10 of the 50 selected files we reviewed were scored incorrectly and thus assigned a performance rating inconsistent with the supporting review. WMATA determines an employee's final rating based on scores tabulated by supervisors for an employee meeting his or her objectives and demonstrating competencies. Specifically, employees receive separate ratings for objectives and competencies, which are then combined to yield a final overall rating of "role model," "solid performer," or "improvement required." We found tabulation errors in 10 of the files where, for example, a "solid performer" was given a "role model" rating. Without accurate information about employee performance, WMATA may not be able to recognize employees' achievements or address potential performance challenges.
• 22 of the 50 selected files we reviewed were not submitted on time according to timeframes established in a 2016 WMATA staff notice and a 2017 agreement between WMATA and one of its unions. This includes 9 files of employees not represented by a union, 5 law enforcement staff files, and 8 Local 2 staff files. Local 2 officials told us they filed a grievance following delayed performance reviews for their members. Pursuant to the grievance, Local 2 officials signed an agreement with WMATA that if a supervisor does not submit a scheduled performance evaluation within 30 calendar days of a Local 2 employee's anniversary date, that employee will receive an automatic solid performer rating and any associated pay or step increase.
COSO internal control standards state that management should establish control activities, such as policies and procedures, to achieve its goals. Examples of control activities include management reviews and controls over information processing, among other things. A specific type of control activity is a "transaction control," which helps management ensure that all transactions (in this case, performance reviews) are completely captured, accurate, and timely. Transaction controls may include authorizations or approvals by a higher level of management, or verifications to compare transactions to a policy and then follow up if the transaction is not consistent with the policy. In the case of WMATA's performance reviews, this could include comparing a list of employees who should have received a performance review per WMATA policy to a list of the reviews that were submitted to the human resources office. We found that WMATA does not have sufficient controls in place to ensure that supervisors accurately complete performance reviews and submit them to the human resources department within established timeframes. WMATA human resources officials said that for the 2016 review cycle, they emailed a report to supervisors listing year-end performance reviews that were due within 90 days, but did not subsequently ensure that they were completed correctly and on time. Officials said that once supervisors emailed these reviews to the human resources department, human resources staff manually recorded these reviews into WMATA's personnel information system.
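To make the kind of transaction control described above concrete, the sketch below shows one way a reconciliation of required and submitted reviews could be performed. It is illustrative only: the file names, field names, and 90-day window are assumptions for the example and do not reflect WMATA's actual systems, data, or policies.

```python
# Illustrative sketch of a reconciliation-style transaction control.
# File names, field names, and the submission window are hypothetical.
import csv
from datetime import date, timedelta

DUE_WINDOW_DAYS = 90  # hypothetical window for submitting a due review

def load_rows(path, id_field):
    """Read a CSV and return a dict of employee_id -> row."""
    with open(path, newline="") as f:
        return {row[id_field]: row for row in csv.DictReader(f)}

def reconcile(required_path, submitted_path):
    required = load_rows(required_path, "employee_id")    # who should have a review
    submitted = load_rows(submitted_path, "employee_id")  # reviews HR actually received

    missing = sorted(set(required) - set(submitted))      # no review on file
    late = []
    for emp_id, row in submitted.items():
        if emp_id not in required:
            continue
        due = date.fromisoformat(required[emp_id]["due_date"])
        received = date.fromisoformat(row["date_received"])
        if received > due + timedelta(days=DUE_WINDOW_DAYS):
            late.append(emp_id)
    return missing, sorted(late)

if __name__ == "__main__":
    missing, late = reconcile("required_reviews.csv", "submitted_reviews.csv")
    print(f"{len(missing)} employees have no review on file: {missing}")
    print(f"{len(late)} reviews were submitted after the window: {late}")
```

A control of this general shape would flag both reviews that were never received and reviews received outside the established timeframe, so that follow-up can be targeted to specific supervisors.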
WMATA officials told us that human resources staff examined the performance reviews for completion and accuracy. Despite this process, WMATA officials could not provide us reliable information on the number of 2016 performance reviews that were completed, and as previously mentioned, said they did not routinely collect or retain Performance Conversations forms. WMATA officials said they have plans to upgrade their current performance management information technology system, but descriptions of the upgrade that WMATA provided to us do not identify how the upgrade will address the issues we identified. Without controls to ensure that supervisors submit complete, accurate, and timely performance reviews, WMATA lacks information on the performance of its workforce, and employees lack information needed to improve performance.
WMATA Does Not Have a Process to Use Employee Performance Information to Monitor Progress toward Strategic Goals
WMATA officials told us that they do not have a process to use information from their performance management systems to identify performance gaps or pinpoint improvement opportunities. We have previously identified that routinely using performance information to track individual contributions to organizational priorities, and then requiring follow-up actions to address gaps, are key performance management practices. This approach allows an agency to use its employee performance information to monitor progress towards its strategic goals. Officials from two transit agencies we spoke to told us they use information from their performance management systems to track performance gaps related to strategic goals. For example, Chicago Transit Authority officials told us that they evaluate employees on competencies related to the organization's strategic goals of safety, customer service, and teamwork, and then aggregate performance review information to assess the organization's performance on these goals. WMATA does not make use of employee performance information in part because it has not developed a process to do so. Without a documented process to use employee performance management information to monitor progress on its strategic goals, WMATA may miss opportunities to identify and follow up on performance gaps and to make full use of the information collected through its performance management systems.
Conclusions
WMATA transports more than 1 million passengers each weekday, making it central to the mobility and productivity of the nation's capital. Recent safety incidents and declines in ridership place additional pressure on WMATA to effectively manage its most expensive resource—its workforce. If increases in WMATA's workforce pension costs continue to outpace increases in WMATA's other workforce costs, WMATA will be under greater pressure to manage its costs and balance competing priorities. A comprehensive assessment of the fiscal risks these pension investments could pose to WMATA could help it prepare for various economic scenarios and ensure that it can continue to provide benefits to its employees without having to compromise future service to riders to pay for these benefits. Effective workforce planning could also help WMATA by ensuring that WMATA has the people and skills it needs to achieve its goals of safety, customer service, financial stability, and regional mobility now and in the future.
Establishing a strategic workforce planning process that involves employees and other stakeholders, and that uses data on WMATA's workforce to assess competency and skill gaps, would provide WMATA with critical information that could help it address any identified gaps and ultimately move it closer to its strategic goals. With effective employee performance management, WMATA also would be better positioned to achieve its goals by explicitly aligning them with the daily tasks of its employees. By establishing comprehensive policies and procedures for its performance management systems that align employee performance objectives with WMATA's strategic goals and define performance objectives, rating categories, and competencies, WMATA will be better able to steer employees towards behaviors that support the agency's goals and away from behaviors that do not. Further, establishing controls for supervisors to submit complete, accurate, and timely performance reviews would help ensure that staff receive information needed to improve their performance. Finally, a documented process to make use of the performance information WMATA collects could help it track progress in meeting its organizational goals and identify and address performance gaps. In light of WMATA's uncertain financial future, improvements in WMATA's workforce planning and performance management could better position WMATA to navigate that future.
Recommendations
We are making the following five recommendations to WMATA:
1. WMATA's General Manager should conduct a comprehensive assessment of the financial risks to which WMATA is exposed from its pension plans and communicate the results to its pension plan trustees and other stakeholders, such as its Board of Directors. This assessment should include information about WMATA's current and potential future required payments and unfunded liabilities, including under potentially adverse economic scenarios. (Recommendation 1)
2. WMATA's General Manager should develop a strategic workforce planning process that (1) sets a strategic direction for WMATA's workforce planning and involves employees and other stakeholders in developing and communicating the process, and (2) includes a data-driven assessment of the critical skills and competencies WMATA needs to fill any gaps. (Recommendation 2)
3. WMATA's General Manager should establish comprehensive policies and procedures for both of its performance management systems that document the goals of the systems and individuals' responsibilities for implementing these systems; align employee performance objectives with all of WMATA's strategic goals; and define performance objectives, rating categories, and competencies tailored to individual positions' responsibilities. (Recommendation 3)
4. WMATA's General Manager should establish controls to ensure supervisors fully and accurately complete employee performance reviews and submit them to human resources within established timeframes. (Recommendation 4)
5. WMATA's General Manager should develop a documented process to use employee performance management information to monitor progress toward WMATA's strategic goals. (Recommendation 5)
Agency Comments and Our Evaluation
We provided a draft of this report to WMATA and DOT for review and comment. WMATA provided written comments, which we have reprinted in appendix II, and technical comments, which we incorporated as appropriate throughout our report.
Regarding our first recommendation that WMATA conduct a comprehensive assessment of the financial risks to which it is exposed from its pension plans, WMATA concurred but stated that the agency has already completed such an assessment and does not believe that any additional assessment would add value. As stated in our report, WMATA hired a consultant in 2016 and 2017 to provide an overview of its five pension plans, including reviewing the plans’ funding strategies and performance. However, the stated purpose of these reports did not include an assessment of risk, and the reports included only limited analysis of the various risks WMATA is facing from the plans, and only considered a single scenario for estimating WMATA’s future pension obligations. As such we concluded that these reports did not constitute a comprehensive assessment of risks facing WMATA from its pension plans. Given the plans’ large size relative to WMATA’s business operations, high proportion of retirees compared to active members, high percentage allocation to risky assets, and high assumed rates of return, WMATA’s pension plans pose significant risk to its financial operations. Without a comprehensive risk assessment, WMATA and its Board of Directors are limited in their ability to prepare for economic scenarios that could compromise WMATA’s ability to provide future service. Thus, we continue to believe that our recommendation is valid and that WMATA should fully implement it. Regarding our second recommendation that WMATA develop a strategic workforce planning process, WMATA concurred and described actions it has underway to address the recommendation. Regarding our third recommendation that WMATA develop comprehensive policies and procedures for both of its performance management systems, WMATA concurred and stated that it is in the process of hiring a consultant to evaluate and redesign WMATA’s performance management systems for fiscal year 2020. WMATA also noted that the agency published a performance management handbook and guide in July 2018 that, among other things, provides definitions and indicators for behaviors assessed in performance evaluations. As part of our recommendation follow up process, we will obtain and review the handbook to determine whether it fully addresses our recommendation. Regarding our fourth recommendation that WMATA establish controls to ensure that supervisors complete and submit employee performance reviews to human resources within established timeframes, WMATA concurred and described actions it plans to take in response. Regarding our fifth recommendation that WMATA develop a documented process to use employee performance management information to monitor progress towards WMATA’s strategic goals, WMATA neither agreed nor disagreed. WMATA stated that it already ties individual employee performance to the agency’s strategic goals, but is open to considering improvements through the third-party consultant it plans to hire to review its performance management systems. In our report we note that WMATA’s PERFORMetro performance management system is not designed to align individual employee performance with all of its strategic goals. Specifically, supervisors under PERFORMetro are required to evaluate employees on individual performance objectives that are aligned with three of WMATA’s strategic goals, but not with WMATA’s fourth strategic goal—improving regional mobility. 
Further, WMATA officials told us that they do not have a process to use information from their performance management systems to identify performance gaps or pinpoint improvement opportunities. Thus, we continue to believe that our recommendation is valid and WMATA should fully implement it. We are sending copies of this report to the General Manager of WMATA, the Secretary of Transportation, and the appropriate congressional committees. We provided a draft of this report to WMATA and DOT for review and comment. If you or your staff have any questions about this report, please contact Mark Goldstein at (202) 512-2834 or goldsteinm@gao.gov or Frank Todisco at (202) 512-2700 or todiscof@gao.gov. Mr. Todisco meets the qualification standards of the American Academy of Actuaries to address the actuarial issues contained in this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors are listed in appendix III.
Appendix I: Objectives, Scope and Methodology
This report assesses (1) how the Washington Metropolitan Area Transit Authority's (WMATA) workforce costs have changed from fiscal years 2006 through 2017 and factors contributing to those changes; (2) how WMATA identifies and addresses its current and future workforce needs; and (3) how WMATA has designed, implemented, and monitored its employee performance management systems. To assess how WMATA's workforce costs have changed since 2006, we used data from WMATA's annual budgets and annual audited financial statements from fiscal years 2006 through 2017 on the amounts expensed by WMATA on wages and salaries, employee and retiree benefits, contracted services, and other information on WMATA's pension and retiree medical plans. We selected 2006 to account for any potential effects of the 2007-2009 financial crisis on pension or other costs, and because WMATA began contributing to its largest pension plan again in 2006 after a 6-year period of not contributing to this plan. To adjust WMATA's costs for inflation, we used quarterly data on the GDP price index, which we obtained from the Bureau of Economic Analysis. Inflation adjustment factors are calculated to align with the definition of WMATA's fiscal year, which begins on July 1 and ends on June 30 of the following calendar year. Our calculations adjust nominal values for inflation so that real values are expressed in fiscal year 2017 dollars, where fiscal year refers to WMATA's fiscal year. We also reviewed data WMATA provided on operating and capital overtime costs, and the most recent actuarial reports for each of WMATA's five pension plans for more information on WMATA's pension obligations. Additionally, we analyzed characteristics of WMATA's five pension plans in consultation with GAO's Chief Actuary and in relation to actuarial principles and recent literature. Further, we consulted with GAO's Chief Actuary for assistance in interpreting information about WMATA's pension and retiree medical plans. To assess WMATA's pension costs, we reviewed pension expense—which reports WMATA's expense for its pension plans during a year, as measured in accordance with pension accounting standards for financial reporting purposes—and pension contributions, which report the amount WMATA paid into its pension plans during a year. Both pension expense and pension contributions increased substantially from fiscal years 2006 through 2017.
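As a concrete illustration of the fiscal-year inflation adjustment described above, the short sketch below converts a nominal amount into fiscal year 2017 dollars. The index values and the averaging approach are placeholders and assumptions for the example, not Bureau of Economic Analysis data or the report's actual calculation.

```python
# Minimal sketch of a fiscal-year inflation adjustment of the general kind
# described above. The index values below are placeholders, not actual
# Bureau of Economic Analysis data, and averaging the four quarterly values
# per fiscal year is an assumption made for this example.

# Hypothetical average GDP price index level for each WMATA fiscal year
# (July 1 through June 30 of the following calendar year).
gdp_price_index = {
    2006: 87.1,   # placeholder value
    2017: 107.2,  # placeholder value
}

BASE_YEAR = 2017  # real values expressed in fiscal year 2017 dollars

def to_fy2017_dollars(nominal_amount, fiscal_year):
    """Convert a nominal amount to fiscal year 2017 dollars."""
    factor = gdp_price_index[BASE_YEAR] / gdp_price_index[fiscal_year]
    return nominal_amount * factor

# Example: a hypothetical $100 million cost incurred in fiscal year 2006
print(round(to_fy2017_dollars(100.0, 2006), 1))  # amount in FY 2017 dollars
```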
While pension expense is the pension component of WMATA's employee and retiree benefit cost data described above, changes in pension accounting reporting standards in 2014 resulted in pension expense being reported differently before and after 2014. As such, we relied on pension contributions as our primary measure of growth of WMATA's annual pension costs. To assess the reliability of WMATA's budget data, and other data WMATA provided, we interviewed WMATA officials on practices used to assemble these data. We found these data to be sufficiently reliable for our purposes. To identify factors contributing to changes in workforce costs, we interviewed WMATA officials and reviewed WMATA's annual budgets, annual financial statements, and actuarial statements for information on the total number of authorized represented and non-represented staff, changes in operating overtime costs, changes in pension-related costs, and other factors that could influence workforce cost changes since fiscal year 2006. To evaluate how WMATA identifies and addresses its workforce needs, we compared WMATA's workforce planning and workforce development efforts to leading practices we previously identified and the Committee of Sponsoring Organizations of the Treadway Commission (COSO) internal control standards, which WMATA follows. We previously developed these leading strategic workforce planning practices based on a review of documents from (1) organizations with government-wide responsibilities for or expertise in workforce planning models and tools, such as the Office of Personnel Management and the National Academy of Public Administration, and (2) federal agencies recommended as having promising workforce planning programs. Additionally, to identify these practices we reviewed our prior reports and testimonies on human capital issues and met with officials from the aforementioned organizations concerning existing workforce planning models and lessons learned from workforce planning experiences. In addition to comparing WMATA's workforce planning efforts to leading practices and COSO standards, we reviewed WMATA's 2017–2019 individual department business plans and 2013–2025 strategic plan to describe how WMATA identifies its short- and long-term workforce needs. Furthermore, we obtained and reviewed WMATA information on the positions WMATA eliminated in fiscal years 2017 and 2018, including the number of positions that were vacant or occupied. Lastly, we compared WMATA's workforce planning approach to those at a non-generalizable sample of five similar U.S. transit and rail agencies, selected based on similarity in size, age, unions representing agency staff, and stakeholder recommendations. Agency size was measured according to unlinked passenger trips and passenger miles data in the American Public Transportation Association's 2016 Public Transportation Fact Book, the most recent issue available at the time of selection. System age and union status were determined by a review of publicly available information about each transit system such as academic papers and transit agency websites. With input from industry, federal, WMATA, and union stakeholders, we selected the following peer agencies: (1) Chicago Transit Authority, (2) Los Angeles County Metropolitan Transportation Authority, (3) San Francisco Bay Area Rapid Transit District, (4) Southeastern Pennsylvania Transportation Authority, and (5) Metropolitan Transportation Authority, Metro-North Commuter Railroad.
To evaluate how WMATA designed, implemented, and monitored its performance management systems, we reviewed documentation on WMATA's two employee performance management systems—"PERFORMetro" for non-represented, Office and Professional Employees International Union Local 2, Fraternal Order of Police, and International Brotherhood of Teamsters Local 639 employees; and "Performance Conversations" for Amalgamated Transit Union Local 689 and International Brotherhood of Teamsters Local 922 employees. We compared these systems to leading performance management practices we have previously identified and to the COSO internal control standards. We previously identified these key practices for modern, effective, and credible performance management systems by synthesizing information contained in our previous performance management work. These practices were also provided for comment to officials from the Office of Personnel Management, the Senior Executives Association, and the Center for Human Resources Management at the National Academy of Public Administration. In addition to comparing WMATA's performance management systems to key practices and COSO internal control standards, we also reviewed WMATA's 2013–2025 strategic plan, which outlines WMATA's four strategic goals: (1) build and maintain a premier safety culture and system, (2) meet or exceed expectations by consistently delivering quality service, (3) improve regional mobility and connect communities, and (4) ensure financial stability and invest in our people and assets. To assess how WMATA implemented its performance management systems, including what management controls it had in place to track the completion of required annual employee performance reviews, we interviewed WMATA human resources officials and assessed the data they collected on the number of 2016 PERFORMetro year-end reviews that were required and submitted by supervisors. WMATA officials could not tell us how many PERFORMetro reviews or Performance Conversation forms were required over the period we requested. WMATA officials said that they had data on the number of 2016 PERFORMetro reviews submitted to human resources, but did not collect any data on Performance Conversation forms. As such, we requested the list of submitted 2016 PERFORMetro reviews. WMATA human resources management sent an email to all supervisors asking them to send the reviews they had conducted in the 2016 performance period if they had not already done so. While this information met our purposes for performing a non-generalizable review of selected completed performance reviews, data on the number of employees who were required to have a performance review under PERFORMetro in the 2016 performance period and the number of those employees who received a review were not reliable for reporting purposes. WMATA officials agreed with our assessment that these data were not reliable for reporting purposes. From the list of PERFORMetro reviews we received, we selected an initial non-generalizable sample of 60 files to assess based on employee group (non-represented, Local 2, and Metro Transit Police) and job title. We selected 20 files from each of the three employee groups—10 files each from the two job titles within each employee group with the highest number of identified reviews. We selected the 60 files by assigning random numbers to each file within the six selected job titles and selecting the first 10 files in the sorted, randomized list.
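The selection step just described—assigning a random number to each file within a job title and taking the first 10 files in the sorted, randomized list—can be pictured with the short sketch below. The job titles, file identifiers, function name, and seed are placeholders for illustration; this is not the data or tooling used for the review.

```python
# Illustrative sketch of the randomized file-selection step described above.
# Job titles, file identifiers, and the seed are hypothetical placeholders.
import random

def select_files(files_by_job_title, per_title=10, seed=None):
    """Return up to `per_title` randomly ordered files for each job title."""
    rng = random.Random(seed)
    selection = {}
    for title, files in files_by_job_title.items():
        # Assign a random key to each file, sort by that key, keep the first N.
        randomized = sorted(files, key=lambda _: rng.random())
        selection[title] = randomized[:per_title]
    return selection

# Example with placeholder data: two job titles with 25 candidate files each
candidates = {
    "Rail Operations Supervisor": [f"file_{i}" for i in range(25)],
    "Training and Safety Instructor": [f"file_{i}" for i in range(25)],
}
sample = select_files(candidates, per_title=10, seed=1)
for title, files in sample.items():
    print(title, files)
```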
We adjusted our random selection as needed to ensure our selection included performance reviews completed by multiple supervisors. Our final selection included the following performance review files:
• Non-represented employees (20 files total): Rail Operations Supervisor (10 files) and Transit Field Operations Supervisor (10 files)
• Local 2 employees (20 files total): Training and Safety Instructor (10 files) and Central Control Supervisor (10 files)
• METRO Transit Police Department (20 files total): METRO Police S (10 files) and Special Police Series (10 files)
While conducting our file review, we found that the Special Police Series evaluation forms were significantly different from the other files and did not align with the data collection instrument we had designed. As a result, we did not include these 10 files, leaving us with 50 files included in our final analysis. Lastly, as discussed in our report, we did not review any Performance Conversation files because WMATA officials told us that they do not track the completion of these forms and therefore did not have any data on the number of Performance Conversation year-end reviews that were completed in fiscal year 2017, the first year Performance Conversations were implemented. Finally, we interviewed officials from the FTA and union leadership from four of the five unions representing WMATA employees.
We conducted our work from July 2017 to September 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the Washington Metropolitan Area Transit Authority
Appendix III: GAO Contacts and Staff Acknowledgments
GAO Contacts
Staff Acknowledgments
In addition to the contacts named above, Matt Barranca (Assistant Director); Sarah Farkas (Analyst in Charge); Namita Bhatia Sabharwal; Lacey Coppage; Tom Gilbert; Josh Ormond; Steve Rabinowitz; Michelle Weathers; Hannah Weigle; and Elizabeth Wood made key contributions to this report.
Why GAO Did This Study WMATA transports more than 1 million rail and bus passengers each weekday in the nation's capital and surrounding areas. However, recent safety incidents and declines in ridership and revenues have focused public attention on how WMATA manages its workforce and associated costs. GAO was asked to review WMATA's workforce management. This report examines, among other things, (1) how WMATA's workforce costs have changed from fiscal years 2006 through 2017 and factors contributing to those changes, and (2) how WMATA has designed and implemented its employee performance management systems. GAO reviewed WMATA's annual financial statements and budgets from fiscal years 2006 through 2017, and compared WMATA's workforce cost and performance management efforts to leading practices and internal control and actuarial principles. GAO also reviewed a non-generalizable sample of employee performance evaluations selected to include occupations with the highest number of evaluations. What GAO Found The Washington Metropolitan Area Transit Authority's (WMATA) workforce costs—including wages, salaries, and benefits for employees and retirees—increased on average by about 3 percent annually from fiscal years 2006 through 2017. This increase was largely driven by the cost of employee and retiree benefits. Specifically, the amount WMATA was required to contribute to its pension plans increased by an annual average of about 19 percent during this period. Due to their relative size, proportion of retirees compared to active members, and investment decisions, these pension plans pose significant risk to WMATA's financial operations, yet WMATA has not fully assessed the risks. Without comprehensive information on the risks facing its pension plans, WMATA may not be prepared for economic scenarios that could increase its required contributions to an extent that might jeopardize its ability to provide some transit service. WMATA has implemented two employee performance management systems that cover all employees, but these systems lack some key elements of an effectively designed and implemented performance management system. For example, WMATA's performance management systems are not designed to make meaningful distinctions in performance, a key element of an effective system. This design is due in part to WMATA's lack of comprehensive policies and procedures for its performance management systems. In addition, WMATA lacks sufficient controls to ensure that supervisors complete required performance evaluations accurately and on-time. For example, in 10 of 50 performance evaluations we reviewed, we found scoring errors where employees were assigned a performance rating inconsistent with the supporting review. Without comprehensive policies and procedures or sufficient controls over its performance management systems, WMATA lacks tools and information to move employees toward achieving WMATA's strategic goals. What GAO Recommends GAO is making five recommendations to WMATA, including that it develop a comprehensive assessment of risks posed by its pension plans, comprehensive policies and procedures for its employee performance management systems, and controls to ensure supervisors complete required performance evaluations, among other actions. WMATA agreed with four recommendations and neither agreed nor disagreed with the fifth.
Background
Executive Order Summaries
The President issued two executive orders addressing border security and immigration enforcement on January 25, 2017. These orders direct executive branch agencies to implement a series of reporting, policy, and programmatic provisions to carry out the administration's border security and immigration policies and priorities. Executive Order 13767 lays out key policies of the executive branch with regard to securing the southern border, preventing further unlawful entry into the United States, and repatriating removable foreign nationals. To support these purposes, the order directs DHS to, among other actions, produce a comprehensive study of the security of the southern border; issue new policy guidance regarding the appropriate and consistent use of detention of foreign nationals for violations of immigration law; plan, design, and construct a wall or other physical barriers along the southern border; and hire and on-board, as soon as practicable, 5,000 additional Border Patrol agents. Executive Order 13767 also directs DOJ to assign immigration judges to immigration detention facilities in order to conduct removal and other related proceedings. Executive Order 13768 focuses on immigration enforcement within the United States. Among other things, the order lays out the administration's immigration enforcement priorities for removable foreign nationals; directs ICE to hire 10,000 additional immigration officers; states that, as permitted by law, it is the policy of the executive branch to empower state and local law enforcement officials to perform the functions of immigration officers; calls for weekly public reports on criminal actions committed by foreign nationals and any jurisdictions that do not honor ICE detainers with respect to such individuals; and terminates the Priority Enforcement Program while reinstituting Secure Communities. The order also directs DHS and DOJ to ensure that jurisdictions that willfully prohibit or otherwise restrict communication with DHS regarding immigration status information are not eligible to receive federal grants, except as determined necessary for law enforcement purposes. On March 6, 2017, the President issued Executive Order 13780. This order directed agencies to take various actions to improve the screening and vetting protocols and procedures associated with the visa-issuance process and the U.S. Refugee Admissions Program. Specifically, the order directed agencies to conduct a worldwide review to identify any additional information needed from each foreign country to adjudicate visas and other immigration benefits to ensure that individuals applying for such benefits are not a security or public-safety threat. The order also instituted visa entry restrictions for nationals from certain listed countries for a 90-day period; directed agencies to develop a uniform baseline for screening and vetting standards and procedures; and suspended the U.S. Refugee Admissions Program for 120 days in order to review refugee application and adjudication procedures. The order further directed DHS to expedite the completion and implementation of a biometric entry-exit tracking system for travelers to the United States. Implementation of Executive Order 13780 entry restrictions for visa travelers and refugees commenced on June 29, 2017, subject to a June 26 ruling of the U.S.
Supreme Court prohibiting enforcement of such restrictions against foreign nationals with a credible claim of a bona fide relationship to a person or entity in the United States. Federal Budget Process and Status Since Executive Order Issuance The federal budget process provides the means for the President and Congress to make informed decisions between competing national needs and policies, allocate resources among federal agencies, and ensure laws are executed according to established priorities. The President generally submits the budget request for the upcoming fiscal year to Congress no later than the first Monday of February (e.g. the fiscal year 2019 budget request was submitted in February 2018). To ensure there is not a lapse in appropriations for one or more federal departments or agencies, regular appropriations bills must be enacted to fund the government before the expiration of the prior appropriations, which would typically be in effect through September 30 in a regular appropriations cycle. If these regular full-year appropriations bills are not enacted by the deadline, Congress must pass a continuing appropriation (or resolution) to temporarily fund government operations into the next fiscal year. For fiscal year 2017, multiple continuing appropriations were enacted to extend funding until the Consolidated Appropriations Act, 2017, was enacted in May 2017. At the time the President issued the executive orders in January and March of 2017, agencies were operating under a continuing appropriation which did not incorporate any funding explicitly for the administration’s immigration and border security priorities, such as hiring 5,000 additional Border Patrol agents. The administration sought additional funds to implement the executive orders through an out-of-cycle March 2017 budget amendment and supplemental appropriations request for the remainder of fiscal year 2017. In May 2017, Congress provided funding for selected priorities through the Consolidated Appropriations Act, 2017. The administration submitted additional funding requests related to the executive orders through the President’s fiscal year 2018 and 2019 budget requests. A number of continuing appropriations acts were enacted from September 2017 through February 2018, providing fiscal year 2018 funding at fiscal year 2017 levels through March 23, 2018. The Consolidated Appropriations Act, 2018, was signed into law on March 23, 2018, providing funding for government operations for the remainder of fiscal year 2018. Figure 1 below provides a timeline of executive order issuance and key milestones in the budget process from December 2016 through March 2018. Agency Roles and Responsibilities DHS, DOJ, and State each play key roles in enforcing U.S. immigration law and securing U.S. borders. Key components and bureaus at the three agencies, and their general roles and responsibilities with regard to border security and immigration enforcement, are described in table 1. DHS, DOJ, and State Took Initial Planning and Programming Actions to Implement Provisions of the Executive Orders DHS, DOJ, and State issued reports, developed or revised policies, and took initial planning and programmatic actions in response to the executive orders. Each agency took a distinct approach to implementing the orders based on its organizational structure and the scope of its responsibilities. 
Each executive order established near-term reporting requirements for agencies, including updates on the status of their efforts, studies to inform planning and implementation, and reports for the public. According to officials, agencies focused part of their initial implementation efforts on meeting these reporting requirements. In addition, agencies developed and revised policies, initiated planning efforts, and made initial program changes (such as expanding or expediting programs) to reflect the administration's priorities. DHS: DHS established an Executive Order Task Force (EOTF), which was responsible for coordinating and tracking initial component actions to implement the executive orders. The EOTF assembled an operational planning team with representatives from key DHS components, such as U.S. Customs and Border Protection (CBP) and ICE. The EOTF and the planning team inventoried tasks in the orders, assigned component responsibilities for tasks, and monitored the status of the tasks through an online tracking mechanism and weekly coordination meetings. Additionally, the EOTF coordinated and moved reports required by the orders through DHS. For example, Section 4 of Executive Order 13767 directed DHS to produce a comprehensive study of the security of the southern border. DHS completed and submitted this report to the White House on November 22, 2017, according to EOTF officials. DHS also publicly issued three Declined Detainer Outcome Reports pursuant to Section 9 of Executive Order 13768. Additionally, EOTF officials stated that, in 2017, DHS produced and submitted to the White House 90-day and 180-day reports on the progress of implementing Executive Orders 13767 and 13768. The Secretary of Homeland Security issued two memoranda establishing policy and providing guidance related to Executive Orders 13767 and 13768 in February 2017. One memorandum implemented Executive Order 13767 by outlining new policies designed to stem illegal entry into the United States and to facilitate the detection, apprehension, detention, and removal of foreign nationals seeking to unlawfully enter or remain in the United States. For example, the memorandum directed U.S. Citizenship and Immigration Services (USCIS), CBP, and ICE to ensure that appropriate guidance and training are provided to agency officials to ensure proper exercise of parole in accordance with existing statute. The other memorandum implemented Executive Order 13768 and provided additional guidance with respect to the enforcement of immigration laws. For example, it terminated the Priority Enforcement Program, under which ICE prioritized the apprehension, detention, and removal of foreign nationals who posed threats to national security, public safety, or border security, including convicted felons; and restored the Secure Communities Program, pursuant to which ICE may also target for removal those charged with, but not yet convicted of, criminal offenses, among others. Additionally, the memorandum reiterated DHS's general enforcement priorities. ICE, CBP, and USCIS may allocate resources to prioritize enforcement activities as they deem appropriate, such as by prioritizing enforcement against convicted felons or gang members. DHS components subsequently issued additional guidance further directing efforts to implement the executive orders and apply the guidance from the memoranda. For example, ICE issued guidance to its legal program to review all cases previously administratively closed based on prosecutorial discretion.
ICE’s new guidance requested its attorneys to determine whether the basis for closure remains appropriate under DHS’s new enforcement priorities. USCIS also reviewed its guidance for credible and reasonable fear determinations—the initial step for certain removable individuals to demonstrate they are eligible to be considered for particular forms of relief or protection from removal in immigration court. As a result, USCIS made select modifications pursuant to Executive Order 13767, including adding language related to evaluating an applicant’s credibility based on prior statements made to other DHS officials, such as CBP and ICE officers. DHS also initiated a number of planning and programmatic actions to implement the executive orders. In some cases DHS components expanded or enhanced existing regular, ongoing agency activities and programs in response to the orders. For example, in response to Executive Order 13768, ICE officials reported that they expanded the use of the existing Criminal Alien Program. In other instances, DHS components altered their activities consistent with the administration’s immigration priorities. For instance, in response to Executive Order 13768, the Secretary of Homeland Security directed ICE to terminate outreach or advocacy services to potentially removable foreign nationals, and reallocate all resources currently used for such purposes to a new office to assist victims of crimes allegedly perpetrated by removable foreign nationals (the Victims of Immigration Crime Engagement, or VOICE, office, established in April 2017). Additional examples of planning and programmatic actions that DHS took, or officials reported taking, in response to the executive orders are described in table 2. DOJ: Within DOJ, the Office of the Deputy Attorney General coordinated and oversaw DOJ’s initial implementation of key provisions in the executive orders, according to DOJ officials. Specifically, DOJ officials said that the Office of the Deputy Attorney General coordinated and collected information for executive order reporting requirements and participated in an interagency working group related to Executive Order 13780, and interagency meetings related to Executive Order 13767. However, DOJ components were responsible for implementing the provisions and ensuring that they met executive order requirements. In addition, DOJ assisted in the creation and issuance of various reports. For example, officials told us that DOJ provided data to State for a report on foreign assistance to the Mexican government, as required by Section 9 of Executive Order 13767. DOJ also jointly issued three reports with DHS in response to Executive Order 13768 Section 16, which included information regarding the immigration status of foreign-born individuals incarcerated under the supervision of the Federal Bureau of Prisons and in pre-trial detention in U.S. Marshals Service (USMS) custody. The Attorney General issued two memoranda providing policy and guidance related to Executive Orders 13767 and 13768 in April and May of 2017. The April 2017 memorandum contains guidance for federal prosecutors on prioritizing certain immigration-related criminal offenses. For example, the memorandum requires that federal prosecutors consider prosecution of foreign nationals who illegally re-enter the United States after prior removal, and prioritize defendants with criminal histories. 
The May 2017 memorandum addresses Executive Order 13768's provision directing DOJ and DHS to ensure that jurisdictions willfully prohibiting immigration status-related communication with the federal government (referred to as "sanctuary jurisdictions") are not eligible for federal grants. It requires jurisdictions to certify their compliance with 8 U.S.C. § 1373, under which a federal, state, or local government entity or official may not prohibit, or in any way restrict, the exchange of citizenship or immigration status information with DHS. Additionally, DOJ took a number of initial planning and programmatic steps to implement the executive orders. DOJ officials stated that some provisions outlined in the executive orders represent regular, ongoing agency activities and did not require any major changes to be implemented. For example, DOJ detailed Assistant United States Attorneys (AUSAs) and immigration judges to southern border districts and detention centers to assist in prosecutions and to conduct removal proceedings in response to the executive orders. However, while DOJ expanded these efforts, officials said that detailing immigration judges and AUSAs to the border districts is a regular practice, and not a new function created by the executive orders. Examples of actions that DOJ took, or officials reported taking, in response to the executive orders are described in table 3. State: State's Bureaus of Population, Refugees, and Migration and Consular Affairs led efforts to implement key provisions in Executive Order 13780. Several legal challenges and resulting federal court injunctions affected State's implementation of Executive Order 13780 and at times curtailed specific provisions. Initial State actions included conducting reviews and contributing to reports required by the order. For instance, while State generally suspended refugee travel for 120 days, the department, in conjunction with DHS and the Office of the Director of National Intelligence, conducted a review to determine what, if any, additional procedures should be implemented in the U.S. Refugee Admissions Program. According to State officials, the agencies provided a joint memorandum to the President in October 2017 that contained recommendations regarding resumption of the program, specific changes to refugee processing, and further reviews and steps that the interagency group should take. Additionally, State worked with DHS and the Office of the Director of National Intelligence to conduct a worldwide review. This review identified any additional information that the United States may need from each foreign country to adjudicate visas and other immigration benefit applications and ensure that individuals seeking to enter the United States do not pose a threat to public safety or national security. In July 2017, upon completion of this review, DHS, in consultation with State and other interagency partners, issued a report to the President cataloguing information needed from each country and listing countries not providing adequate information. State also issued a number of policies and guidance in response to the executive orders; however, guidance on how to implement certain provisions often changed due to legal challenges. For example, the Bureau of Population, Refugees, and Migration issued 23 iterations of refugee travel restrictions guidance to overseas refugee processing centers in response to federal litigation and budgetary uncertainties.
Similarly, the Secretary of State issued a number of cables to visa-issuing foreign posts on implementing travel restrictions for nationals of selected countries following court orders limiting the implementation of such restrictions. Executive Order 13780 contained several time-sensitive provisions directed to the Secretary of State. State focused on first addressing these provisions while working towards longer-term priorities outlined in the order. For instance, Executive Order 13780 Sections 2 and 6 established visa and refugee entry restrictions, which contained near-term timelines. State implemented these provisions, consistent with judicial decisions. Examples of planning and programmatic actions that State took, or officials reported taking, to implement Executive Order 13780 are described in table 4. For more information on specific planning or programmatic actions DHS, DOJ, and State have taken to implement the executive orders, see appendix I. The examples we provided for DHS, DOJ, and State represent initial actions and do not constitute an exhaustive list of actions that agencies have taken, or may take in the future, to fully implement the executive orders. Agency officials anticipate that implementation of the executive orders will be a multi-year endeavor comprising present and future reporting, planning, and other actions. For example, DOJ officials noted that many of the actions that they took to implement the orders will be ongoing and responsive to additional DHS actions. Specifically, DOJ bases the number of immigration judges and AUSAs detailed to the southern border districts on court caseloads driven by ICE. If ICE hires additional officers and attorneys and arrests and files charges of removability against more foreign nationals, then DOJ may need to staff additional judges and AUSAs to meet caseload needs. DHS, DOJ, and State Used Existing Fiscal Year 2017 Resources to Support Initial Executive Order Actions; DHS also Received and Expended Supplemental Funds Existing Fiscal Year 2017 Resources: Many of the initial actions agencies and components took in response to the executive orders fit within their existing fiscal year 2017 budget framework and aligned with their established missions. At the time the executive orders were issued in January and March of 2017, federal agencies were operating under existing continuing appropriations pending enactment of fiscal year 2017 appropriations; therefore the new administration’s border security and immigration priorities and policies had not yet been incorporated into the budget process. As a result, it is not always possible to disaggregate which fiscal year 2017 funds were used for implementation of the executive orders versus other agency activities. For example, while the orders call for a surge in hiring at CBP and ICE, these agencies regularly hire additional personnel to offset attrition or to meet budget hiring targets as part of their normal operations. We asked agencies to identify budgetary resources they used specifically to address the executive orders. In some cases agencies were able to quantify their expenditures; however in other cases they could not. For example, according to DOJ officials, the Executive Office for Immigration Review, which conducts immigration court proceedings, spent close to $2.4 million in existing funds to surge approximately 40 immigration judge positions to detention centers and the southwest border from March through October 2017 in response to Executive Order 13768. 
DHS’s USCIS reported expending approximately $4.2 million detailing asylum officers to immigration detention facilities along the southern border from February 2017 through February 2018. Additionally, as a result of the 120-day suspension of refugee admissions, State cancelled airline tickets for previously approved refugee applicants, which resulted in a cost of nearly $2.4 million in cancellation and unused ticket fees. State officials noted that, aside from the ticket costs, other budgetary costs associated with implementing the order are difficult to disaggregate from other processing activities. For example, any budgetary costs associated with refugees who were admitted on a case-by-case basis were absorbed into overseas processing budgets. In some cases, agencies also identified cost savings or avoidances. For example, State reported a total cost avoidance of over $160 million in fiscal year 2017, partially as a result of admitting fewer refugees than originally planned under the prior administration. While the costs above were part of agencies’ normal operations, we identified one case where Congress approved a DHS request to reprogram $20 million from existing programs to fund the planning and design of new physical barriers along the border, including prototype design and construction. Specifically, CBP reprogrammed $15 million from funds originally requested for Mobile Video Surveillance System deployments and $5 million from a border fence replacement project in Naco, Arizona. Additionally, we identified another case where DHS shifted funds and notified Congress, but determined Congressional approval for reprogramming was not required. Specifically, in response to Executive Order 13768, the Secretary of Homeland Security directed ICE to reallocate any and all resources used to advocate on behalf of potentially removable foreign nationals (except as necessary to comply with a judicial order) to the new VOICE office. As part of this effort, ICE’s Office of the Principal Legal Advisor determined that the creation of the VOICE office fell within ICE’s authority to carry out routine or small reallocations of personnel or functions. According to officials at DHS, DOJ, and State, there were no additional requests to reprogram or transfer funds to implement the executive orders. DHS budget officials stated that any future requests from DHS components to reprogram or transfer funds would typically be considered at the midway point in the budget cycle. All three agencies indicated that they used existing personnel to implement the executive orders and, in some cases, a substantial amount of time was spent preparing reports, planning to implement provisions, and responding to changes or new developments in the executive orders. For example, USCIS officials noted that the agency devoted a significant number of manpower hours to aligning USCIS priorities to the executive orders. ICE’s Office of Human Capital established a dedicated executive order hiring team to plan for the hiring surge directed by Executive Order 13768. Additionally, officials at State told us that personnel were diverted from normal operations in order to implement executive order policy actions and that there were overtime costs associated with some provisions. In most cases, agencies did not specifically track or quantify the amount of time spent on these efforts; however, ICE’s Office of Human Capital tracked the amount of time spent on planning for the potential surge in ICE hiring in its human resource data system. 
According to ICE information, ICE personnel charged approximately 14,000 regular hours (the equivalent of 1,750 8-hour days) and 2,400 overtime hours to this effort from January 2017 through January 2018. Fiscal Year 2017 Request for Supplemental Appropriations: In March 2017, the President submitted a budget amendment along with a request for $3 billion in supplemental appropriations for DHS to implement the executive orders and address border protection activities. In May 2017, an additional appropriation of approximately $1.1 billion was provided in response to this request, some of which DHS used to fund actions to implement the orders. For example, CBP received $65.4 million for hiring and, according to CBP officials, used these funds to plan and prepare for the surge in Border Patrol agents directed by Executive Order 13767. As of January 2018, CBP had obligated $18.8 million and expended $14.1 million of the $65.4 million it received. Additionally, ICE received $147.9 million for custody operations. At the end of fiscal year 2017, ICE had obligated and expended nearly all—over 99.9 percent—of the funds it received. Fiscal Years 2018 and 2019 Budget Requests and Fiscal Year 2018 Appropriations: Agency officials anticipate additional costs to further implement the executive orders and expect that certain provisions will require a multi-year effort. According to DHS officials, the agency expects to incorporate executive order implementation into its annual strategic and budgetary planning processes. DHS officials also noted that additional future planning and funds will be needed to fully implement actions in the orders. Agencies plan to continue to use their base budgets as well as request additional funds as needed to carry out their mission. Examples of DHS and DOJ fiscal year 2018 budget requests and appropriations to implement executive order provisions are listed below. CBP requested $1.6 billion and in the Consolidated Appropriations Act, 2018, received approximately $1.3 billion to build new and replace existing sections of physical barriers along the southern border. CBP also projected out-year funding for construction along certain segments of the border through 2024. ICE requested $185.9 million for approximately 1,000 new immigration officers and 606 support staff. ICE’s fiscal year 2018 appropriation included $15.6 million to support the hiring of 65 additional investigative agents, as well as 70 attorneys and support staff. DOJ requested approximately $7.2 million to hire additional attorneys in support of the orders. According to DOJ officials, DOJ received sufficient funds in the fiscal year 2018 budget to meet the hiring goal for attorneys. DHS and DOJ also requested funds for fiscal year 2019 to implement executive order provisions, examples of which are listed below. ICE requested $571 million to hire 2,000 immigration officers (including 1,700 deportation officers and 300 criminal investigators) and 1,312 support staff (including attorneys). DOJ requested $1.1 million for 17 paralegal support positions to support the additional attorneys requested in the fiscal year 2018 request. DOJ also requested approximately $40 million to hire new immigration judges and their supporting staff, citing an over 25 percent increase in new cases brought forward by DHS over the course of fiscal year 2017. DHS and DOJ components that were not directly tasked with responsibilities in the executive orders have also begun to plan for potential effects as agencies implement the orders. 
For example, as CBP and ICE work to meet the hiring surge in the orders, USMS anticipates a likely increase in the number of individuals who are charged with criminal immigration offenses and detained pending trial, resulting in a corresponding increase in its workload. USMS developed a multi-year impact statement that projected possible effects on USMS prisoner operations, judicial security, and investigative operations. According to DOJ officials, these efforts may inform USMS's budget requests and future year planning. For example, for fiscal year 2018, USMS requested approximately $9 million to hire 40 USMS deputies to support the executive orders. For fiscal year 2019, USMS projected that the administration's policies to increase immigration enforcement and immigration-related prosecutions could result in an increase of nearly 19,000 prisoners between fiscal year 2017 and fiscal year 2019 and a corresponding budget increase of approximately $105 million for immigration expenses. In addition, officials at the Federal Law Enforcement Training Centers stated that they coordinated with Border Patrol and ICE to assess future training needs and project future resource requirements based on the hiring assumptions in the executive orders. For example, the Federal Law Enforcement Training Centers requested an increase of $29 million in fiscal year 2018 and $25.7 million in fiscal year 2019 for tuition and training requirements to implement the executive orders, among other funding requested. Appendix I includes additional information on funds DHS, DOJ, and State have obligated, expended, or shifted to implement provisions of the executive orders.
Agency Comments
We provided a draft of this report to DHS, DOJ, and State for review and comment. DHS provided written comments, which are reproduced in appendix III; DOJ and State did not provide written comments. In its written comments, DHS discussed resources and legislative authorities the department believes it needs to carry out executive order requirements. All three agencies provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Homeland Security, the Attorney General, and the Secretary of State. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix IV.
Appendix I: Key Actions and Budgetary Costs Related to Implementing Executive Order 13767, 13768, and 13780 Provisions
Purpose
This appendix contains summaries of initial actions that the Department of Homeland Security (DHS), Department of Justice (DOJ), and Department of State (State) took to implement selected programmatic provisions of the President's executive orders on border security and immigration.
These orders include Executive Order 13767, Border Security and Immigration Enforcement Improvements; Executive Order 13768, Enhancing Public Safety in the Interior of the United States; and Executive Order 13780, Protecting the Nation from Foreign Terrorist Entry into the United States. These summaries also contain overviews of budget information related to implementing the executive orders, including obligations, expenditures, and budget requests where available, among other things. Table 5 lists the summaries and the executive order provisions on which they focus. Methodology for Selecting Executive Order Provisions We reviewed the executive orders and placed each provision directed at DHS, DOJ, and State into one of three categories: (1) analyses and reports, (2) policies, and (3) programs. We defined the analyses and reports category as executive order provisions that direct agencies to review and analyze data, policies, processes, and operational mission areas and produce reports. We defined the policies category as executive order provisions that establish new or modify existing policies, guidance, or processes related to border security or immigration. We defined the programs category as tangible, measurable, and quantifiable executive order provisions that implement policies. We confirmed our categorization with each agency, particularly for the programs category, since it was sometimes ambiguous whether provisions would lead to actions that were tangible, measurable, and quantifiable. Specifically, we reviewed agency documentation, such as a DHS inventory of tasks related to the executive orders, and interviewed agency officials. In some cases, we moved policy provisions to the programs category if agency efforts to implement the policy were underway. We prepared summaries for each executive order provision(s) we categorized as a program. For each program, we identified actions planned, completed, or underway at DHS, DOJ, and State as of March 2018 through reviewing documentation, interviewing agency officials, and submitting data collection instruments. For each program we also collected available budgetary costs—specifically, any funds requested, appropriated, obligated, and expended for executive order implementation from January 2017 through March 2018. We reviewed publicly available budget requests, congressional budget justifications, public laws, and budgetary data from agencies’ internal data systems. While we were able to identify certain funds directly attributed to the executive order provisions from these documents, it was not always possible to extract funds specifically meant for implementing the executive order provisions from more general budget increase requests, appropriations, or expenditures. To specifically identify funds used for the executive order provisions, we reviewed agency documentation, interviewed agency budget and program officials, and submitted written questions as necessary. In instances where we were unable to differentiate executive order provision funds from regular operating funds, we identified the larger account used for executive order funds and noted this distinction. We analyzed agency documentation on the policies, procedures, and processes for maintaining budgetary data and interviewed agency officials about their data collection practices to assess the reliability of these data. We determined that the data were sufficiently reliable for our purposes. 
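As an illustration of how this categorization works in practice (a hypothetical application of the definitions above, drawing on provisions discussed later in this appendix): a provision directing an agency to produce a study of border security conditions would fall in the analyses and reports category; a provision rescinding prior enforcement priority guidance would fall in the policies category; and a provision directing the construction of physical barriers along the southern border would fall in the programs category because it results in tangible, measurable actions.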
Executive Order: 13767
Provision Summary: Directs the Department of Homeland Security (DHS) to take steps to plan, design, and construct physical barriers along the southern border.
Key Agency(ies) Responsible: DHS: U.S. Customs and Border Protection (CBP)
Program Context: Statutes enacted from 1996 through 2007 authorize DHS, subject to certain criteria, to take necessary actions to construct physical barriers and roads to deter illegal crossings in border areas experiencing high levels of illegal entry. As we previously reported in 2017, from fiscal years 2005 through 2015, CBP increased the total miles of primary border fencing on the southwest border from 119 miles to 654 miles.
Action Overview
CBP has taken initial steps to plan, design, and construct new and replacement physical barriers on the southern border. For instance, CBP began the acquisition process for a Border Wall System Program, including developing plans to construct barrier segments and awarding eight task orders with a total value of over $3 million to design and construct barrier prototypes (four made from concrete and four made from non-concrete materials). CBP selected San Diego, California, as the first segment and plans to replace an existing 14 miles of primary and secondary barriers. DHS plans to use fiscal year 2017 funding for the replacement of the primary barrier, which it plans to rebuild to existing—as opposed to prototype—design standards. In January 2018, DHS leadership also approved cost, schedule, and performance goals for a second segment in the Rio Grande Valley in Texas, which will extend an existing barrier with 60 miles of new fencing. The Consolidated Appropriations Act, 2018, stated that fiscal year 2018 funds for primary pedestrian fencing are only available for operationally effective designs already deployed as of a date specified in the act, such as steel bollard fencing currently deployed in areas of the border. As of April 2018, CBP and DHS were evaluating what, if any, impact this direction will have on the department's plans, according to DHS officials. Additionally, DHS waived specific legal restrictions, such as environmental restrictions, in order to begin construction of barriers in the El Centro and San Diego Border Patrol sectors in California and the Santa Teresa, New Mexico, segment of the El Paso Border Patrol Sector. DHS also completed a categorical exclusion for replacement of a segment of existing barriers in El Paso, Texas.
Budget Overview
To fund the barrier prototypes, Congress approved a DHS request to reprogram $20 million in fiscal year 2017. Specifically, CBP reprogrammed $15 million from funds originally requested for Mobile Video Surveillance System deployments. The funds were originally part of the fiscal year 2015/2017 Border Security Fencing, Infrastructure, and Technology (BSFIT) Development and Deployment funding and were available due to a contract bid protest and delays associated with the Mobile Video Surveillance System Program. CBP also reprogrammed $5 million from funds originally intended for a fence replacement project in Naco, Arizona. The funds were part of fiscal year 2016 BSFIT Operations and Maintenance funding and were available as a result of unanticipated contract savings. The Naco Fence Replacement project will be completed within its original scope, according to CBP documentation. DHS also received an appropriation in fiscal year 2017 to replace existing fencing and to install new gates, and an appropriation in fiscal year 2018 for border barrier planning and design and to replace existing fencing and build new barriers.
As previously discussed, the Consolidated Appropriations Act, 2018, limited the use of funds provided for construction of new and replacement primary pedestrian fencing to previously deployed fencing designs. DHS has requested, but has not received, fiscal year 2019 funds for building new barriers. For more information regarding funding for future barrier construction projects along the southern border, see table 6. According to CBP documentation, the total cost to construct the Border Wall System Program over approximately 10 years is $18 billion. DHS headquarters conducted an independent cost estimate for the San Diego and Rio Grande Valley segments of the program, which CBP adopted as the program’s life cycle cost estimate. Acquisition and operations and maintenance costs for the Rio Grande Valley segment were separately described in other DHS documents and are shown in table 7 below. Provision: Sections 5 and 6 Sections 5 and 6 pertain to detention facilities and detention of foreign nationals for violations of immigration law, pending the outcome of their proceedings or to facilitate removal. The order directs the Department of Homeland Security (DHS) to take immediate actions to construct, operate, or control facilities to detain foreign nationals at or near the southern border, and assign asylum officers to immigration detention facilities, among other things. Additionally, the order directs the Department of Justice (DOJ) to immediately assign immigration judges to immigration detention facilities. DHS: U.S. Customs and Border Protection (CBP), U.S. Immigration and Customs Enforcement (ICE), U.S. Citizenship and Immigration Services (USCIS) DOJ: Executive Office for Immigration Review (EOIR) ICE and U.S. Border Patrol officials stated they consider custody determinations on a case by case basis. Additionally, officials from CBP’s Office of Field Operations stated they inspect all applicants for admission in accordance with the Immigration and Nationality Act, as prescribed by the executive order and a February 2017 memorandum the Secretary of Homeland Security issued. ICE, through its Enforcement and Removal Operations directorate, manages the nation’s immigration detention system, which houses foreign nationals detained while their immigration cases are pending or after being ordered removed from the country. DOJ’s EOIR is responsible for conducting immigration court proceedings, appellate reviews, and administrative hearings, pursuant to U.S. immigration law and regulation. ICE initially intended to increase bed capacity at detention facilities in order to accommodate potential surges in apprehensions that could result from implementation of the executive order. According to ICE officials, ICE identified 1,100 additional beds available at detention facilities already in use. However, officials also stated that, as of February 2018, ICE has not needed to use these additional beds due to a decrease in the number of apprehensions. Additionally, ICE officials indicated no acquisition actions were needed because contracts and agreements are in place at existing detention facilities and additional beds are available for excess capacity. CBP and ICE are continuously monitoring bed space requirements based on migration volume. According to ICE officials, as of February 2018, ICE had no additional actions planned to increase bed capacity. 
DHS’s Office of Strategy, Policy and Plans convened a cross-component meeting to discuss detention standards, which govern the conditions of detainee confinement, according to DHS officials. ICE officials reported that ICE is currently re-writing its national detention standards (the standards applicable at most county jails housing immigration detainees). According to officials, the new standards are intended to make it easier for local jurisdictions to comply with standards without completely re-writing their existing policies to conform to ICE’s requirements. USCIS officials told us they began working with ICE to identify where additional asylum officers were needed based on workload needs and space availability as soon as the executive order was issued in January 2017. From February 2017 through February 2018, USCIS deployed between 30 and 64 asylum officers during any given week along the southern border and continues to do so in response to caseload needs. USCIS continues to monitor and periodically adjust asylum officer staffing requirements, according to USCIS officials. DOJ officials stated that DOJ components coordinated with ICE to identify removal caseloads along the southern border that were large enough to warrant additional immigration judges. According to DOJ officials, from March 2017 through October 2017, EOIR detailed approximately 40 immigration judge positions, both in person and by video teleconference, to 19 DHS detention facilities, including many along the southern border, in response to the executive order. DOJ officials further explained that as caseloads fluctuated, some of the details ended, some in- person details were converted to video teleconference, and some details were converted to permanent immigration judge positions. EOIR often details immigration judges for operational reasons; however officials noted that the scale of this detail mobilization was larger because of the executive order. Fiscal Year 2017: Because Executive Orders 13767 and 13768 were issued during fiscal year 2017, DHS submitted a budget amendment and requested supplemental appropriations to address the needs of the department in support of executive order implementation. The request proposed funding to increase daily immigration detention capacity to 45,700 detention beds by the end of fiscal year 2017. The request stated that the detention capacity was necessary to implement the administration’s immigration enforcement policies for removing foreign nationals illegally entering or residing in the United States. ICE: On May 5, 2017, ICE received a supplemental appropriation of $236.9 million for enforcement and removal operations, including $147.9 million for custody operations, $57.4 million for alternatives to detention, and $31.6 million for transportation and removal operations. According to ICE documentation, almost all of the funds from that additional appropriation were obligated and expended at the conclusion of fiscal year 2017, as shown in table 8. USCIS: USCIS documentation estimated that it expended at least $4.2 million detailing asylum officers to immigration detention facilities along the southern border from February 2017 through February 2018. Fiscal Year 2018: The President’s budget requested an additional $1.5 billion above the 2017 annualized continuing appropriations level, for expanded detention, transportation, and removal of foreign nationals who enter, or remain in, the United States, in violation of U.S. immigration law. 
As part of the $1.5 billion requested, the ICE congressional budget justification requested $1.2 billion in additional funds to support an average daily population (ADP) of detainees of 51,379—a 49 percent increase over fiscal year 2016 ADP (34,376). The request stated that Executive Order 13768 and subsequent department guidance were expected to drive increases in the ADP due to the increase in ICE law enforcement officers and an expected increase in the average length of stay at detention facilities. ICE also requested funds for transportation and alternatives to detention. In fiscal year 2018, ICE was appropriated $4.1 billion to support enforcement and removal operations. According to DHS officials, the Consolidated Appropriations Act, 2018, provides funds for an ADP of 40,520 total beds, 10,859 lower than requested. Fiscal Year 2019: The President’s budget requested $2.5 billion for detention and removal capacity. As part of the $2.5 billion requested, ICE’s congressional budget justification states $2.3 billion will support an ADP of 47,000. According to the ICE congressional budget justification, the number of beds will sustain the fiscal year 2017 ADP level (38,106) and provide additional detention capacity stemming from the continued implementation of Executive Order 13768. ICE also requested funds for transportation and alternatives to detention. Prior GAO Work: Our prior work on immigration detention examined ICE’s formulation of its budget request and cost estimate for detention resources. In April 2018, we found errors and inconsistencies in ICE’s calculations for its congressional budget justifications and bed rate model. Specifically, we found that ICE made errors in its budget justifications, underestimated the actual bed rate, and its methods for estimating detention costs did not meet the characteristics of a reliable cost estimate. We also found ICE did not document its methodology for its projected ADP. We recommended that ICE assess and update its adult bed rate and ADP methodology and take steps to ensure that its budget estimating process fully addresses cost estimating best practices. DHS concurred with our recommendations and plans to take actions in response to them. Fiscal Year 2017: DOJ documentation showed it expended approximately $2.4 million detailing immigration judge positions to immigration detention facilities from March 2017 through October 2017, either through video teleconferencing, or in-person, to adjudicate removal proceedings. EOIR officials explained the funds used were unobligated balances carried over from a prior fiscal year. Fiscal Year 2018: For fiscal year 2018, DOJ requested an increase of $75 million to hire 75 additional immigration judge teams to enhance public safety and law enforcement. According to DOJ officials, the agency received sufficient funds in the fiscal year 2018 budget to meet this hiring goal. Fiscal Year 2019: The fiscal year 2019 President’s budget also requests an increase of $40 million for 75 new immigration judge teams at EOIR and nearly $40 million for 338 new prosecuting attorneys at ICE to ensure immigration cases are heard expeditiously. According to the President’s budget, these investments are critical to the prompt resolution of newly-brought immigration charges and to reduce the 650,000 backlog of cases currently pending in the immigration courts. 
EOIR’s fiscal year 2019 congressional budget justification includes a program increase totaling almost $65 million to provide funding for immigration judges and support staff, as well as information technology efforts. This increase supports initiatives that implement Presidential and Attorney General priority areas, among other things. Provision Summary: Section 11 directs the Department of Homeland Security (DHS) to ensure that parole is exercised on a case- by-case basis in accordance with existing statutory criteria, and that asylum referrals and credible and reasonable fear determinations are conducted in a manner consistent with relevant statute and regulation. Key Agency(ies) Responsible: Program Context: USCIS has discretion to authorize parole for urgent humanitarian reasons or significant public benefit, which it uses to allow an individual, who may be inadmissible or otherwise ineligible for admission to come to the United States for a temporary period. USCIS asylum officers adjudicate asylum applications filed with USCIS, and conduct credible and reasonable fear screenings to determine if certain removable foreign nationals may be eligible to seek particular forms of relief or protection in immigration court. In fiscal year 2019, USCIS requested a total increase of $287.5 million for all programs, projects, and activities to support changes in operational requirements driven by changes to benefit request receipt volumes and complexity of work, including implementing the executive orders. Additional Funds Saved and Expended: According to USCIS officials, USCIS saved approximately $274,000 from not renewing contracts to administer the Central American Minors Parole Program. According to USCIS documentation, USCIS expended approximately $70,300 to deploy FDNS officers along the southern border from March 2017 to February 2018. Executive Order: 13767 and 13768 Provision Summary: Key Agency(ies) Responsible: Program Context: CBP and ICE hiring demands are driven by various factors, such as national security objectives, executive-level policies, legislative mandates, and component-specific operational requirements. Border Patrol agents are to respond to, and interdict, cross-border threats and ICE officers are responsible for apprehending individuals within the United States who may be removable for various reasons, including entering the country illegally or being convicted of certain crimes. Action Overview DHS has taken a number of actions to implement the executive order hiring provisions. Specifically, DHS requested and the Office of Personnel Management approved a number of changes to assist DHS and its components with the executive order hiring directives. These changes include granting CBP and ICE direct hire authority and a special salary rate for polygraphers, among others. DHS’s Office of the Chief Human Capital Officer and DHS components’ human capital offices also began additional hiring planning, such as refining component-level hiring plans, coordinating on potential joint hiring events, and targeting specific recruitment efforts, such as military veterans. CBP and ICE have also taken the following additional actions: CBP: In November 2017, CBP awarded a contract not to exceed $297 million to Accenture Federal Service LLC to help with law enforcement hiring for all CBP components. 
The contract is structured so the contractor receives a set dollar amount for each law enforcement officer hired—80 percent for each final offer letter and 20 percent for each law enforcement officer who enters on duty. The contractor is to assist CBP in hiring 7,500 qualified agents and officers, including 5,000 Border Patrol agents, 2,000 CBP officers, and 500 Air and Marine Interdiction agents over 5 years. CBP expects Accenture to be fully operational and effectively provide surge hiring capacity by June 2018, according to CBP officials. ICE: According to ICE Office of Human Capital (OHC) officials, OHC is ensuring policies and procedures are in place so that ICE is ready to begin hiring additional immigration officers and support staff if funds are appropriated. In January 2018, ICE OHC also issued a contract solicitation for recruitment, market research, data analytics, marketing, hiring, and onboarding activities. ICE OHC sought to procure comprehensive hiring and recruitment services to assist ICE OHC in meeting the demands required to achieve the executive order’s hiring goals and develop efficiencies to current OHC processes. ICE aimed to have a similar pricing structure as CBP’s Accenture contract, according to the solicitation. Specifically, according to the solicitation, the yet to be selected contractor would receive a set dollar amount for each frontline officer hired–80 percent for each preliminary offer letter and 20 percent for each frontline officer who enters on duty. The contractor would assist ICE in hiring 10,000 law enforcement agents, including 8,500 deportation officers and 1,500 criminal investigators. It would also assist in the hiring of approximately 6,500 support personnel positions. In May 2018, the contract solicitation was cancelled; however, the government anticipates re-soliciting the requirement in fiscal year 2019. According to the contract cancellation notice and an ICE OHC official, DHS cancelled the contract due to delays associated with the fiscal year 2018 budget and hiring timelines, as well as the limited number of additional ICE positions funded in the fiscal year 2018 budget. In the interim, ICE is partnering with the Office of Personnel Management to meet the executive order’s hiring goals and develop efficiencies to current OHC processes, according to ICE officials. Because Executive Orders 13767 and 13768 were issued during fiscal year 2017, DHS submitted a budget amendment and requested supplemental appropriations to help address the needs of the department in support of executive order implementation. The request included funding for DHS agencies to begin building the administrative capacity necessary to recruit, hire, train and equip the additional 5,000 Border Patrol agents and 10,000 ICE officers. The Federal Law Enforcement Training Centers (FLETC), which provides training to law enforcement professionals who protect the homeland, including any new ICE and CBP personnel hired as result of the executive orders, also requested funds to support these efforts. On May 5, 2017, CBP received an additional appropriation of $65.4 million to improve hiring processes for Border Patrol agents, CBP officers, and Air and Marine Operations personnel, and for officer relocation enhancements. Of the $65.4 million appropriated in fiscal year 2017, CBP obligated $18.8 million and expended $14.1 million as of January 2018. 
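As a rough illustration of the remaining balances implied by these figures (the amounts are rounded, so the differences are approximate and are not figures CBP itself reported): of the $65.4 million appropriated, about $46.6 million ($65.4 million minus $18.8 million) had not yet been obligated as of January 2018, and about $4.7 million ($18.8 million minus $14.1 million) had been obligated but not yet expended.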
While ICE also received additional funding for custody operations, alternatives to detention, and transportation and removal, it did not receive supplemental funds in fiscal year 2017 specifically for hiring. DHS also requested funds for CBP, ICE, and FLETC hiring and training in fiscal year 2018 and fiscal year 2019. For additional details, see table 9. According to FLETC officials, the total average cost to provide basic law enforcement training varies by agencies and position, as shown in table 10. FLETC officials noted their partners also provide additional training unique to their missions, which is not included in the costs below. Action Overview ICE officials reported expediting review of pending 287(g) requests and approved 46 additional state and local jurisdictions for the program from February 2017 through March 2018, bringing the total to 76 law enforcement agencies in 20 states. See figure 2 for a map of additional jurisdictions approved. Section 10 and Section 8 of Executive Orders 13767 and 13768, respectively, direct the Department of Homeland Security (DHS) to engage with state and local entities to enter into agreements under Section 287(g) of the Immigration and Nationality Act. DHS: U.S. Immigration and Customs Enforcement (ICE) The Illegal Immigration Reform and Immigrant Responsibility Act of 1996 added Section 287(g) to the Immigration and Nationality Act, which authorizes ICE to enter into agreements with state and local law enforcement agencies, permitting designated state and local officers to perform immigration law enforcement functions. According to ICE officials, ICE also conducted outreach with state and local officials and identified potential law enforcement partners with whom to enter into possible future 287(g) agreements. U.S. Customs and Border Protection (CBP) officials stated that they agreed to support ICE’s program expansion efforts and provided hundreds of viable state and local law enforcement referrals to ICE to assist with this effort. For example, CBP reviewed data and conducted a gap analysis, to include a survey, to identify potential law enforcement partners for future 287(g) memorandums of agreement. CBP officials further noted that they introduced new language into Operation Stonegarden grant guidance that allows the use of grant funding to support CBP-identified, 287(g) law enforcement operational activities. According to CBP and ICE officials, efforts to develop a 287(g) enforcement model that can be used for this purpose are pending. According to ICE officials, the agency is considering developing a program under which designated local law enforcement officers would be trained and authorized to serve and execute administrative warrants for individuals who are in violation of U.S. immigration laws at the time they are released from state criminal custody. ICE officials indicated that program participants would have limited authority under 287(g). For example, they would not interview individuals regarding nationality and removability, lodge detainers, or process individuals for removal. ICE has not yet finalized the program and it may evolve as ICE further develops the program, according to ICE officials. ICE is also leveraging an existing Basic Ordering Agreement, a procurement tool to expedite acquisition of a substantial, but presently unknown, quantity of supplies or services, according to ICE officials. 
A Basic Ordering Agreement is not a contract but rather a written instrument of understanding, negotiated between ICE and state and local jurisdictions, to house detainees upon ICE's issuance and their acceptance of an Immigration Detainer and either a Warrant for Arrest of Alien or Warrant of Removal. For any order placed under the agreement, ICE will reimburse the provider, such as a state or local jurisdiction, for up to 48 hours of detention, under applicable regulations. The rate will be fixed at $50.00 for up to 48 hours of detention. No payment will be made for any detention beyond 48 hours. The Secretary of Homeland Security vested authority in CBP to accept state services to carry out certain immigration enforcement functions pursuant to Title 8, United States Code, Section 1357(g). According to CBP officials, CBP also joined a 287(g) Program Advisory Board, which reviews and assesses ICE field office recommendations about pending 287(g) applications. Participation in the 287(g) program is expected to expand further in fiscal years 2018 and 2019, according to ICE, and ICE anticipates a further increase in the number of 287(g) memorandums of agreement in those years. In fiscal year 2018, ICE requested $24.3 million for ICE 287(g) program funding. According to the explanatory statement accompanying the Consolidated Appropriations Act, 2018, the 287(g) program was fully funded at the requested level. In fiscal year 2019, ICE requested $75.5 million for ICE 287(g) program funding.
Executive Order: 13767 and 13768
Provision: Sections 13 and 11
Provision Summary: Section 13 of Executive Order 13767 directs the Department of Justice (DOJ) to establish prosecution guidelines and allocate appropriate resources to ensure that federal prosecutors prioritize offenses with a nexus to the southern border. Section 11 of Executive Order 13768 directs DOJ and the Department of Homeland Security (DHS) to develop and implement a program to ensure that adequate resources are devoted to prosecuting criminal immigration offenses, and to develop cooperative strategies to reduce the reach of transnational criminal organizations and violent crime.
Key Agency(ies) Responsible: DOJ: Executive Office for United States Attorneys (EOUSA); DHS: Immigration and Customs Enforcement (ICE)
Program Context: EOUSA provides executive and administrative support for United States Attorneys and Assistant United States Attorneys (AUSAs). AUSAs conduct trial work, as prosecutors, in cases in which the United States is a party, including prosecution of criminal immigration offenses.
Action Overview
According to DOJ officials, southern border districts developed guidelines for prioritizing misdemeanor cases involving individuals illegally entering the United States for the first time. However, according to these officials, southern border districts developed these guidelines based on an initial high volume of apprehensions, and when apprehensions decreased the guidelines were no longer necessary and never published. EOUSA also detailed AUSAs to southern border districts—six to the Western District of Texas and the District of Arizona, and two each to the Southern District of California, the District of New Mexico, and the Southern District of Texas—for a total of 12 details, according to DOJ officials. The first round of details lasted for 6 months, and EOUSA extended the details of one AUSA at each southern border district for an additional 6 months. DOJ officials told us that EOUSA will continue to evaluate the need for additional details along the southern border based on the needs of the districts, as determined by the number of DHS apprehensions.
According to DOJ officials, implementation of these provisions is ongoing and will depend largely upon DHS executive order actions—for instance, as DHS hires more enforcement personnel, criminal immigration cases may increase, which could spur a need for more AUSAs. ICE litigates charges of removability against foreign nationals and conducts criminal investigations, including investigations of immigration fraud. The Secretary of Homeland Security released a memorandum with guidance on the enforcement of immigration laws in the United States on February 20, 2017. In response to this memorandum, ICE's Office of the Principal Legal Advisor sent guidance to its attorneys directing them to prioritize legal services supporting the timely removal of foreign nationals in accordance with Executive Order 13768. The guidance directed ICE to review all cases previously administratively closed based on prosecutorial discretion to determine whether the basis for closure remains appropriate under DHS's enforcement priorities. The guidance also directed ICE to coordinate with the Executive Office for Immigration Review to ensure that foreign nationals charged as removable and who meet the enforcement priorities remain on active immigration court dockets and that their cases are completed as expeditiously as possible. In response to the executive orders, ICE Homeland Security Investigations officials stated that the agency began to focus more of its resources on the investigation and criminal prosecution of immigration fraud. ICE Homeland Security Investigations added five new Document and Benefit Fraud Task Forces throughout the nation and directed field offices to increase staffing of task forces. Additionally, ICE is in the process of combining five Benefit Fraud Units into an immigration fraud center—the National Lead Development Center—that will serve as a new centralized entity that will refer cases to the task forces for enforcement action. A summary of DOJ budget increase requests, appropriations, and expenditures related to prosecution priorities in the executive orders that we identified can be found in table 11. The fiscal year 2018 President's budget request included $19.3 million for 195 attorney positions in ICE's Office of the Principal Legal Advisor. According to ICE officials, while the Consolidated Appropriations Act, 2018, included funds for 70 positions for the Homeland Security Investigations Law Division, it did not include funds for additional attorney positions for immigration litigation within the Office of the Principal Legal Advisor. The fiscal year 2019 President's budget request included $39.7 million for additional attorney resources in ICE's Office of the Principal Legal Advisor.
Provision: Sections 5 and 10
Provision Summary: Sections 5 and 10 direct the Department of Homeland Security (DHS) to take action related to immigration enforcement. Specifically, Section 5 directs DHS to prioritize the removal of certain categories of removable foreign nationals. Section 10 directs DHS to terminate the Priority Enforcement Program (PEP) and reinstitute Secure Communities, among other things.
Key Agency(ies) Responsible: DHS: U.S. Immigration and Customs Enforcement (ICE), U.S.
Customs and Border Protection (CBP) Under PEP (from 2015 to 2017), ICE issued a request for detainer (with probable cause of removability) or information or transfer, for a priority removable individual, such as one posing a threat to national security or public safety, including a foreign national convicted of a felony, among others, under DHS’s former tiered civil enforcement categories. Under Secure Communities, ICE may issue detainers for removable individuals charged, but not yet convicted, of criminal offenses, in addition to individuals subject to a final order of removal whether or not they have a criminal history. Pursuant to Executive Order 13768, the Secretary of Homeland Security terminated PEP and reinstituted the Secure Communities program. As such, DHS is no longer required to utilize a tiered approach to civil immigration enforcement with direction to dedicate resources to those deemed of highest priority. Instead, under Section 5 of the executive order, various categories of removable individuals are general priorities for removal, and DHS personnel may initiate enforcement actions against all removable persons they encounter. Further, the DHS memorandum implementing this executive order allows ICE, CBP, and USCIS to allocate resources to prioritize enforcement activities within these categories, such as by prioritizing enforcement against convicted felons or gang members. As part of this effort, ICE reported it reviewed policies, regulations, and forms relevant to enforcement priorities. ICE subsequently rescinded prior enforcement priority guidance and issued new guidance directing application of the new approach to immigration enforcement prioritization. Additionally, ICE eliminated existing forms and created a new form to place detainers on foreign nationals who have been arrested on local criminal charges and for whom ICE possesses probable cause to believe that they are removable from the United States, so that ICE can take custody of such individuals upon release. According to ICE officials, more than 43,300 convicted criminal aliens have been identified and removed through Secure Communities from January 25, 2017 through the end of fiscal year 2017. Pursuant to Executive Order 13768 and in accordance with the Secretary of Homeland Security’s memorandum entitled, Enforcement of the Immigration Laws to Serve the National Interest, ICE’s Enforcement and Removal Operations (ERO) expanded the use of the Criminal Alien Program (CAP) by increasing the use of Criminal Alien Program Surge Enforcement Team (CAPSET) operations, traditional CAP Surge operations, and the Institutional Hearing Program. Specifically, ICE took the following actions: ICE ERO conducted four CAPSET operations in Louisiana, Georgia, and California in fiscal year 2017, resulting in a total of 386 encounters, 275 detainers, and 261 charging documents issued, according to ICE documentation. ICE ERO field offices conducted CAP Surge operations, which concluded in March 2017. According to ICE documentation, the operations collectively resulted in 2,061 encounters, 668 arrests, 1,307 detainers issued, and 614 charging documents issued. ICE, along with the Department of Justice’s Executive Office for Immigration Review and the Federal Bureau of Prisons, expanded the number of Institutional Hearing Program sites by nine, from 12 to 21. As of January 22, 2018, five of the nine Institutional Hearing Program expansion sites were operational. 
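To put these operational statistics in rough proportion (these are derived ratios based on the figures ICE reported, not percentages ICE itself published): detainers were issued in roughly 63 percent of CAP Surge encounters (1,307 of 2,061) and in roughly 71 percent of CAPSET encounters (275 of 386).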
ICE officials reported that ICE also detailed over 30 percent more officers (79 officers) to support Community Shield efforts, an international law enforcement initiative to combat the growth and proliferation of transnational criminal street gangs, prison gangs, and outlaw motorcycle gangs throughout the United States. According to ICE officials, CAP used existing resources in fiscal year 2017 to support the efforts required by Executive Order 13768. ICE also requested funds in fiscal years 2018 and 2019 for CAP. Specifically, ICE stated in its fiscal year 2018 and 2019 congressional budget justifications that CAP performs its duties in accordance with immigration enforcement priorities defined by Executive Order 13768. In fiscal year 2018, ICE requested $412.1 million for CAP. The Consolidated Appropriations Act, 2018, funded $319.4 million for CAP, about $92.6 million less than requested.
Provision Summary: Section 9 directs the Department of Justice (DOJ) and the Department of Homeland Security (DHS) to ensure that jurisdictions in willful noncompliance with 8 U.S.C. § 1373 (section 1373) are ineligible to receive federal grants. The section also directs DOJ to take appropriate enforcement action against any entity that violates section 1373, or which has in effect a policy, statute, or practice that prevents or hinders the enforcement of federal law.
Key Agency(ies) Responsible: DOJ and DHS
Program Context: A DOJ review of certain jurisdictions' compliance with 8 U.S.C. § 1373 resulted in a May 2016 report finding that 10 jurisdictions raised compliance concerns. In response, DOJ placed a special condition on certain fiscal year 2016 grant awards, requiring recipients to submit an assessment of their compliance with section 1373.
In November 2017, as part of the section 1373 compliance effort predating Executive Order 13768, DOJ sent letters to 29 jurisdictions expressing concern that they may not be in compliance with section 1373, and requesting responses regarding compliance. In January 2018, DOJ sent 23 follow-up demand letters to jurisdictions seeking further documents to determine whether they are unlawfully restricting information sharing by their law enforcement officers with federal immigration authorities, and stating that failure to respond will result in records being subpoenaed. The Attorney General determined that Section 9 will be applied solely to DOJ or DHS federal grants for jurisdictions willfully refusing to comply with section 1373. Under section 1373, a federal, state, or local government entity or official may not prohibit, or in any way restrict, the exchange of information regarding citizenship or immigration status with DHS. ICE developed weekly Declined Detainer Outcome Reports detailing jurisdictions with the highest volume of declined detainers and a list of sample crimes suspected or determined to have been committed by released individuals. According to ICE officials, ICE identified data processing errors and incorrect detainer information and is working to correct these issues. ICE officials noted that they temporarily suspended the reports, and have not yet determined a specific time frame for future publications. DHS reviewed all DHS grant programs to determine which programs could be conditioned to require compliance with section 1373 and plans to provide this information to the Office of Management and Budget, according to DHS officials. DOJ has not obligated, expended, or requested any additional funds to implement Executive Order 13768, section 9(a).
The fiscal year 2019 President’s budget proposed to amend the Illegal Immigration Reform and Immigrant Responsibility Act of 1996 to condition DHS and DOJ grants and cooperative agreements on state and local governments’ cooperation with immigration enforcement. Section 2 directed multiple agencies, including the Department of State (State) and Department of Homeland Security (DHS), to conduct a worldwide review to identify any additional information needed from each foreign country to adjudicate immigration benefit applications and ensure that individuals applying for a visa or other immigration benefit are not a security or public safety threat. It also directed the agencies to send a report of the findings of the worldwide review to the President. This section further established visa entry restrictions applicable to foreign nationals from Iran, Libya, Somalia, Sudan, Syria, and Yemen for a 90-day period. It also stated that agencies, including State and DHS, could continue to submit additional countries for inclusion in visa entry restrictions. Section 5 required agencies, including State, DHS, and the Department of Justice (DOJ), to develop a uniform baseline for screening and vetting to identify individuals seeking to enter the United States on a fraudulent basis or who support terrorism or otherwise pose a danger to national security or public safety. practices based on the criteria identified above. In July 2017, State directed its posts to inform their respective host governments of the new information-sharing criteria and request that host governments provide the required information or develop a plan to do so. CA directed posts to engage more intensively with countries whose information-sharing and identity-management practices were preliminarily deemed “inadequate” or “at risk” and submit an assessment of mitigating factors or specific interests that should be considered in the deliberations regarding any travel restrictions. According to officials, State and its posts will continue to engage with foreign countries to address information-sharing and identify management deficiencies. Key Agency(ies) Responsible: State: Bureau of Consular Affairs (CA), DHS, and DOJ CA provides consular services in reviewing and adjudicating visa applications for those seeking to enter the United States. DHS adjudicates visa petitions, and DHS and DOJ also play roles in screening and vetting applicants. DHS and DOJ, along with State, are responsible for implementing the enhanced screening and vetting protocols established under the executive order. June 29, 2017 through September 24, 2017. During the implementation period, if an applicant was found ineligible for a visa on other grounds unrelated to the executive order, such as prior criminal activity or immigration violations, the applicant would be refused the visa on those grounds, according to State officials. If the applicant was found to be otherwise eligible for the visa and did not qualify for an exemption or a waiver under the executive order, he or she would be refused on the basis of the executive order. CA sent several cables to posts with guidance on implementing the 90-day travel restriction, including operational guidance and updated guidance following court decisions. CA also offered trainings to consular posts on implementation of the order. 
A series of legal challenges ultimately led to the June 26, 2017 Supreme Court decision prohibiting enforcement of entry restrictions against foreign nationals who could credibly claim a bona fide relationship with a person or entity in the United States. On September 24, 2017, pursuant to section 2(e) of Executive Order 13780, the President issued Presidential Proclamation 9645, which established conditional restrictions on U.S. entry for certain categories of nationals from Chad, Iran, Libya, North Korea, Syria, Venezuela, Yemen and Somalia, for an indefinite period. According to State officials, State, DHS, DOJ, and other agencies formed a working group and developed a uniform baseline for screening and vetting standards and procedures to ensure ineligible individuals are not permitted to enter the United States, and are implementing the new requirements. The working group conducted a review of the visa screening and vetting process and established uniform standards for (1) applications, (2) interviews, and (3) system security checks, including biographic and biometric checks. According to State officials, for applications, the group identified data elements against which applicants are to be screened and vetted. For interviews, the working group established a requirement for all applicants to undergo a baseline uniform national security and public safety interview. The working group modeled its interview baseline on elements of the refugee screening interview. As of June 2017, State collected most of the data elements online for immigrant and nonimmigrant visas, according to State officials. The President issued a memorandum on February 6, 2018, directing DHS, in coordination with State, DOJ, and the Office of the Director of National Intelligence to establish a national vetting center to coordinate agency vetting efforts to identify individuals who pose a threat to national security, border security, homeland security, and public safety. The National Vetting Center will be housed in DHS, and will leverage the capabilities of the U.S. intelligence community to identify, and prevent entry of, individuals that may pose a threat to national security. On February 14, 2018, the Secretary of Homeland Security appointed a director for the National Vetting Center. The Secretary also delegated authorities of the National Vetting Center to the Commissioner of U.S. Customs and Border Protection. State officials said that personnel worked overtime to implement Section 2 and the following Presidential Proclamation, but did not identify monetary costs or budget increases associated with implementation. DHS also dedicated several full-time staff positions to developing and implementing enhanced screening and vetting protocols, and DHS employees worked overtime to implement these provisions, according to officials. Section 6 directed the Department of State (State) to suspend travel of refugees seeking to enter the United States, and the Department of Homeland Security (DHS) to suspend adjudications on refugee applications, for 120 days. Section 6 further ordered that during the 120- day period, State, together with DHS, and the Office of the Director of National Intelligence review the refugee application and adjudication process to identify and implement additional procedures to ensure that refugees seeking entry into the United States under the United States Refugee Admissions Program (USRAP) do not pose a threat to U.S. security and welfare. 
This section also capped annual refugee admissions at 50,000 in fiscal year 2017. State generally suspended travel of refugees into the United States from June 29, 2017 through October 24, 2017. State coordinated with DHS, the Office of the Director of National Intelligence, and other security vetting agencies on the 120-day review of the USRAP application and adjudication process to determine what additional procedures should be used to ensure that individuals seeking admission as refugees do not pose a threat to the security and welfare of the United States, according to State officials. Upon completion of the review, the agencies submitted a joint memorandum to the President. The United States admitted 53,716 refugees in fiscal year 2017, according to State officials. Throughout fiscal year 2017, State issued guidance that steered the refugee admissions program toward different refugee arrival goals at different times due to court decisions and budget considerations. Prior to the issuance of Executive Order 13769—which, after largely being blocked nationwide by a federal court injunction, was revoked and replaced by Executive Order 13780—PRM operated at the rate of 110,000 refugees per year. After the issuance of Executive Orders 13769 and 13780, PRM officials noted that at times, State made no bookings for refugee arrivals, and also made bookings based on 50,000 arrivals, as well as 110,000 arrivals.
Key Agency(ies) Responsible: State: Bureau of Population, Refugees, and Migration (PRM); DHS: U.S. Citizenship and Immigration Services (USCIS)
Program Context: A series of legal challenges and resulting court injunctions culminated in the June 26, 2017, Supreme Court order limiting State's implementation of this section to prospective refugees without bona fide ties to the United States. These legal challenges, together with budget uncertainties, meant that State could not implement the refugee travel suspension or the 50,000-person admissions cap on the timeline set in the executive order. Figure 3 below shows key milestones related to this section of the order. The USRAP resettles refugees to the United States in accordance with a refugee admission ceiling set by the President each year. PRM is responsible for coordinating and managing the USRAP. USCIS is responsible for adjudicating refugee applications.
According to USCIS officials, USCIS is implementing new requirements and vetting procedures for refugees. For example, these officials stated that USCIS is accessing more detailed biographical information earlier in the vetting process. Additionally, these officials noted that USCIS's Fraud Detection and National Security unit is conducting additional reviews of applicants, including checking social media and other information against various databases. USCIS officials further noted that USCIS's International Operations office sent guidance to the field that established the logistical requirements of the new procedures. As of April 2018, USCIS was finalizing further guidance and training officers for the enhanced review and vetting procedures, according to USCIS officials. State officials said that State and DHS executed four categories of exemptions during the 120-day USRAP suspension: a Congolese woman with a life-threatening illness and her family; 29 unaccompanied refugee minors; 17 Yezidis and other religious minorities in northern Iraq who had been victims of ISIS; and 53 individuals on Nauru and Manus Islands.
Provision Summary: Section 9 directs the Department of State (State) to suspend the Visa Interview Waiver Program, subject to certain exceptions.
To support this, the provision also directs State to expand the Consular Fellows program so that visa wait times are not unduly affected. The provision also directs State to make language training available to Consular Fellows outside of their core linguistic abilities.
Key Agency(ies) Responsible: State: Bureau of Consular Affairs (CA)
Program Context: Consular Fellows serve in U.S. embassies and consulates overseas and primarily adjudicate visa applications for foreign nationals. The Visa Interview Waiver Program formerly waived in-person interviews for certain categories of visa applicants.
Action Overview
State extended Consular Fellows' Limited Non-Career Appointments by 12 months. In October 2017, State approved extending offers for follow-on 60-month Limited Non-Career Appointments to Consular Fellows who complete a successful initial 60-month appointment. State officials noted the first officer to accept a follow-on appointment was sworn in during April 2018. CA and State's Bureau of Human Resources updated the CA Limited Non-Career Appointments handbook to include an implementation plan for extending such appointments and, according to officials, providing language training outside of the applicant's area of core linguistic ability. In early 2017, State streamlined the application process for Consular Fellows and realigned resources to expedite their security clearance process, according to CA officials. From February 2017 through February 2018, State hired 134 new Consular Fellows, according to CA officials. Additionally, State officials said that they expect to hire 120 more Consular Fellows for the remainder of fiscal year 2018. In August 2017, the Foreign Service Institute created a 12-week Spanish Language program for Consular Fellows who received certain scores on the Spanish language exam, according to CA officials. Eleven Consular Fellows completed the program in January 2018 and 20 more are expected to complete the program in July 2018, according to CA officials. As of January 2018, five Consular Fellows were being trained in a language outside their core linguistic ability, according to CA officials. While these actions were taken to support implementation of the executive order, CA officials also told us that hiring Consular Fellows has been a State priority for some time. CA officials said that the bureau has hired an increasing number of Consular Fellows to meet worldwide visa demand since 2012, and that providing consular services is one of State's highest priorities, as well as a national security imperative. According to CA officials, because the Consular Fellows program is entirely funded by non-appropriated consular fees, subject to fluctuating demand for passports and visas, the expansion of the program did not have appropriations impacts. However, officials did provide per unit costs associated with aspects of expanding the Consular Fellows program. For example, Consular Fellows' salaries range from approximately $48,000 to approximately $98,000, and Foreign Service Institute language courses last from 24 to 36 weeks at a cost of $1,700 per week, per student.
Appendix II: Executive Order Reports
Executive Orders 13767 (Border Security and Immigration Enforcement Improvements), 13768 (Enhancing Public Safety in the Interior of the United States), and 13780 (Protecting the Nation from Foreign Terrorist Entry into the United States) include reporting requirements for the Department of Homeland Security (DHS), the Department of State (State), and the Department of Justice (DOJ).
Table 13 lists completed reports as of April 2018, according to DHS, State, and DOJ officials.

Appendix III: Comments from the Department of Homeland Security

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Taylor Matheson (Assistant Director), Sarah Turpin (Analyst-in-Charge), Isabel Band, and Kelsey Hawley made key contributions to this report, along with David Alexander, Eric Hauswirth, Sasan J. "Jon" Najmi, Kevin Reeves, and Adam Vogt.
Why GAO Did This Study

In January and March 2017, the President issued a series of executive orders related to border security and immigration. The orders direct federal agencies to take a broad range of actions with potential resource implications. For example, Executive Order 13767 instructs DHS to construct a wall or other physical barriers along the U.S. southern border and to hire an additional 5,000 U.S. Border Patrol agents. Executive Order 13768 instructs federal agencies, including DHS and DOJ, to ensure that U.S. immigration law is enforced against all removable individuals and directs ICE to hire an additional 10,000 immigration officers. Executive Order 13780 directs agencies to develop a uniform baseline for screening and vetting standards and procedures and establishes nationality-based entry restrictions with respect to visa travelers for a 90-day period and refugees for 120 days. GAO was asked to review agencies' implementation of the executive orders and related spending. This report addresses (1) actions DHS, DOJ, and State have taken, or plan to take, to implement provisions of the executive orders; and (2) resources to implement provisions of the executive orders, particularly funds DHS, DOJ, and State have obligated, expended, or shifted. GAO reviewed agency planning, tracking, and guidance documents related to the orders, as well as budget requests, appropriations acts, and internal budget information. GAO also interviewed agency officials regarding actions and budgetary costs associated with implementing the orders.

What GAO Found

The Departments of Homeland Security (DHS), Justice (DOJ), and State issued internal and public reports such as studies and progress updates, developed or revised policies, and took initial planning and programmatic actions to implement Executive Orders 13767, 13768, and 13780. For example:

DHS's U.S. Customs and Border Protection (CBP) started the acquisition process for a Border Wall System Program and issued task orders to design and construct barrier prototypes. In November 2017, CBP awarded a contract worth up to $297 million to help with hiring 5,000 U.S. Border Patrol agents, 2,000 CBP officers, and 500 Air and Marine Operations agents.

DOJ issued memoranda providing guidance for federal prosecutors on prioritizing certain immigration-related criminal offenses. Additionally, from March through October 2017, DOJ detailed approximately 40 immigration judge positions to detention centers and to the southern border to conduct removal and other related proceedings, according to DOJ officials.

State participated in an interagency working group to develop uniform standards related to the adjudication of visa applications, interviews, and system security checks. State also implemented visa and refugee entry restrictions in accordance with the Supreme Court's June 26, 2017, ruling.

Agency officials anticipate that implementing the executive orders will be a multi-year endeavor comprising additional reporting, planning, and other actions. DHS, DOJ, and State used existing fiscal year 2017 resources to support initial executive order actions that fit within their established mission areas. GAO found that it was not always possible to disaggregate which fiscal year 2017 funds were used for implementation of the orders versus other agency activities. All three agencies indicated that they used existing personnel to implement the orders and, in some cases, these efforts took substantial time.
For example, according to ICE data, personnel spent about 14,000 regular hours (the equivalent of 1,750 8-hour days) and 2,400 overtime hours planning for the ICE hiring surge from January 2017 through January 2018. In March 2017, the President submitted a budget amendment along with a request for $3 billion in supplemental appropriations for DHS to implement the orders. In May 2017, DHS received an appropriation of just over $1.1 billion, some of which DHS used to fund actions to implement the orders. For example, CBP received $65 million for hiring and, according to CBP officials, used these funds to plan and prepare for the surge in U.S. Border Patrol agents. As of January 2018, CBP had obligated $18.8 million of the $65 million. Agencies plan to continue to use their base budgets and request additional funds as needed to carry out their missions and implement the orders. For example, for fiscal year 2018, CBP requested approximately $1.6 billion and received (in March 2018) approximately $1.3 billion to build new and replace existing sections of physical barriers along the southern border. For fiscal year 2019, ICE requested $571 million to hire 2,000 immigration officers and DOJ requested approximately $40 million to hire new immigration judges and supporting staff.
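The workload and obligation figures above involve only two simple conversions, shown in the minimal sketch below. The function names and the 8-hour workday assumption are ours for illustration; they are not drawn from any agency system.

```python
# Minimal sketch (illustrative only): the conversions behind the figures above.
# Assumes 8-hour workdays, consistent with the "1,750 8-hour days" equivalence.

HOURS_PER_DAY = 8

def hours_to_workdays(hours: float) -> float:
    """Convert reported labor hours to equivalent 8-hour workdays."""
    return hours / HOURS_PER_DAY

def obligation_share(obligated: float, appropriated: float) -> float:
    """Fraction of an appropriation obligated to date."""
    return obligated / appropriated

# ICE reported about 14,000 regular hours of planning for the hiring surge.
print(hours_to_workdays(14_000))                 # 1750.0
# CBP had obligated $18.8 million of its $65 million hiring appropriation.
print(round(obligation_share(18.8e6, 65e6), 2))  # 0.29
```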
Background DOD’s contracting process—governed by laws and regulations—seeks to promote competition, be transparent in conducting business and ultimately satisfy DOD users in terms of cost, quality, and timeliness to protect taxpayers’ interests. DOD’s acquisition process begins at the point when agency needs are established; it includes requirements development and acquisition planning, a process for awarding contracts, and contract administration. While we recognize that requirements development and acquisition planning can affect the time it takes to award a contract, this review focuses on the time from solicitation issuance to contract award. An overview of competition in contracting, contract phases, and DOD initiatives follows. Competition Federal statutes and the Federal Acquisition Regulation (FAR) generally require that federal agencies award contracts through full and open competition, but recognize that such competition is not always feasible or desirable, and authorize the use of other than full and open competition under certain conditions. The exceptions include: only one responsible source exists and no other supplies or services will satisfy agency requirements; unusual and compelling urgency exists; or when authorized or required by statute (for example, statutorily allowed sole-source awards to small businesses). Even when using other than full and open competition, agencies must solicit offers from as many potential sources as is practicable. Generally, contracts awarded using other than full and open competition must be supported by written justifications and approvals that contain sufficient facts and rationale to justify the use of the specific exception to full and open competition. The approval level for these types of contracts varies according to the dollar value of the procurement. Contract Phases We Identified The acquisition planning phase includes pre-solicitation activities such as market research and defining requirements, among others. We identified four contract phases subsequent to acquisition planning: solicitation, initial evaluation, discussion/negotiation, and contract award. See figure 1. Solicitation: Agencies solicit offers from prospective contractors by issuing a request for proposals. The request for proposals informs the prospective contractors of the government’s requirements, the anticipated terms and conditions that will apply to the contract, the information required in a proposal and, in a competitive acquisition, the factors used to evaluate proposals and their relative importance. Those who wish to respond must submit their proposal to the government office in the time and manner stated in the request for proposals. We consider the solicitation phase to begin with solicitation issuance and end at the deadline to submit the initial proposals. Initial Evaluation: Proposal evaluation is an assessment of the proposals and the offerors’ ability to perform the prospective contract successfully. For example, proposals undergo technical evaluation to determine offerors’ ability to meet the technical requirements and cost or price evaluation to determine whether the price is fair and reasonable. Agencies also evaluate proposals when using other than full and open competition as part of agency preparation for negotiation with the offerors. We consider the evaluation phase to begin when contractors submit initial proposals and to end once government contracting personnel receive approval to enter into negotiations or discussions. 
Discussion/Negotiation: Negotiations are exchanges, in either a competitive or sole-source environment, between the government and offerors that are undertaken with the intent of allowing the offerors to revise the proposals. Negotiations allow the offerors to address any concerns with the proposals or provide additional information on relevant past performance, among other things. We consider this phase to start when the contracting officer receives approval to enter into negotiations and end when contracting personnel receive approval to award the contract. Contract Award: We consider the contract award phase to start when the approval to award the contract is given and end when the contracting officer signs the contract. DOD Initiatives: Source Selection Procedures and Peer Reviews The following DOD initiatives identify certain tasks that contracting officials should address between solicitation issuance and contract award: Source Selection Procedures: DOD updated its source selection procedures in April 2016 to help standardize the process to deliver products at the best value. These procedures outline a common set of principles and procedures for conducting acquisitions in accordance with applicable statutes and regulations. Unless waived, the source selection procedures apply to all acquisitions conducted as part of a major system acquisition and all competitively negotiated acquisitions with an estimated value of more than $10 million. Peer Reviews: The Office of Defense Procurement and Acquisition Policy is responsible for all pricing, contracting, and procurement policy matters within DOD and has required peer reviews of certain DOD acquisitions since 2009. The office currently conducts peer reviews for all procurements with an estimated value of over $1 billion and for noncompetitive procurements for new contract actions valued at $500 million or more. The office generally conducts peer reviews prior to issuance of the solicitation, prior to request for final proposal revisions, and prior to contract award, as well as periodic post-award reviews. Peer review teams include contracting officials from the military departments and defense agencies as well as legal advisors. For acquisitions below $1 billion, the military components must establish their own policies for conducting reviews based on expected acquisition value and the extent of competition. DOD Components Have Taken Steps to Track the Time Frames for Awarding Contracts, but DOD Does Not Have a Strategy for Assessing the Information DOD components in our review have efforts underway to track and reduce the time to award contracts, but these efforts are not coordinated across the department. The DOD components collect information on the time to award contracts, but differ on what information they collect and how they use it. DOD is taking a number of actions to understand the information the components collect such as determining what events are tracked, but DOD does not have a department-wide strategy for collecting and assessing the components’ information. DOD has proposed reducing how long it takes to award contracts. DOD Components Collect Varying Levels of Information about the Time Frames for Awarding Contracts Each component we reviewed collected information on the length of time to award certain contracts, but the information varied. 
The differences include: (1) the types of contract actions tracked; (2) the start of the period measured; (3) whether components track interim dates between solicitation issuance and award; and (4) how goals to reduce the length of time are determined. For example, the Air Force limits its scope to discrete contract value ranges while the other components include broader dollar ranges. The components also use different starting points to measure the time frames. For example, the Army Contracting Command currently tracks time starting from the submission of an adequate requirements package to contracting officials, which occurs before solicitation issuance. The Air Force, however, tracks how long it takes to award a contract starting from solicitation issuance. The selected components in our review also differ in collecting data for interim phases of the contract award process—such as evaluation or negotiation. Both Navy commands capture multiple data points, such as when negotiations begin, among other events, but the commands do not follow a common practice for which of these data are provided to DOD. Table 1 shows the broad categories of information collected.

Air Force

Concerns within the Air Force about the length of time taken to award contracts led to a process, begun in 2014, for tracking award times for sole-source contracts, including identifying practices and procedures that contributed to the time, according to Air Force contracting officials. The officials stated that this effort helped to reduce the average time to award sole-source contracts between $50 million and $500 million from about 16 months in fiscal year 2014 to about 12 months in fiscal year 2017. Air Force officials attributed the reductions in time to various streamlining initiatives, such as asking for contractors' feedback on draft solicitations and clarifying the solicitations as needed. Beginning with new contracts awarded in fiscal year 2014, the Air Force collected information on sole-source contracts between $50 million and $500 million. In early fiscal year 2018, the Air Force expanded its data collection to include competitive contracts from $50 million to $1 billion. The Air Force tracks the time starting from solicitation issuance to contract award. It also tracks interim phases of contract awards such as the start of evaluation or negotiation. According to Air Force officials, they establish fiscal year goals to measure progress based on the average of schedule dates. The data for both the sole-source contracts and now the competitive contracts are collected through a manual data call and are entered into a spreadsheet. The data are reported to the Office of the Assistant Secretary of the Air Force for Acquisition.

Army

In November 2017, the Deputy Assistant Secretary of the Army (Procurement) called for the formation of an Army-wide team to examine approaches for improving procurement time frames, similar to an effort already underway at the Army Contracting Command. The command began tracking the lengths of time to award contracts in 2015 and expanded the effort across the command in January 2017. The Army Contracting Command:

Tracks all procurements based on dollar thresholds, dividing the contracts by competitive and non-competitive actions.

Tracks the time from the receipt of the requirements package to contract award. The process does not capture interim phases of contract award such as the start of evaluation or negotiation.

Establishes goals by averaging historical data.
For instance, a competitively awarded contract between $50 million and $250 million is estimated to take 600 days. Army officials stated that they track actual performance against their goals on a quarterly basis.

Collects data through its Virtual Contracting Enterprise system, which includes electronic contract files that can be used to obtain contract data such as the solicitation issuance date. The command computes averages by aggregating the data by dollar threshold, contracting organization, and portfolios—such as weapon systems or services contracts.

Defense Logistics Agency

In November 2014, the Defense Logistics Agency examined awards from 2011 to 2013 to determine areas to focus on to make the contract award process more efficient. Defense Logistics Agency contracting officials stated that they have reduced the award time since they began their assessment by streamlining their procedures. The agency:

Collects contract data for all of its procurements.

Measures the time period from receipt of the purchase requirement package to contract award, but not the phases in between solicitation and contract award—such as evaluation or negotiation.

Establishes a goal based on historical averages for the various contract types, such as long-term contracts or delivery orders, in order to aggregate contracts with similar characteristics. The agency varies the goals according to the kind of contract, such as those using simplified acquisition procedures or larger value contracts. For example, for fiscal year 2018 the Aviation command's goal is to award contracts that require certified cost or pricing data and have a period of performance exceeding 3 years within 315 days. For those contracts that do not require certified cost or pricing data, the goal is 215 days.

Collects contract data using its contract management systems, continues to assess whether it is meeting timeliness goals on a monthly basis, and revises goals each fiscal year to reflect changes in trends and volume of contract actions.

Navy

Starting in May 2015, the Navy contracting commands presented data quarterly on execution of contracts and areas for improvement within the contract award process to the Office of the Assistant Secretary of the Navy (Research, Development, and Acquisition) in response to concerns about the length of time for contract awards. The Navy commands we selected have made efforts to identify bottlenecks within the contract award process. For example, their analysis of the data highlighted the timeliness and quality of the procurement request as a common issue among the Navy contracting commands, as well as the justification and approval cycle for sole-source awards. The analyses also included areas for improvement during the process, such as improving guidance and training for technical evaluation teams and exploring opportunities to streamline or waive some peer reviews.

Naval Air Systems Command piloted the Procurement Management Tool in fiscal year 2013. The Procurement Management Tool is an electronic system to collect information on contracts, which allows contracting officials to forecast and manage procurement time frames. The system:

Maintains data from all of the Naval Air Systems Command's contracts, starting from acquisition planning (pre-solicitation efforts), in addition to various interim dates such as proposal receipt. The tool allows contracting officials to compare planned, revised, and actual dates.

Tracks the overall length of time to award contracts. Navy contracting officials said they use the planned dates as the baseline to compare to the actual dates to determine the variance. Their goal is to reduce the variance between the dates.

Uses data from the Command's contract writing systems, but updates are done manually. Data are made available to Naval Air Systems Command officials and provide them a high-level view of the cost and cycle time drivers that may be selected for further investigation. Reports can be generated at any time, on an as-needed basis.

Naval Sea Systems Command, starting in 2005, conducted analyses of the contract award phases, which it used to identify problem areas that added time beyond what was anticipated. The analyses also capture data from entities outside of the contracting office, such as program offices. Naval Sea Systems Command has used the analyses to implement streamlining initiatives as well as establish performance measures to assess progress on a quarterly basis. A Naval Sea Systems Command official told us that the command has reduced the average length of time to award contracts above $750,000. Specifically, from fiscal year 2013 through fiscal year 2017, the average for competitive contracts was reduced from 467 days to 387 days (about 18 percent), and the average for sole-source contracts was reduced from 336 days to 278 days (about 18 percent). The Naval Sea Systems Command tracks its contracts valued at $750,000 or greater using an electronic database—E-milestone—to collect contract information. The database collects information starting from pre-solicitation efforts, which include the purchase request, through contract award. The system includes interim dates within the contract award process, such as the beginning of evaluation. Contracting officials are responsible for capturing both planned and actual dates in the system. Analysis of the variation between the planned and actual dates can be used to identify areas where difficulties occur. Command officials stated that their goal is to reduce the variance between the planned and actual dates. The system reports performance metrics monthly to program executive offices as well as to higher offices. The metrics the command collects reveal acquisition process bottlenecks and facilitate corrective action and acquisition streamlining.

DOD Has Proposed Reducing the Length of Time to Award Contracts but Does Not Yet Have a Strategy for Assessing the Information Components Collect

According to Defense Procurement and Acquisition Policy officials, DOD is taking steps to address its concerns about the time to issue sole-source contract awards for major weapon systems. DOD has proposed reducing this time by 50 percent over a 3-year period, as measured from the receipt of the requirements to contract award. DOD officials also plan to expand this effort to include competitively awarded contracts. While DOD has proposed reducing the length of time to award contracts by as much as 50 percent, according to DOD officials, it does not have a department-wide strategy for what information components are to collect and report because it has not defined what is to be measured. Internal control standards for the federal government state that management should use relevant information to make informed decisions and evaluate an agency's performance in achieving key objectives, and establish a baseline as a measure to assess progress in achieving its goals.
As discussed above, DOD components have made some efforts to collect information to understand the length of time to award contracts for their own management purposes. Since the components differ on when they start measuring the time to award contracts and whether they collect data on interim dates between solicitation issuance and contract award, it is difficult for DOD to ensure that the data from the various components are comparable and comprehensive. This issue was highlighted in the National Defense Authorization Act for Fiscal Year 2018, which contained a provision for DOD to develop a definition of "procurement administrative lead time" to be used throughout the department and a plan for measuring and publicly reporting data on procurement administrative lead time. In a February 2018 notice in the Federal Register, DOD proposed defining procurement administrative lead time as the time between the date on which DOD issues the initial solicitation for a contract or task order and the date of the award. The proposed definition applies to DOD contracts and task orders above the Simplified Acquisition Threshold. In addition to issuing the Federal Register notice, Defense Procurement and Acquisition Policy officials have started working with the military components (Army, Navy, and Air Force) to understand the information they have on the time frames for awarding contracts. Further, DOD officials stated that they are starting to identify events common across the components, relative to contract award time frames. According to DOD officials, DOD plans to include pre-solicitation events and some interim events between solicitation issuance and contract award in its DOD-wide data collection efforts. Because DOD's efforts are in the early stages, they have not established which specific events to measure and how they will use the information collected. Without a strategy for data collection and assessment, DOD will be limited in its ability to assess progress toward achieving its proposed goal and addressing challenges across components.

Most of the Selected Weapon Systems-Related Contracts Were Awarded within a Year

The 129 weapon systems-related contracts in our nongeneralizable selection showed a wide range of time intervals from solicitation issuance to award. The time intervals from solicitation to award ranged from less than a month to more than 4 years, with a median of about 9 months. Based on our analysis, 88 of the 129 contracts were awarded less than a year from the solicitation issuance date, while 38 were awarded between 1 and 2 years. The remaining 3 selected contracts took more than 2 years to award. We analyzed the time taken to award contracts based on three characteristics that some DOD officials and contractor representatives identified as potentially affecting that time: contract value, extent competed, and contract type. We did not observe any patterns based on these characteristics. The results of our analysis are as follows.

Contract Value

We found a wide range of time intervals for the 129 contract awards we reviewed, whose values ranged from about $5 million to over $12 billion. We observed that both the shortest and the longest time intervals from solicitation to contract award were for contracts valued under $50 million. One of the two contracts that were awarded within 20 days had a contract value of about $7 million for commercial software services. Figure 2 summarizes information on the time interval based on contract value.
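The interval figures above come down to simple date arithmetic. The sketch below shows one way such intervals and buckets could be computed; the three contract records are hypothetical examples, not data from our selection, which relied on solicitation and award dates verified against FPDS-NG and contract documentation.

```python
# Illustrative sketch of the solicitation-to-award interval calculation.
# The records below are hypothetical, not actual contract data.
from datetime import date
from statistics import median

contracts = [
    {"id": "A", "solicitation": date(2014, 3, 3), "award": date(2014, 3, 21)},
    {"id": "B", "solicitation": date(2015, 1, 15), "award": date(2015, 10, 1)},
    {"id": "C", "solicitation": date(2014, 6, 2), "award": date(2016, 8, 15)},
]

def interval_days(contract: dict) -> int:
    """Days between solicitation issuance and contract award."""
    return (contract["award"] - contract["solicitation"]).days

def bucket(days: int) -> str:
    """Group an interval the way the analysis above summarizes them."""
    if days < 365:
        return "less than 1 year"
    if days < 730:
        return "between 1 and 2 years"
    return "more than 2 years"

intervals = [interval_days(c) for c in contracts]
print("median interval (days):", median(intervals))
for c, days in zip(contracts, intervals):
    print(c["id"], days, bucket(days))
```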
Extent Competed DOD contracting officials and industry representatives we interviewed stated that contracts awarded using full and open competition could have a longer time interval than contracts awarded using other than full and open competition due to the need to evaluate proposals from multiple offerors. Twenty-seven of the 129 contracts in our review used full and open competition, and the remaining 102 contracts used other than full and open competition. Based on our analysis, roughly two-thirds of the selected contracts in either group took less than 1 year to award. Specifically: Eighteen of the 27 selected contracts awarded using full and open competition were awarded within a year of solicitation issuance, and the remaining 9 were awarded between 1 and 2 years. Seventy of the 102 selected contracts awarded using other than full and open competition were awarded within a year and 29 of the 102 were awarded between 1 and 2 years. Contract Type DOD contracting officials and industry representatives we interviewed asserted that firm-fixed-priced contracts would generally take a shorter amount of time to award. For example, Navy contracting officials told us that other than firm-fixed-priced contracts—such as contracts with award or incentive fees—could take longer to award because the government would need to negotiate the fee structure with the contractor. We found a wide range of time intervals based on contract type. Roughly two-thirds of the 129 selected contracts were awarded in less than 1 year regardless of contract type. Specifically: Thirty-eight of the 53 firm-fixed-price contracts were awarded within a year of when the solicitation was issued and 50 of the 76 other contracts were awarded within a year of solicitation issuance. Survey Respondents Identified Several Factors Affecting the Time Frame for Awarding Contracts The results of our survey of contracting officials for 37 contracts showed that contracting officials cited a number of factors—such as the quality of the proposal—that helped reduce or increase the time to award the selected contracts. They did not identify any one factor that consistently affected the time to award. Officials for more than half of the contracts reported needing more time to award the contracts than they initially anticipated. Survey Respondents Reported that Most of the Contracts Took Longer than Anticipated to Award and Identified Various Factors That Affected Overall Length of Time to Award Contracts DOD contracting officials we surveyed for 23 of 37 contracts reported needing more time to award their contract than anticipated at the time they issued their solicitation. Table 2 summarizes how respondents in our survey characterized differences between the anticipated contract award date and the actual date. DOD contracting officials cited the decision to make the award an office priority and contractor responsiveness as factors helping to decrease the overall time. In addition, contracting officials for four contracts awarded using full and open competition cited receiving waivers or deviations from relevant federal and service-level acquisition regulations as a factor that reduced the time. In case study interviews, contracting officials for two of these four contracts added that peer review waivers and delegation of the decision authority level to a lower level helped decrease the overall time. 
According to these contracting officials, they received these waivers because the procurements were considered low risk since the requirements that the offerors needed to meet were straightforward. DOD policy officials said peer review waivers are infrequently requested and granted on case-by-case bases. According to these officials, as of March 2018, 14 peer review waivers had been requested since fiscal year 2016 and all of them were granted. In contrast, contracting officials responding to our survey cited several factors that lengthened the time for contracts that were awarded later than anticipated. For example, in the solicitation phase, contracting officials for contracts awarded using full and open competition cited the lack of quality of the solicitation as a factor that lengthened the time needed, while contracting officials for contracts awarded using other than full and open competition cited the contractor’s inability to provide a timely proposal and government changes in requirements. In an Air Force cost-plus-award-fee contract awarded using other than full and open competition for a ballistic missile-related system valued over $400 million, a contracting official noted that the government changed some of the requirements after solicitation issuance. This resulted in amendments to the solicitation and revisions to the contractor’s proposal, which increased the time needed in the solicitation phase, and led to the contract being awarded later than anticipated. Various Factors Affected Specific Phases from Solicitation Issuance to Contract Award Based on survey responses, we also found variation in the factors that shortened or lengthened the time needed in the different phases— solicitation, initial evaluation, and negotiation. Contracting officials pointed out, however, that additional time needed in one phase could result in less time being needed in other phases. Solicitation Phase Contracting officials cited factors related to the quality of the solicitation and whether there were government changes in requirements as shortening or lengthening the time in this phase. Contracting officials for contracts awarded using other than full and open competition cited the contractor’s inability to provide a timely proposal as a factor that lengthened this phase. For an Army sole- source contract for aircraft maintenance and sustainment support, contracting officials told us that the solicitation phase took longer than anticipated. This phase took over 10 months from the solicitation issuance to when the contractor submitted a proposal. According to the contractor, after solicitation issuance, the government made some changes to the requirements, including the quantities of items. During that period, labor rates had changed, which increased the time needed to submit a proposal so that these changes could be incorporated. Evaluation Phase Some of the factors cited by contracting officials as shortening or lengthening the evaluation phase included those related to the quality of the proposal, the acquisition workforce, or the staff performing evaluations or approving the analyses. Technical and cost or price evaluations, among others, assess the offerors’ ability to perform successfully, ensure that offerors’ proposals meet the requirements listed in the solicitation, and establish that the price is fair and reasonable. Contracting officials we surveyed cited different factors based on the cost or price evaluation, technical evaluations, and the extent competed. 
Contracting officials with contracts awarded using full and open competition cited the number and quality of the proposals—whether they needed revisions or not—as shortening or lengthening the time needed to complete technical evaluations. For cost or price evaluations, they cited the number of proposals received and the completeness of the information provided by the contractor. Contracting officials with contracts awarded using other than full and open competition cited contractor responsiveness to requests for additional information as a factor regardless of the time needed to complete both types of evaluations. For cost or price evaluations, contracting officials cited factors related to the proposal, such as its quality and timeliness, as among the factors that helped shorten the time. In a case study involving a Navy sole-source research and development contract valued over $1 billion for the Next Generation Jammer, contracting and program officials said it took the contractor about 4 months after submitting the initial proposal to provide the contracting office a complete proposal due to delays in getting subcontractor information. According to these officials, despite the delay, they did not need more time in this phase since they were able to start evaluating the initial proposal consisting of the prime contractor’s technical and cost information, and incorporate analyses for the subcontractor information once they received it. Contracting officials that used other than full and open competition also cited requesting audit assistance from the Defense Contract Audit Agency as a factor that lengthened the time needed for cost or price evaluations. For example, in a Navy firm-fixed-price contract that was awarded using other than full and open competition for radar engineering services valued at $221 million, an audit took longer than anticipated— about 5 months—due in part to a complex pricing model and delays in receiving subcontractor pricing data. While the Defense Contract Audit Agency and the contractor communicated on the pricing data and cost structure, the agency was unable to complete its audit without the subcontractor data. Negotiation Phase In addition to agreeing on the price of a contract, the negotiation phase also includes any additional evaluations of revised proposals. Contracting officials cited the need for subsequent evaluations due to revised proposals as a factor that lengthened this phase. Among other factors, contracting officials cited the contract approval authority level and the approving authority’s availability or responsiveness as factors that shortened this phase. In contrast, contracting officials also cited bid protests or agreement on fees as factors that lengthened it. A contracting official for an Air Force contract awarded using full and open competition cited pre-award bid protests as a factor that lengthened the discussion phase. One of the offerors protested the evaluation of its proposal, which was found technically unacceptable. The offeror’s protest was denied because it was found that the evaluation of the proposal was reasonable and consistent with the terms of the solicitation. In addition, the offeror initially selected for award of the approximately $17 million contract was the lowest priced proposal that was found technically acceptable. However, the contracting officer subsequently found the offeror nonresponsive due to several challenges. 
These challenges and the pre-award bid protest resulted in a longer than anticipated discussion phase, and the award was made to the next lowest priced offeror. Contracting officials for 2 contracts awarded using other than full and open competition cited obtaining agreement on profit or fee as a factor that lengthened the negotiation phase. For example, in an Army contract for spares, maintenance, and overhaul of an airframe, the government and the contractor disagreed over the profit margin. Negotiations for the approximately $54 million contract stalled until the issue was elevated to higher levels at both the contractor and the government. This contract took about 22 months from solicitation issuance to contract award, with the negotiations phase taking about 8 months from approval to enter into negotiations to approval for contract award. For additional information on the survey results, see appendix II. Conclusions DOD has proposed reducing the time to award contracts in order to address concerns that it is taking too long. To measure progress against its goal, DOD will need relevant information about the time frames involved. DOD components are collecting information on the length of time to award contracts, but their efforts differ. DOD does not have a comprehensive strategy to use the component information already available or to collect other information that may be needed to assess contract award time frames. Having a DOD-wide strategy could enable DOD to consistently and comprehensively track contract award time, assess the factors contributing to this time, leverage the various efforts that the components have taken, identify any best practices, and measure progress toward any goals for reducing the time to award contracts. Currently, DOD does not define the events that should be measured occurring prior to solicitation or those that occur between solicitation issuance and contract award. While the military components collect various information about the length of time to award contracts based on their specific needs and organizational structures, at a minimum, DOD should have relevant information for its own management purposes. As DOD implements provisions in the National Defense Authorization Act for Fiscal Year 2018, the department has an opportunity to identify what data, if any, beyond just the overall procurement administrative lead time should be collected and reported. Identifying the information that is to be collected is a necessary first step for DOD to assess its progress in reducing the time taken to award contracts. Recommendation for Executive Action We recommend that the Secretary of Defense direct the Director, Defense Procurement and Acquisition Policy to develop a strategy regarding contract award time frames that identifies: the information the department needs to collect; and how the department will use the information to assess the time it takes to award contracts. The strategy should seek to communicate the department’s goals related to contract award time frames, seek to leverage ongoing data collection efforts by the various components, and specify the events prior to solicitation and between solicitation issuance and contract award that the department believes should be tracked. (Recommendation 1) Agency Comments We provided a draft of this report to DOD for comment. DOD concurred with the recommendation. DOD provided written comments which have been reproduced in appendix III. 
DOD also provided technical comments which we incorporated as appropriate. We are sending copies of this report to the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology and Logistics; the Secretaries of the Army, Navy, and Air Force; the Director, Defense Logistics Agency; appropriate congressional committees; and other interested parties. This report will also be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at woodsw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology We were asked to evaluate the length of time taken to award weapon systems-related contracts. This report examines (1) the Department of Defense’s (DOD) efforts to determine the time it takes to award weapon systems contracts; (2) what available data show regarding the time between solicitation issuance and award for selected weapon systems- related contracts; and (3) factors identified as contributing to contract award time frames. To understand the procedures DOD follows to award contracts and DOD’s efforts to determine the time it takes to award contracts, we reviewed relevant sections of the Federal Acquisition Regulation (FAR), such as Part 6: Competition Requirements, and Part 15: Contracting by Negotiation, and relevant sections of the Defense Federal Acquisition Regulation Supplement. In addition, we analyzed DOD-level and component-level guidance, policies, memorandums, and training materials on the contract award process. We also reviewed Standards for Internal Control in the Federal Government and prior GAO reports. To determine the extent DOD components (Air Force, Army, Navy and the Defense Logistics Agency) collected and analyzed data and how they are managing the time from solicitation issuance to contract award, we analyzed relevant documentation, such as monthly or quarterly management reviews and briefings. We interviewed acquisition officials at DOD and the components regarding studies or analysis conducted related to the time to award contracts. We selected the components based on the highest total number of contracts and highest total contract value. We discussed contract award time frames included in studies or analysis to determine the selected components’ or commands’ reasons for conducting the analysis, any challenges identified, actions taken to address those challenges, and ongoing efforts to reduce the time needed to complete the contract award process. We also discussed their data collection and verification process, but we did not independently verify the data that were reported in the studies and analyses. We determined that the data reported by the military components were reliable for the purposes of describing data collection and analyses done by DOD components. We also met with industry associations for their perspective regarding the length of time to award weapon systems-related contracts. 
Identifying Weapon Systems-Related Contracts

To understand the length of time taken to award DOD weapon systems-related contracts, and how contract value, extent competed, and contract type relate to that time, we analyzed contract data for a nongeneralizable sample of weapon systems-related contracts from the Federal Procurement Data System-Next Generation (FPDS-NG). We used FPDS-NG to identify DOD weapon systems-related contracts that were newly awarded from fiscal year 2014 through fiscal year 2016, with a contract value of $5 million or more. To include weapon systems-related contracts, we initially selected major defense weapon systems contracts as identified by DOD and identified the supplies or service codes (Product Service Code and North American Industry Classification System codes). We then compared the list of contracts with contract information in FPDS-NG to identify contracts containing the same codes, indicating similar supplies and services. We narrowed the number of contracts using the DOD acquisition program field in FPDS-NG as a proxy to identify weapon systems-related contracts. For multiple award contracts, we selected the first contract awarded among those that were awarded under the same solicitation as indicated by the contract number. We excluded contracts that were awarded under specific circumstances that use different acquisition procedures, such as contracts awarded under simplified acquisition procedures. In addition, we excluded basic ordering agreements; blanket purchase agreements; orders of any type, including task and delivery orders; and extensions of existing contracts. We excluded undefinitized contract actions and contracts that included foreign funds or foreign military sales because of the peculiarities associated with these procurements. We also excluded contracts coded as Ballistic Missile Defense Organization in FPDS-NG because this field was used broadly to include contracts for both weapon systems and non-weapon systems. We further limited our selection of contracts to selected military components (Air Force, Army, Navy, and the Defense Logistics Agency) based on the highest number of contracts and highest total contract value. We then identified the largest commands within these components, also based on the number of contracts and total contract value:

Air Force: Air Force Materiel Command

Army: Army Contracting Command

Defense Logistics Agency: Defense Logistics Agency-Aviation

Navy: Naval Air Systems Command

Navy: Naval Sea Systems Command

Defense Logistics Agency-Aviation, Air Force Materiel Command, and the Army Contracting Command awarded the higher number of contracts and the highest total value within their respective components. For Navy, the Naval Air Systems Command awarded the higher number of contracts, but the Naval Sea Systems Command awarded a higher total value, so we included both.
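As a rough illustration of this screening, the sketch below applies similar criteria to a generic contract record. The field names and record structure are illustrative assumptions rather than actual FPDS-NG data element names, and several steps described above (such as the supply and service code matching and the multiple-award de-duplication) are omitted.

```python
# Simplified, illustrative screening logic; field names are stand-ins,
# not actual FPDS-NG data elements.
EXCLUDED_INSTRUMENT_TYPES = {
    "basic ordering agreement",
    "blanket purchase agreement",
    "task order",
    "delivery order",
    "contract extension",
}

def meets_selection_criteria(record: dict) -> bool:
    """Apply the fiscal year, value, and exclusion screens described above."""
    return (
        2014 <= record["fiscal_year_awarded"] <= 2016
        and record["contract_value"] >= 5_000_000
        and record["instrument_type"] not in EXCLUDED_INSTRUMENT_TYPES
        and not record["simplified_acquisition_procedures"]
        and not record["undefinitized_contract_action"]
        and not record["foreign_funds_or_fms"]
    )

example = {
    "fiscal_year_awarded": 2015,
    "contract_value": 12_500_000,
    "instrument_type": "definitive contract",
    "simplified_acquisition_procedures": False,
    "undefinitized_contract_action": False,
    "foreign_funds_or_fms": False,
}
print(meets_selection_criteria(example))  # True
```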
As a result of this screening, we initially identified a nongeneralizable sample of 145 contracts. In addition, we used the information contracting officials reported in our web-based survey to confirm whether the 60 contracts we surveyed met our selection criteria, and excluded those that did not. These exclusions resulted in a nongeneralizable selection of 129 weapon systems-related contracts. To assess FPDS-NG data reliability, we compared the FPDS-NG data to the contract documentation that we obtained for the solicitation issuance and contract award dates to verify the dates. We verified the contract value, extent competed, and contract type by comparing the data reported in FPDS-NG, such as the contract number and award value, to information in the contract documentation. We also verified the solicitation and contract award dates using contract documentation. We determined that the FPDS-NG data were reliable for the purposes of identifying a nongeneralizable sample of contracts and analyzing time between solicitation and contract award dates, contract value, extent competed, and contract type.

Survey Methodology

To obtain information on the factors that helped or hindered the length of time to award contracts, we conducted a web-based survey of contracting officials—such as contracting officers or contract specialists—for 60 contracts. The survey collected information from contracting officials on the start and end dates of the solicitation, initial evaluation, discussion or negotiation, and contract award phases. We also collected information on factors that helped reduce the time needed to complete the solicitation, initial evaluation, and discussion or negotiation phases, or that hindered contracting officials in completing them. For the survey, we additionally screened out contracts awarded using sealed bidding. We also did not include the Defense Logistics Agency-Aviation as part of the survey because it is a combat support agency providing weapon systems parts for the military services. From 145 of the 171 selected weapon systems-related contracts, we randomly selected 20 contracts from the Air Force Materiel Command, 20 from the Army Contracting Command, 10 from the Naval Air Systems Command, and 10 from the Naval Sea Systems Command for a nongeneralizable survey sample. For the survey, we identified the time to award contracts by phases, from solicitation issuance to contract award. These phases are based on discrete events found in the FAR or component-specific guidance as necessary steps in awarding a contract by negotiation. The four phases we identified are:

Solicitation: from solicitation issuance to the solicitation closing date or receipt of the initial proposal

Initial Evaluation: from the solicitation closing date or receipt of the initial proposal to when contracting personnel receive approval to enter into discussion or negotiation

Discussions/negotiations: from approval to enter into discussion or negotiation to approval to award the contract

Contract award: from approval to award the contract to the date the contract was signed by the contracting officer
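Because each phase boundary is a discrete event date, phase durations follow directly from date subtraction, as in the sketch below. The event dates shown are illustrative and are not drawn from any surveyed contract.

```python
# Illustrative calculation of phase durations from survey event dates.
from datetime import date

events = {
    "solicitation_issued": date(2015, 2, 2),
    "initial_proposal_received": date(2015, 4, 6),   # or solicitation closing date
    "approval_to_negotiate": date(2015, 7, 20),
    "approval_to_award": date(2015, 11, 2),
    "contract_signed": date(2015, 11, 30),
}

phases = [
    ("Solicitation", "solicitation_issued", "initial_proposal_received"),
    ("Initial evaluation", "initial_proposal_received", "approval_to_negotiate"),
    ("Discussions/negotiations", "approval_to_negotiate", "approval_to_award"),
    ("Contract award", "approval_to_award", "contract_signed"),
]

for name, start, end in phases:
    print(f"{name}: {(events[end] - events[start]).days} days")
```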
We conducted a total of eight telephone pre-tests on the contents and format of the survey with officials from the Air Force Materiel Command, Army Contracting Command, Naval Air Systems Command, and Naval Sea Systems Command to determine if the questions were understandable and answerable, in addition to verifying that the terminology used in the survey was accurate, and that the survey was unbiased. As a result of the pre-tests, we refined the survey as appropriate. We emailed a link to the web-based survey to contracting officials for the 60 selected weapon systems-related contracts on October 19, 2017. To encourage respondents to complete the survey, we sent reminder emails and made telephone calls to contracting officials after the initial email was sent. We closed the survey on January 10, 2018. Of the 60 contracts we surveyed, we excluded 18 contracts that did not meet our selection criteria based on the responses from the contracting officials. These included contracts that were not newly awarded, used sealed bid procedures, or contained foreign funding or foreign military sales. Of the 42 remaining contracts, we received responses from contracting officials for 37 contracts, for an overall response rate of 88 percent. The survey included event dates, which differentiate between the phases. We did not verify the start and end dates of the phases reported in the survey and relied on contracting officials’ responses. We did, however, verify the dates for solicitation issuance and contract award against the FPDS-NG reported data and contract documentation as part of the verification process for the 129 selected contracts. We emailed contracting officials in certain instances where we needed clarification on survey responses. For example, we followed-up on responses that differed from FPDS-NG reported data and responses that indicated that a contract was awarded using both full and open and other than full and open competition, among others. We made corrections to the data as needed. Case Study Methodology For more in-depth information on the factors and circumstances that affected the time from solicitation issuance to contract award, we selected 7 contracts from the survey for further analysis. To obtain a variety of contract characteristics, we selected the case studies based on certain criteria including: (1) representation of different DOD components; (2) a range of longer and shorter time intervals between solicitation and contract award date; (3) contracts with larger contract value; and (4) the extent the contracts were competed. We selected 4 contracts awarded using other than full and open competition and 3 awarded using full and open competition. For the purposes of our report, full and open competition after exclusion of sources is considered to be full and open competition. We did not select contracts from the Naval Sea Systems Command as part of our case study because the extent of competition was not confirmed at the time of selection. For these 7 contracts, we reviewed the survey results, analyzed contract file documentation, and conducted interviews with available contracting officials and program office officials, as well as contractor representatives to obtain their perspectives on the factors that helped or hindered the time from solicitation issuance to contract award. We conducted this performance audit from January 2017 to July 2018 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Supplemental Survey Results for Selected Questions from GAO’s Survey of Factors Affecting the Length of Time to Award Contracts We distributed a web-based survey to a random sample of contracting officials for 60 weapon systems-related contracts and reviewed responses for 37 contracts. The survey results presented in tables 4 through 13 are nongeneralizable. For more information on our methodology for designing and distributing the survey, see appendix I. Appendix III: Comments from the Department of Defense Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact: Staff Acknowledgments In addition to the contact named above, Penny Berrier (Assistant Director), Peter Anderson, David Ballard, Sonja Bensen, Lorraine Ettaro, Kurt Gurka, Gina Hoover, Julia Kennon, Carol Mebane, Anh Nguyen, Bonita Oden, Jenny Shinn, Abby Volk, and Robin Wilson made major contributions to this report.
Why GAO Did This Study DOD's contracting process is designed to protect taxpayers' interests, among other things, and can take time. DOD leadership and contractors have expressed concern about the length of time to award contracts and DOD has proposed reducing that time. GAO was asked to evaluate the length of time to award weapon systems contracts. This report examines (1) DOD's efforts to determine the time it takes to award contracts; (2) data on the time interval from solicitation to contract award for selected contracts; and (3) factors identified as contributing to contract award time frames. GAO used the Federal Procurement Data System-Next Generation to identify new weapon systems-related contracts awarded in fiscal years 2014 through 2016, valued over $5 million, among other factors. GAO selected a nongeneralizable sample of 129 contracts at four DOD components with the highest total dollar value and highest number of contracts from those fiscal years for further analysis. GAO analyzed contract documentation and surveyed contracting officials on a subset of contracts to determine the factors affecting the time between solicitation issuance and award. What GAO Found Although the Department of Defense (DOD) has proposed reducing the time it takes to award contracts related to weapon systems, the department has a limited understanding of how long it currently takes and therefore lacks a baseline to measure success. The DOD components GAO reviewed—Air Force, Army, Defense Logistics Agency, and Navy—collect data on their time frames for awarding contracts. However, they do so in different ways in the absence of a DOD-wide strategy for what information should be collected. For example, the Air Force measures the time to award beginning with solicitation issuance, while the other components use a different starting point. As a result, information the components collect is not comparable and is of limited use for understanding contract award time frames department-wide. Determining what information is needed to monitor time taken to award contracts consistently across components should help DOD assess its progress toward reducing the time. GAO analyzed the time from solicitation issuance to award for 129 weapon systems-related contracts and found it ranged from less than a month to over 4 years. Although some DOD and industry officials stated that contract value could affect contract award time frames, GAO observed a wide range of time intervals and did not observe any patterns based on this characteristic. (See figure below.) According to DOD contracting officials GAO surveyed, factors that can help reduce—or, alternatively lengthen—the time between when a solicitation is issued to when a contract is awarded include a decision to make the contract award an office priority and how quickly contractors respond to requests for additional information after initial proposals are received. What GAO Recommends GAO recommends that DOD develop a strategy that identifies the information it needs to collect and how it will use the information to assess contract award time frames. DOD concurred.
gao_GAO-18-404T
gao_GAO-18-404T_0
Background Regulatory Flexibility Act RFA requires that federal agencies, including financial regulators, engaged in substantive rulemaking analyze the impact of proposed and final regulations on small entities. If a rule might have a significant economic impact on a substantial number of small entities, regulators are to consider any significant regulatory alternatives that will achieve statutory objectives while minimizing any significant economic impact on small entities. RFA defines “small entity” to include small businesses, small governmental jurisdictions, and certain small not-for-profit organizations. RFA does not seek preferential treatment for small entities. Rather, it requires agencies to use an analytical process that includes identifying barriers to small business competitiveness and seeks a level playing field for small entities. For each draft rule that requires a notice of proposed rulemaking, RFA requires regulators to prepare an initial regulatory flexibility analysis that contains an assessment of the rule’s potential impact on small entities and describes any significant alternatives to reduce the rule’s significant economic impact on small entities while achieving statutory objectives. Following a public comment period, RFA requires regulators to conduct a similar analysis when they promulgate the final rule. If the head of the agency certifies in the Federal Register that the rule would not have a significant economic impact on a substantial number of small entities, agencies do not have to conduct the initial or final analysis. Certifications must include a statement providing a factual basis for the certification. Section 610 of RFA requires agencies to review, within 10 years of a final rule’s publication, those rules assessed as having a significant economic impact on a substantial number of small entities to determine if they should be continued without change, amended, or rescinded (consistent with statutory objectives) to minimize any significant economic impact on small entities. RFA designates certain responsibilities to the Small Business Administration’s Chief Counsel for Advocacy, including monitoring agency compliance with RFA and reviewing federal rules for their impact on small businesses. Executive Order 13272 requires the Small Business Administration’s Office of Advocacy (Office of Advocacy) to provide notifications and training about RFA requirements. The Office of Advocacy published guidance on RFA compliance in 2003 (updated in 2012 and August 2017). For example, the guidance details components regulators should include in their certifications to obtain meaningful public comments, such as a description and estimate of the economic impact. Economic Growth and Regulatory Paperwork Reduction Act of 1996 Under EGRPRA, the Federal Reserve, FDIC, and OCC are to categorize their regulations by type and provide notice and solicit public comment on all regulations for which they have regulatory authority to identify areas of the regulations that are outdated, unnecessary, or unduly burdensome. The act also includes requirements on how the regulators should conduct the reviews, including reporting results to Congress. The first EGRPRA review was completed in 2007. The second began in 2014, and the report summarizing its results was submitted to Congress in March 2017. While NCUA is not required to participate in the EGRPRA review, NCUA has been participating voluntarily. 
NCUA’s assessment of its regulations appears in separate sections of the 2007 and 2017 reports to Congress. Community Banks and Credit Unions Saw Regulations on Mortgage Reporting and Disclosures and Anti-Money Laundering as Most Burdensome Community bank and credit union representatives we interviewed identified three areas of regulations as most burdensome to their institutions: 1. Data reporting requirements related to loan applicants and loan terms under the Home Mortgage Disclosure Act of 1975 (HMDA). 2. Transaction reporting and customer due diligence requirements as part of the Bank Secrecy Act and related anti-money laundering regulations (collectively, BSA/AML). 3. Disclosures of mortgage loan fees and terms to consumers under the Truth in Lending Act and the Real Estate Settlement Procedures Act of 1974 Integrated Disclosure (TRID) regulation. Institution representatives told us they found these regulations were time- consuming and costly to comply with because the requirements were complex, required individual reports that had to be reviewed for accuracy, or mandated actions within specific timeframes. For example, among the 28 community banks and credit unions whose representatives commented on HMDA-required reporting in our focus groups, 61 percent noted having to conduct additional HMDA-related training. Representatives in most of our focus groups said that they had to purchase or upgrade software systems to comply with BSA/AML requirements, which can be expensive, and some representatives said they have to hire third parties to comply with BSA/AML regulations. Representatives in all of our focus groups and many of our interviews said that the TRID regulations have increased the time their staff spend on compliance, increased the cost of providing mortgage lending services, and delayed the completion of mortgages for customers. However, federal regulators and consumer advocacy groups’ representatives said that benefits from these regulations were significant, such as collecting HMDA data that has helped address discriminatory practices. Staff from Financial Crimes Enforcement Network (FinCEN), which has delegated authority from the Secretary of the Treasury to implement anti-money laundering regulations, told us that the transaction reporting required and due-diligence programs required in BSA/AML rules are critical to safeguarding the U.S. financial sector from illicit activity, including illegal narcotic trafficking proceeds and terrorist financing activities. The Consumer Financial Protection Bureau (CFPB) has taken steps to reduce the burdens for community banks and credit unions associated with the HMDA and TRID regulations. Also, FinCEN has developed several efforts in reducing the reporting requirements from BSA/AML regulations to reduce regulatory burden, such as a continuous evaluation process to look for ways to reduce burden associated with BSA reporting requirements, soliciting feedback through an interagency working group about potential burden, and expanding the ability of institutions to seek a Currency Transaction Report filing exemption when possible. To reduce institutions’ misunderstanding of the TRID regulation, CFPB has published a Small Entity Compliance Guide and a Guide to the Loan Estimate and Closing Disclosure Forms. However, CFPB officials acknowledged that some community banks and credit unions may be misinterpreting the regulation’s requirements. 
We found that CFPB had not directly assessed the effectiveness of the guidance it provided to community banks and credit unions. Until the guidance is assessed for effectiveness, CFPB may not be able to respond to the risk that small institutions have implemented TRID incorrectly. We recommended that CFPB assess the effectiveness of TRID guidance to determine the extent to which TRID's requirements are accurately understood and take steps to address any issues as necessary. CFPB agreed with the recommendation and intends to solicit public input on how it can improve its regulatory guidance and implementation support. Financial Regulators Consider Burden When Developing Regulations, but Their Reviews under RFA Need to Be Enhanced One of the ways that financial regulators attempt to address the burden of regulations is during the rulemaking process. For example, staff from the Federal Reserve, FDIC, and OCC all noted that when promulgating rules, their staff seek input from institutions and others throughout the process to design requirements that achieve the goals of the regulation at the most reasonable cost and effort for regulated entities. Once a rule has been drafted, the regulators publish it in the Federal Register for public comment. The staff noted that regulators often make revisions in response to the comments received to try to reduce compliance burdens in the final regulation. Under RFA, financial regulators conduct analyses during the rulemaking process that are intended to minimize economic impact on small entities. However, we found several weaknesses with the RFA analyses, policies, and procedures of six financial regulators—Federal Reserve, OCC, FDIC, Securities and Exchange Commission (SEC), Commodity Futures Trading Commission (CFTC), and CFPB—that could undermine the goal of RFA and limit transparency and public accountability. Certifications Were Not Always Consistent with Office of Advocacy Guidance and Other Best Practices In reviewing 66 certifications by the six regulators, we found that in most (43 of 66) the regulators provided a factual basis and concluded the rule would not apply to small entities or have any economic impact. According to the regulators, these rules included activities in which small entities do not engage, pertained to the regulator's internal processes, did not create new regulatory requirements, or eliminated duplicative rules. Additionally, regulators concluded in 5 of 66 certifications that the rule would have a beneficial impact on small entities. Other certifications lacked information that would help explain the determination. Specifically, in 18 of 66 certifications, the regulators found the rule would have some economic impact on small entities, but concluded the impact would not be significant for a substantial number of small entities. But the factual basis provided for most of the 18 certifications (across all six regulators) lacked key components the Office of Advocacy and the Office of Management and Budget (OMB) recommended for understanding the analyses regulators used to support their conclusion. Examples include the following: Data sources or methodologies. In 15 of 18 certifications, regulators did not describe or did not fully describe their methodology or data sources for their conclusions. Broader economic impacts.
The certifications generally did not address broader economic impacts such as cumulative effects, competitive disadvantage, or disproportionality of effects, and focused most of the analysis on specific compliance costs. Defining key criteria. Regulators generally did not define the criteria they used for "substantial number" and "significant economic impact" in their certifications. Limited information. Three certifications included none of the Office of Advocacy's suggested components, such as the number of affected entities, the size of the economic impacts, or the justification for the certification. While many of the regulators' certification determinations incorporated key components, the weaknesses and inconsistencies we found could undermine the act's goal. For example, incomplete disclosure of methodology and data sources could limit the public and affected entities' ability to offer informed comments in response to regulators' certification assessments in proposed rules. Many RFA-Required Analyses Had Weaknesses Our review of recent rules for which the regulators performed an initial and final regulatory flexibility analysis found that the evaluation of key components—potential economic effects and alternative regulatory approaches—was limited in many cases, although the extent varied by regulator. RFA requires initial and final analyses to include information to assist the regulator, regulated entities, and the public in evaluating the potential impact of rules on small entities. The most important components include the assessment of a rule's potential economic effects on small entities—such as compliance costs—and the identification and evaluation of alternative regulatory approaches that may minimize significant economic effects while achieving statutory objectives. For some rules, the evaluations of economic impact on small entities did not describe or estimate compliance costs. Analyses we reviewed also generally did not evaluate differences in estimated compliance costs for identified alternatives. Five of six regulators did not consistently disclose the data sources or methodologies used for estimating the number of subject small entities or compliance costs. By not fully assessing potential economic effects or alternatives, regulators may not be fully realizing the opportunity to minimize unnecessary burdens on small entities, which is the primary goal of RFA. Regulators Lacked Comprehensive Policies and Procedures for RFA Analyses Five of six regulators have written guidelines that restate statutory requirements for preparing certifications and regulatory flexibility analyses and provide some additional guidance for staff. However, the regulators generally have not developed comprehensive policies and procedures to assist staff in complying with RFA, which may contribute to the weaknesses we identified in some certifications and regulatory flexibility analyses. Federal internal control standards state that it is important for agency management to establish, through policies and procedures, the actions needed to achieve objectives. The extent to which regulators' guidance included policies and procedures varied. But the guidance generally did not include procedures for evaluating a rule's potential economic impact on small entities; identifying and assessing regulatory alternatives that could minimize economic impact on small entities; disclosing methodology and data sources; and creating and maintaining documentation that supports findings.
By developing policies and procedures that provide specific direction to rulemaking staff, the regulators could better ensure consistent and complete implementation of RFA requirements and more fully realize the RFA goal of appropriately considering and minimizing impacts on small entities during and after agency rulemakings. In our January 2018 report, we recommended that each of the regulators develop and implement specific policies and procedures for consistently complying with RFA requirements and related guidance for conducting RFA analyses. Five agencies generally agreed with this recommendation and one did not provide written comments. EGRPRA Reviews Resulted in Some Reduction in Burden, but the Reviews Have Limitations Regulators took some actions to reduce burden as part of EGRPRA reviews, but we also identified opportunities to improve analyses and reporting. Results of 2017 EGRPRA Review Included Some Actions to Reduce Regulatory Burden To conduct the most recent EGRPRA review, the Federal Reserve, FDIC, and OCC sought comments from banks and others and held public meetings to obtain views on the regulations they administer. In the report they issued in March 2017, the regulators identified six significant areas in which commenters raised concerns: (1) capital rules, (2) Call Reports, (3) appraisal requirements, (4) examination frequency, (5) Community Reinvestment Act, and (6) BSA/AML regulations. In the report, these regulators described various actions that could address some of the concerns that commenters raised, including the following: On September 27, 2017, the regulators proposed several revisions to capital requirements that would apply to banks with less than $250 billion in assets and less than $10 billion in total foreign exposure. For example, the revisions would simplify capital treatment for certain commercial real estate loans and change the treatment of mortgage servicing assets. The regulators developed a new Call Report form for banks with assets of less than $1 billion and domestic offices only. In June 2017 and November 2017, the regulators issued additional proposed revisions, effective June 2018, to the three Call Report forms that banks are required to complete. For example, community banks would report certain assets (nonperforming loans not generating their stated interest rate) less frequently—semi-annually instead of quarterly. The regulators proposed raising the threshold for commercial real estate loans requiring an appraisal from $250,000 to $400,000. They also recently issued guidance on how institutions could obtain waivers or otherwise expand the pool of persons eligible to prepare appraisals if suitable appraisers are unavailable. The three regulators also issued a final rule in 2016 making qualifying depository institutions with less than $1 billion in total assets eligible for an 18-month examination cycle rather than a 12-month cycle. Although NCUA is not required to participate in the EGRPRA process, the 2017 EGRPRA report also includes a section in which NCUA describes actions it has taken to address regulatory burdens on credit unions. In the report, NCUA identified five significant areas raised by commenters relating to credit union regulation, including: (1) field of membership and chartering; (2) member business lending; (3) federal credit union ownership of fixed assets; (4) expansion of national credit union share insurance coverage; and (5) expanded powers for credit unions. In response, NCUA took various actions.
For example, NCUA modified and updated its field of membership rules by revising the definition of a local community, rural district, and underserved area, which provided greater flexibility to federal credit unions seeking to add a rural district to their field of membership. NCUA also lessened some restrictions on member business lending and raised some asset thresholds for what would be defined as a small credit union so that fewer requirements would apply to these credit unions. CFPB Was Not Included in 2017 Review and Significant Mortgage Regulations Were Not Assessed One of the limitations in the EGRPRA process is that the statute mandating the process does not include CFPB and thus the significant mortgage-related regulations and other regulations that it administers—regulations that banks and credit unions generally must follow—were not included in the most recent EGRPRA review. The depository institution regulators cannot address these mortgage regulation-related burdens because they no longer have rulemaking authority for certain consumer financial statutes. However, CFPB does have its own processes to assess the burden of regulations it has implemented. For example, section 1022(d) of the Dodd-Frank Wall Street Reform and Consumer Protection Act requires CFPB to conduct a one-time assessment of each significant rule it adopts under federal consumer financial law within 5 years of the rule's effective date. But CFPB staff told us that they have not yet determined whether certain other regulations that apply to banks and credit unions, such as the revisions to requirements, will be designated as significant and thus subjected to the one-time assessments. During 2017, CFPB launched an internal task force to coordinate and bolster its continuing efforts to identify and relieve regulatory burdens for small businesses, such as community banks; this effort potentially will address any regulation the agency has under its jurisdiction. However, CFPB has not provided public information on the extent to which it intends to review regulations applicable to community banks and credit unions or provided information on the timing and frequency of the reviews. In addition, it has not indicated the extent to which it will coordinate the reviews with depository institution regulators as part of EGRPRA reviews. Until CFPB publicly provides additional information indicating its commitment to periodically review the burden of all its regulations, community banks, credit unions, and other depository institutions may face diminished opportunities for regulatory relief. In our February 2018 report, we recommended that CFPB issue public information on its plans for reviewing regulations, including information on the scope of regulations, timing and frequency of reviews, and the extent to which the reviews will be coordinated with the other regulators as part of the EGRPRA reviews. CFPB agreed with the recommendation and committed to developing additional plans for reviews of key regulations and publicly releasing such information. In the interim, CFPB stated it intends to solicit public input on how it should approach reviewing regulations. Regulators Have Not Conducted or Reported Quantitative Analyses Another limitation was that the Federal Reserve, FDIC, OCC, and NCUA did not conduct or report on quantitative analyses during the EGRPRA process to help them determine if changes to regulations would be warranted.
Our analysis of the 2017 EGRPRA report indicated that in responses to comments in which the regulators did not take any action, the regulators generally provided only their arguments against taking actions and did not cite analysis or data to support their narrative. EGRPRA does not require the regulators to collect and report on quantitative data as part of assessing the potential burden of regulations. In contrast, executive branch agencies tasked under executive orders to conduct retrospective reviews of regulations generally must collect and analyze quantitative data as part of assessing the costs and benefits of changing existing regulations. Conducting quantitative analysis for retrospective reviews could serve as a best practice for the depository institution regulators. By not performing and reporting quantitative analyses where appropriate in the EGRPRA review, the regulators may be missing opportunities to better assess regulatory impacts (including identifying the need for any changes or identifying benefits) and to make their analyses more transparent to stakeholders. In our February 2018 report, we recommended that the four depository institution regulators develop plans for their regulatory analyses describing how they will conduct and report on quantitative analysis whenever feasible to strengthen the rigor and transparency of the EGRPRA process. The regulators agreed with the recommendation. For example, the Federal Reserve plans to coordinate with FDIC and OCC to identify opportunities to conduct quantitative analyses where feasible during future EGRPRA reviews. NCUA also said it should improve its quantitative analysis. Regulators Have Not Considered the Cumulative Effects of Regulations An additional limitation in the EGRPRA process we identified was that the depository institution regulators had not assessed the ways in which the cumulative burden of the regulations they administer may have created overlapping or duplicative requirements. Under the current process, the regulators have responded to issues raised about individual regulations based on comments they have received, not on bodies of regulations. However, congressional intent in tasking regulators with EGRPRA reviews was to ensure they considered the cumulative effect of financial regulations. A 1995 Senate Committee on Banking, Housing, and Urban Affairs report stated that while no one regulation can be singled out as being the most burdensome, and most have meritorious goals, the aggregate burden of banking regulations ultimately affects a bank's operations, its profitability, and the cost of credit to customers. In our February 2018 report, we recommended to the Federal Reserve, FDIC, NCUA, and OCC that as part of their EGRPRA review they develop plans for conducting evaluations that would identify opportunities to streamline bodies of regulation. The regulators generally agreed with the recommendation and said they would work together to identify ways and opportunities to decrease the regulatory burden created by bodies of regulation. In addition, FDIC stated it would continue to monitor the cumulative effects of regulation; for example, through a review of community and quarterly banking studies and community bank Call Report data.
Regulators’ Approach to RFA-Required Retrospective Reviews Varied, Including the Extent to Which They Developed Policies Financial regulators took varying approaches to performing retrospective reviews for RFA; additionally, some regulators had not yet developed policies and procedures for conducting and reporting reviews. Federal Banking Regulators Relied on Other Retrospective Reviews to Meet RFA Section 610 Requirements We assessed section 610 reviews and found that the Federal Reserve, FDIC, and OCC conducted retrospective reviews that did not fully align with RFA’s requirements. Officials at each of the agencies stated that they satisfy the requirements to perform section 610 reviews through the EGRPRA review process. But the requirements of the EGRPRA reviews differ from those of the RFA-required section 610 reviews. For example, the EGRPRA review process relies on public comments to identify rules that may be outdated, unnecessary, or unduly burdensome, while public comments are only one component of section 610 reviews. The Office of Advocacy stated that agencies may satisfy section 610 requirements through other retrospective reviews if these other reviews meet the criteria of section 610. According to an official from the Office of Advocacy, the office has not yet made a determination on whether the EGRPRA review process satisfies those requirements. Although the agencies stated that they fulfill RFA requirements through EGRPRA, without confirming this with the Office of Advocacy, it is possible that they are not meeting RFA section 610 requirements and therefore may not be achieving the small-entity burden reduction that the statute seeks to ensure. In our January 2018 report, we recommended that the Federal Reserve, FDIC, and OCC coordinate with the Office of Advocacy to determine whether the EGRPRA review process satisfies the requirements of section 610 and, if not, what steps should be taken to align the process with section 610 requirements. The Federal Reserve and FDIC generally agreed with this recommendation, and OCC did not provide written comments. SEC Reviews Were Late and Not Fully Consistent with RFA Requirements or Office of Advocacy Guidance Our review of 46 SEC section 610 reviews found that they were conducted late and were not fully consistent with RFA requirements or the Office of Advocacy’s guidance for such reviews. RFA requires rules to be reviewed within 10 years of their publication as final rules, but SEC conducted all but one of its reviews 12 years after the rules were published. The reviews generally lacked substantive analysis, and no rules were amended as a direct result of their section 610 review. The reviews generally provided no evidence of empirical analysis and no data to support the conclusions of the reviews, as recommended by the Office of Advocacy and OMB. In most cases, the reviews lacked a description of whether, or to what extent, the rule was affecting small entities. SEC does not have written policies or procedures for completing rule reviews pursuant to RFA section 610, potentially contributing to the weaknesses we identified (timing and lack of data and analysis to support findings). Therefore, in our January 2018 report, we recommended that SEC develop and implement specific policies and procedures for performing section 610 reviews. SEC generally agreed with the recommendation. SEC also does not publicly disclose the findings or conclusions of its section 610 reviews. 
Although RFA does not require that agencies publish the results of 610 reviews, the Office of Advocacy recommends that to enhance transparency, agencies should communicate with interested entities about the reviews. Executive orders also highlight public disclosure of retrospective reviews. Lack of public disclosure limits the transparency of the reviews, hindering the public’s ability to hold agencies accountable for the quality and conclusions of their reviews. In our January 2018 report, we recommended that SEC publicly disclose its section 610 reviews, or summaries, with the basis for any conclusions. SEC generally agreed with the recommendation. CFTC and CFPB Plan to Develop Policies and Procedures for Future Retrospective Reviews CFTC and CFPB plan to put procedures in place for section 610 reviews. According to CFTC officials, the agency has not conducted any section 610 reviews in at least the last 10 years. CFPB has not yet been required to conduct any section 610 reviews. Section 610 reviews are required within 10 years of a rule’s publication as a final rule; to date, none of the rules issued by CFPB, which was created in 2010, have met this deadline. In our January 2018 report, we recommended that CFTC and CFPB develop policies and procedures for section 610 reviews that would include documenting analyses and public reporting of results. CFTC and CFPB generally agreed with the recommendation. Chairman Chabot, Ranking Member Velázquez, and members of the Committee, this concludes my statement. I would be pleased to respond to any questions you may have. GAO Contact and Staff Acknowledgments If you or your staff have any questions about this testimony, please contact Michael E. Clements, Director, Financial Markets and Community Investment, at (202) 512-8678 or clementsm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Cody Goebel (Assistant Director), Stefanie Jonkman (Assistant Director), Katherine Carter (Analyst in Charge), Kevin Averyt, Bethany Benitez, Jeremy A. Conley, Pamela R. Davidson, Nancy Eibeck, Andrew Emmons, Courtney L. LaFountain, William V. Lamping, Marc Molino, Lauren Mosteller, Barbara Roesmann, and Jena Y. Sinkfield. Other assistance was provided by Farrah Graham and Tim Bober. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Federal financial regulators must comply with various rulemaking and review requirements, including those in RFA and EGRPRA. These statutes require analyses relating to regulatory burden, small entities, or both. RFA requires analyses of a rule's impact on small entities and alternatives that may minimize any significant economic impact. It also requires agencies to review rules (within 10 years) to determine if the rules should be amended or rescinded. EGRPRA directs specified regulators to review regulations at least every 10 years and identify areas that are outdated, unnecessary, or unduly burdensome on insured depository institutions. This statement is based on findings from GAO's January 2018 report on RFA implementation ( GAO-18-256 ) and February 2018 report on regulatory burden on community banks and credit unions ( GAO-18-213 ). GAO discusses regulatory burdens and how financial regulators address regulatory burdens through the rulemaking process and retrospective reviews. For those reports, GAO's work included reviewing Federal Register notices; regulators' workpapers, policies and procedures; and reports to Congress on EGRPRA reviews. GAO also interviewed more than 60 community banks and credit unions. What GAO Found More than 60 smaller depository institutions told GAO that regulations for reporting mortgage characteristics; reviewing transactions for potentially illicit activity; and disclosing fees, conditions, and mortgage terms to consumers were the most burdensome. Institution representatives said these regulations were time-consuming and costly because the requirements were complex and required reporting that had to be reviewed for accuracy. Financial regulators and others noted these regulations provide various benefits as well, such as preventing lending discrimination or use of the banking system for illicit activity. The Regulatory Flexibility Act (RFA) requires federal agencies to analyze the impact of their regulations on small entities. GAO found several weaknesses with the analyses of six financial regulators—Board of Governors of the Federal Reserve System (Federal Reserve), Office of the Comptroller of the Currency (OCC), Federal Deposit Insurance Corporation (FDIC), Securities and Exchange Commission, Commodity Futures Trading Commission, and Consumer Financial Protection Bureau (CFPB)—that could undermine the goal of RFA and limit transparency and public accountability. For example, some analyses lacked important information, such as data sources, methodologies, and consideration of broad economic impacts. Evaluations of potential economic effects and alternative regulatory approaches also were limited. Finally, regulators generally lacked comprehensive policies and procedures for RFA implementation. By not developing such policies and procedures, regulators' ability to consistently and effectively meet RFA objectives may be limited. The Economic Growth and Regulatory Paperwork Reduction Act of 1996 (EGRPRA) and RFA require regulators to conduct retrospective reviews, and GAO found weaknesses. EGRPRA. GAO found limitations in activities regulators undertook for retrospective reviews under EGRPRA. CFPB, which has regulatory authority for a number of consumer financial laws, was not included in the most recent review process. 
Moreover, as part of their EGRPRA reviews, the Federal Reserve, OCC, FDIC, and the National Credit Union Administration had not conducted and reported analyses of quantitative data nor had these regulators assessed the cumulative effect of regulations. Addressing these limitations in the EGRPRA processes likely would make the analyses they perform more transparent, and potentially result in additional burden reduction. RFA. The issues GAO identified with RFA retrospective reviews (section 610 reviews) included some regulators using the EGRPRA process to fulfill RFA requirements and gaps or weaknesses in analysis and documentation. But EGRPRA requirements do not fully align with RFA's, and it is not clear if the EGRPRA process satisfies the requirements of section 610. Also, regulators generally have not developed policies and procedures for section 610 reviews. By meeting section 610 review requirements, regulators will be in a better position to minimize any significant economic impact of a rule on a substantial number of small entities, as the statute seeks to ensure. What GAO Recommends GAO made a total of 20 recommendations to the financial regulators in the two reports to improve their policies and procedures and analysis under RFA and in retrospective reviews. The regulators generally agreed with the recommendations.
gao_GAO-18-575
gao_GAO-18-575_0
Background North American Energy Trade Energy markets across the United States, Canada, and Mexico are extensively integrated. For example, Canada and Mexico—respectively, the largest and fourth largest foreign suppliers of crude oil to the United States—together supply almost half of total U.S. petroleum imports, according to DOE data. The United States is by far Canada's most significant crude-oil customer. In addition, Canada and Mexico are major buyers of petroleum products refined in the United States. A growing trade in natural gas produced in the United States is also increasingly important to the energy relationship among the three countries, according to a government report. Moreover, trade in the other energy commodities, such as electricity, natural gas liquids, and coal, is comparatively small yet important to some U.S. regions. In 2017, the value of the energy trade between the United States and its North American neighbors exceeded $125 billion, with almost $83 billion in U.S. energy imports and almost $43 billion in exports, according to U.S. Census data (see fig. 1). Cross-Border Energy Infrastructure Extensive cross-border infrastructure is used to transport oil, refined petroleum products, and natural gas between the United States and both Canada and Mexico. Pipelines are the primary means of transporting crude oil from Canada to the United States; at present, six pipeline systems link the petroleum-producing regions in western Canada to U.S. markets. Marine vessels are the primary means used to convey Mexican crude oil imported by the United States. Marine vessels are also used to transport more than 75 percent of refined petroleum products exported by the United States to Canada and Mexico, and pipelines, rail, or trucks are used to transport the remainder. Pipelines are also used to transport all U.S. exports of natural gas to Canada and Mexico as well as Canadian gas exports to the United States, with 24 pipelines crossing the U.S.–Canadian border and 16 pipelines crossing the U.S.–Mexican border. Cross-border electrical infrastructure is significant between the United States and Canada but is limited between the United States and Mexico. There are 30 major U.S.–Canadian transmission connections, while synchronized U.S.–Mexican connections exist only at the border between Mexico and the state of California. North American Energy Integration and Security According to the U.S. Department of Energy (DOE), energy integration is in the interest of all North American countries because it expands the size of energy markets, creates economies of scale to attract private investment, lowers capital costs, and can reduce energy costs for consumers. Expanding energy systems may also allow for the development of a more diverse mix of energy resources, processing facilities, and end uses, all of which increase energy security. The International Energy Agency defines energy security as the uninterrupted availability of energy sources at an affordable price. According to agency documents, long-term energy security primarily involves timely investments to supply energy to meet economic development and environmental needs. Short-term energy security focuses on the ability of the energy system to react promptly to sudden changes in the supply-demand balance.
Mexico’s Energy Reform Energy reforms in Mexico’s oil and gas sector, which received limited capital investment for decades, could contribute to North American energy and security as well as cross-border energy trade, according to government reports. Until 2013, Mexico’s constitution prohibited foreign involvement in most activities in the oil and power sectors, according to a think-tank report. According to the report, the Mexican congress enacted a sweeping energy reform in 2013 that ended the state-owned oil company PEMEX’s monopoly over oil exploration and production and the state-owned electric company Federal Electricity Commission’s control over electricity generation. As a result, Mexico’s energy sector opened to foreign investment in ways not possible since the sector was nationalized in 1938, providing new opportunities for U.S. investors, according to the think-tank report and the Congressional Research Service. According to Mexican government officials, since that time Mexico has established or revamped a number of agencies to govern and operate its energy sector and has awarded leases and contracts to expand exploration, production, and distribution of energy supplies. Since Mexico’s reform was enacted, U.S. companies have participated in winning bids for each of Mexico’s oil and gas tenders, with $6.5 billion pledged in upstream investment, according to the think-tank report. North American Free Trade Agreement Renegotiation The current administration has made the renegotiation of the North American Free Trade Agreement (NAFTA) a priority; as of April 2018, negotiations to renew NAFTA had been ongoing since August 2017. According to a January 2017 Congressional Research Service report, since NAFTA entered into force on January 1, 1994, its market-opening provisions have gradually eliminated nearly all tariff and most nontariff barriers on goods produced and traded within North America, including energy commodities. In addition, according to energy industry representatives, despite previous limited investment opportunities in Mexico, NAFTA has enhanced North American energy integration, facilitating a greater flow of oil, natural gas, and petroleum-derived products among all three North American countries. U.S. Agencies’ Roles and Responsibilities Related to North American Energy Integration A number of U.S. agencies oversee activities related to energy collaboration efforts with Mexico and Canada. We identified the following eight agencies as having a role in energy cooperation efforts that may support North American energy integration. Department of Energy (DOE). DOE is responsible for advancing the energy, environmental, and nuclear security of the United States. DOE also plays a lead role in North American energy integration activities. DOE has established partnerships with its primary government partners in Canada and Mexico—the Department of Natural Resources of Canada (Natural Resources Canada) and Mexico’s Secretariat of Energy—through various memorandums of understanding (MOU). While multiple DOE offices engage in energy integration activities, the Office of International Affairs has primary responsibility for international energy cooperation and leads key cooperation initiatives. The Office of International Affairs is responsible for coordinating the framework for bilateral collaboration between DOE and Natural Resources Canada. 
According to DOE, areas of U.S.–Canadian cooperation include responsible development of unconventional oil and gas, safe and modern infrastructure, responsible use of energy and energy efficiency, and carbon capture and storage. DOE issues presidential permits for cross-border electric transmission lines and associated facilities, authorizes electricity exports, and is responsible for authorizing natural gas exports from the United States. Authorization for natural gas exports is granted without modification or delay for U.S. partner countries in free trade agreements that provide for national treatment for trade in natural gas, which, according to the Congressional Research Service, presently includes Canada and Mexico. Department of the Interior (Interior). Interior plays an important role in domestic energy production, managing energy produced on America's federally managed lands and the U.S. outer continental shelf, including oil, gas, coal, wind, solar, and hydropower. Interior also has important cooperative relationships with counterpart agencies in Canada and Mexico, according to Interior officials. As subject matter experts, various Interior offices, such as the Bureau of Ocean Energy Management and the Bureau of Safety and Environmental Enforcement, collaborate with their counterparts in Canada and Mexico to share information, experience, and best practices and provide advice and technical assistance. Interior's Office of International Affairs is responsible for providing coordination and support as needed on cross-cutting international issues that relate to more than one bureau, including energy cooperation. Department of Commerce (Commerce). Commerce's International Trade Administration works to remove barriers to U.S. energy development and trade, notably U.S. exports of energy resources and products to Mexico and Canada. The International Trade Administration also works to open markets for energy products in Mexico and Canada, and organizes trade missions. Department of State (State). As the lead agency for foreign policy related to energy, State may play a part in most bilateral and trilateral efforts. State's role related to North American energy integration includes that of a convening or facilitating authority. State's Bureau of Energy Resources typically leads these activities but also works closely with State's Bureau of Western Hemisphere Affairs. In addition, as subject matter experts, State energy and economic officers at embassies in Canada and Mexico report on energy policy and market developments and play a role in communicating, and helping to facilitate interactions, with other U.S. federal agencies and their foreign counterparts. Further, the Secretary of State plays a key role in energy infrastructure because of his or her responsibility for issuing or denying presidential permits for liquid petroleum pipelines that cross U.S. international borders. Department of Transportation (DOT). DOT plays a role in regulating and enforcing safety standards for the transportation of energy products, including crude oil and gas, ethanol, and natural gas. According to DOT, its Pipeline and Hazardous Materials Safety Administration is responsible for regulating and ensuring the safe and secure movement of energy and other hazardous materials to industry and consumers by all modes of transportation, including pipelines.
DOT officials work closely with their counterparts in Canada and Mexico when developing draft regulations related to various energy transportation issues, notably those that could affect cross-border trade and safety. Federal Energy Regulatory Commission (FERC). As an independent regulatory agency, FERC has authority to regulate the transmission of electricity, natural gas, and oil between U.S. states and plays a role in facilitating cross-border natural gas pipelines. FERC has responsibility for issuing or denying presidential permit applications for natural gas pipelines that cross the U.S. border with Mexico or Canada. United States Agency for International Development (USAID). As part of its mission and in support of U.S. foreign policy, USAID leads the U.S. government's international development efforts through partnerships and investments. According to USAID officials, USAID has played a role in integrating the electricity markets of the United States and Mexico by supporting the synchronization of regulations, enhancing investment opportunities, and creating easier transmission interconnections between the two countries. As part of those efforts, USAID facilitated technical exchanges between Mexican officials and U.S. grid operators, universities, and other stakeholders. Department of the Treasury (Treasury). The role of Treasury's newly reorganized Office of Investment, Energy, and Infrastructure includes developing a multipart approach that seeks to promote U.S. exports of energy and energy infrastructure; attract investments in the areas of energy and infrastructure; and catalyze private capital for the financing of exports and investment projects. As part of that approach, the office is in the process of formulating and negotiating energy frameworks with foreign partners, including Mexico, according to officials. U.S., Canadian, and Mexican Governments Cooperate on Energy Integration, although Some Strategic Efforts Have Made Limited Progress Generally speaking, the United States cooperates on energy integration with Canada and Mexico strategically at the presidential and ministry levels and technically at the agency level, although progress on some strategic efforts has been limited. At the presidential level, trilateral cooperation has occurred mainly through the North American Leaders' Summit, where the leaders of the three countries discuss economic issues, including energy, according to U.S. government officials. The last summit was in 2016, and as of April 2018 a future summit had not been scheduled. At the ministry level, DOE and State have recently conducted meetings with their Canadian and Mexican counterparts. However, efforts to develop a North American Energy Strategy were placed on hold in late 2017 because of disagreement about its scope, although discussions resumed in 2018, according to DOE officials. At the agency level, U.S. officials and their counterparts in Mexico and Canada cooperate technically to address specific issues related to North American energy integration. Figure 2 illustrates the three levels of cooperation on energy integration between the U.S., Canadian, and Mexican governments. Presidential-Level Cooperation U.S. cooperation with Canada and Mexico at the presidential level has occurred primarily through the North American Leaders' Summit, according to U.S. government officials. During the summits, the leaders of the three countries meet to discuss economic, social, and political issues—including energy—on which the three countries can cooperate.
The summits have taken place every 1 or 2 years since 2005; the last summit was held in June 2016, in Ottawa. State officials said that if past patterns were followed, the next summit would be scheduled in 2018 and hosted by the United States. However, a future summit had not been scheduled as of April 2018. State officials told us that it is the responsibility of the White House to decide whether a North American Leaders’ Summit will take place and that they therefore would not comment on whether a summit will be scheduled in 2018. The 2016 summit, which focused on energy, formalized the North American Climate, Clean Energy, and Environment Partnership Action Plan (Action Plan), which included pledges to cut greenhouse gas emissions from the oil and gas sectors, boost the development of clean power, and support the development of cross-border transmission lines. However, according to State officials, implementation by each country is voluntary, because the commitments in the Action Plan are not binding. According to State officials, the National Security Council (NSC)—the agency responsible for implementing the Action Plan—has indicated that specific aspects of the plan are being reviewed to ensure alignment with the current administration’s policy priorities. Officials from State, Interior, and Energy—which are among the agencies responsible for developing or implementing certain Action Plan commitments—said that, although they have continued to work with Mexico and Canada on energy-related issues, efforts to implement the plan had not been conducted since January 2017. The United States has also engaged at the presidential level bilaterally with Mexico and Canada to address issues that include energy integration. In a February 2017 meeting—their first during the current administration—the U.S. President and the Canadian Prime Minister identified a number of areas in which the two countries agreed to cooperate, including improving energy security. As of April 2018, the current administration had not held a presidential meeting with Mexico. The previous administration held bilateral presidential meetings with both Mexico and Canada that resulted in the initiation of efforts to improve energy integration. For example, meetings in 2010 and 2011 led to the establishment of, respectively, the U.S.–Mexico High-Level Regulatory Cooperation Council and the U.S.–Canada Regulatory Cooperation Council to help align the countries’ regulatory principles. Ministry-Level Cooperation The U.S. Secretaries of Energy and State cooperate with their Canadian and Mexican counterparts (i.e., ministers) through meetings focused to varying extents on energy, according to DOE and State officials. DOE cooperates with Natural Resources Canada and Mexico’s Secretariat of Energy through various bilateral and trilateral meetings that focus on energy collaboration and integration. State holds bilateral and trilateral ministry-level meetings with its Canadian and Mexican counterparts, where discussions may include energy cooperation. For example, in February 2018, State attended the North American Foreign Ministers’ Meeting in Mexico, where energy and the renegotiation of NAFTA were topics of discussion. State also co-chairs, with Commerce and the United States Trade Representative, the High Level Economic Dialogue with Mexico. However, according to Commerce officials, High Level Economic Dialogue meetings have not been held since 2016. 
According to DOE officials, ministry-level meetings result in important exchanges of information and collaborative efforts. DOE officials indicated that ministry-level cooperation on energy integration with Mexico and Canada has been consistent. For example, soon after his confirmation in March 2017, the U.S. Secretary of Energy visited Mexico to initiate talks on cooperation, where he made statements recognizing Mexico's importance both as an economic partner and, along with Canada, in promoting regional energy security. During this visit, the Secretary announced a proposal to pursue a North American energy strategy that would promote comprehensive energy and economic security for the three countries. Characterizing its development as a top priority, the Secretary stated that the North American energy strategy was meant to establish a robust trilateral work plan to guide trilateral cooperation on shared energy interests, such as developing North America's untapped energy resources, diversifying energy supplies, and supporting the growth of each country's energy industries. Canadian and Mexican energy officials whom we interviewed expressed agreement with the proposal to develop a North American energy strategy and indicated that a regional energy strategy would further facilitate energy integration efforts. DOE officials stated that DOE, Natural Resources Canada, and Mexico's Secretariat of Energy held a ministry-level meeting in November 2017—the North American Energy Ministerial—in part to discuss the proposed trilateral energy strategy. However, efforts to formalize the strategy were subsequently suspended because of a lack of agreement on its scope, according to U.S., Canadian, and Mexican officials. Instead, the three ministries released a joint summary outlining their discussions on efforts to address regional energy security. According to DOE officials, the ministries resumed discussions of the strategy in 2018 and are continuing to work on developing either a joint energy strategy or a separate document that would accomplish the objective of such a strategy. However, Canadian officials told us that any expected document on cooperation may not be comprehensive enough to be labeled a strategy. Officials of DOE, Natural Resources Canada, and Mexico's Secretariat of Energy said that, despite not having a formal North American energy strategy, the three countries maintain a cooperative ministry-level relationship. Agency-Level Cooperation U.S. agency staff and their counterparts in Mexico and Canada cooperate to address specific, technical issues related to North American energy integration, according to U.S., Canadian, and Mexican officials. According to DOE officials, cooperation may be trilateral or bilateral and may be led by various U.S. agencies with the required technical expertise. For example, according to DOE staff, they are working on a technical project with Canada and Mexico to improve energy import and export data that all three countries can use. According to DOE officials, involvement in agency-level technical cooperation can occur apart from higher-level strategic or political cooperation and often addresses ongoing issues essential to the industry's functioning, such as transborder industry inspections and information sharing. According to Interior officials, involvement in agency-level technical cooperation almost always occurs apart from higher-level strategic or political cooperation. Some U.S.
agencies’ technical cooperation with their Mexican counterparts is more recent than their cooperation with their Canadian counterparts, according to U.S. agency officials. Officials from Interior, one of the agencies involved in providing technical assistance to Mexico, explained that since 2013, when Mexico’s energy reform began allowing private investment in its oil, gas, and electricity sectors, Mexico has sought to establish regulatory frameworks and oversight mechanisms comparable to those in the United States and Canada. For example, according to Mexican officials, Interior assisted Mexico’s regulatory agencies in developing oversight regulations for their oil and gas sectors, while USAID helped Mexico’s Secretariat of Energy to plan future electricity infrastructure development and meet its clean energy goals. In contrast, U.S. agencies’ technical cooperation with Canadian agencies was already well established, according to some U.S. agency officials. U.S. Agencies Reported Numerous Activities Related to North American Energy Integration The eight federal agencies that we identified as having a role in North American energy integration—DOE, Interior, Commerce, State, DOT, FERC, USAID, and Treasury—reported involvement in 81 activities related to facilitating energy integration from 2014 through 2017. While some of these activities had multiple purposes or goals, the activities generally comprised five types: technical discussions and assistance, regulatory cooperation, international agreements and other instruments, trade promotion, and research and development. In addition, agencies reported having undertaken other activities, such as internal policy reviews. Table 1 shows the types and numbers of activities that each agency reported. (See app. III for a full listing of these agencies and descriptions of their activities). Technical Discussions and Assistance Several of the U.S. agencies we surveyed reported having participated in technical discussions that provided a forum for exchanging information and best practices with their Canadian and Mexican counterparts. Four agencies—DOE, Interior, State, and USAID—identified a total of 33 technical forums and assistance activities, such as consultative mechanisms, technical committees, and assistance programs. For example: DOE. Since 2015, DOE has participated with Natural Resources Canada and Mexico’s Secretariat of Energy in a trilateral working group focused on carbon capture, utilization, and storage (CCUS) initiatives. According to DOE officials, the group meets twice per year to exchange information about each country’s CCUS programs. DOE officials reported that the group’s primary focus has included carbon- capture technologies, with an emphasis on industrial CCUS, CCUS on power systems and carbon dioxide utilization in enhanced oil recovery, and consistent and harmonized messaging regarding CCUS. DOE also engages in bilateral nuclear security cooperation with Mexico, supporting two to three workshops with Mexico annually on topics such as nuclear security culture and cybersecurity for nuclear facilities. Interior. Since Mexico’s energy reforms, Interior has held discussions with Mexican agencies about environmental and other matters related to offshore oil and gas extraction. 
Interior also participates in a number of multilateral forums, including the International Regulators Forum, the International Offshore Petroleum Environmental Regulators, and the International Upstream Forum, which bring together government regulators from multiple countries, including Mexico and Canada. State. State has provided technical assistance to Mexico through the Power Sector Program, which supplies guidance and training on a number of regulatory frameworks, market processes, and software tools to support Mexico’s transition to a competitive power market. For example, the program has supported the development of a competitive wholesale power market through technical support to Mexico’s Energy Regulatory Commission, the National Center for Energy Control, and Mexico’s Secretariat of Energy. USAID. According to USAID officials, the agency’s Mexico mission energy program has provided technical assistance to Mexico’s Secretariat of Energy, its National Center for Energy Control, its Energy Regulatory Commission, and the Federal Electricity Commission. USAID officials reported that this assistance focused on a wide range of energy-integration activities, including the design and implementation of four energy auctions, as well as the development of a public–private contract mechanism to tap private sector resources for energy-transmission construction. As part of this program, USAID also designed, and is currently managing, an activity to reduce social impacts associated with energy-infrastructure projects. USAID also provided technical assistance on grid integration and the planning and development of infrastructure, according to officials. Regulatory Cooperation U.S. agencies engage in regulatory cooperation to support coordination in the various energy sectors and to try to identify gaps, best practices, and inconsistencies among U.S., Canadian, and Mexican regulations. Four of the agencies we surveyed—DOE, DOT, FERC, and USAID—reported 13 regulatory cooperation efforts, including discussions between regulators and trilateral and bilateral working groups focused on the various energy sectors and the development of reliability standards. For example: DOE. In 2011, the U.S. President and Canadian Prime Minister created the Canada–United States Regulatory Cooperation Council to facilitate closer cooperation between the countries to develop more effective approaches to regulation. As part of that effort, DOE and Natural Resources Canada have cooperated on two joint energy initiatives, according to DOE officials. First, DOE and Natural Resources Canada have cooperated on energy efficiency standards, with the goal of aligning new and updated standards and test methods for energy-using equipment through enhanced information sharing. Second, DOE and Natural Resources Canada have cooperated on developing natural gas–transportation standards. According to DOE officials, DOE and Natural Resources Canada will continue to build on previous work, facilitate the development of common codes and standards by industry organizations, and explore opportunities for alignment among stakeholders. DOT. DOT officials reported having collaborated with Canadian and Mexican agencies on regulations and standards related to various modes of transportation. For example, DOT has engaged in the North American Pipeline Safety Regulator Initiative. 
According to DOT’s survey response, the goal of this initiative is to share perspectives, experience, and information on regulatory activities as well as effective strategies for improving pipeline safety for each participating agency and for cross-border energy pipelines. According to officials, DOT also collaborates with Transport Canada on certain facility investigations. FERC. FERC officials reported that FERC has represented the U.S. government at meetings of the Trilateral Electric Reliability Working Group, where U.S., Canadian, and Mexican regulators coordinate on electric grid reliability issues. USAID. According to USAID officials, under a mechanism financed and managed by USAID, the National Association of Regulatory Utility Commissioners provided technical assistance to the Mexican Energy Regulatory Commission on energy-integration topics, such as auctions, reducing barriers to investment, and competitive market restructuring. International Agreements and Other Instruments According to agency officials, the U.S. government and individual U.S. agencies have entered into various formal agreements to engage Canada and Mexico on energy integration. Three of the agencies we surveyed—DOE, Interior, and State—identified 11 international agreements and other instruments related to North American energy integration, including several memorandums of understanding (MOU) with Canadian and Mexican counterpart agencies. According to officials, such agreements often include a framework under which bilateral and trilateral cooperation can proceed and can serve to highlight areas of priority or focus for the countries. According to one official, the MOUs are based on need and create a mechanism for technical experts to collaborate on specific topics or action items. Other officials noted that periodic renewals of MOUs can provide opportunities to decide whether agreed-on activities have been completed, are obsolete and should be discontinued, or should be continued. The following are examples of the agencies’ reported activities: DOE. In 2014, DOE, Natural Resources Canada, and the Mexican Secretariat of Energy signed an MOU to further collaboration on data and information sharing and to create a trilateral framework for sharing publicly available information. The MOU outlined several areas of cooperation, including systematic comparison of energy export and import flow data; sharing of publicly available geospatial information related to utility infrastructure; exchange of views and projections of cross-border flows of natural gas, electricity, crude oil, and refined products; and development of a cross-reference for the three countries’ energy sector terminology. According to DOE officials, as a result of this MOU, an integrated trilateral energy information website was launched in November 2017. The website consolidates energy-related data, integrated maps, analyses, and references from the three countries in English, French, and Spanish. Interior. Interior officials reported that since 2014, Interior has signed several binding and nonbinding instruments, including two MOUs, to facilitate cooperation with Mexico on energy and environmental matters. In 2016, Interior signed two MOUs with its counterparts in Mexico to facilitate bilateral cooperation on energy and environmental matters. Moreover, Interior helped to negotiate the Agreement between the United States and Mexico Concerning Transboundary Hydrocarbon Reservoirs in the Gulf of Mexico, which entered into force in 2014. 
According to Interior officials, the department, in coordination with State, implements the agreement, which addresses the development of oil and gas reservoirs that span the international maritime boundary between the two countries in the Gulf of Mexico. State. State has played a role in finalizing a United States–Mexico agreement on peaceful nuclear cooperation, according to State officials. The officials said that the U.S. and Mexican governments have agreed on the final text of the agreement, which is awaiting approval by the countries’ legislatures. Trade Promotion Commerce leads U.S. trade promotion efforts related to energy integration. In response to our survey, Commerce officials reported having engaged in at least 10 trade promotion activities, and Treasury officials reported one additional effort. Commerce. Commerce activities include trade missions to Canada and Mexico, seminar and event presentations, and buyers’ programs. For example, Commerce officials have organized export promotion activities targeting the Canadian and Mexican markets and, through the department’s International Buyers Program, led delegations of Canadian and Mexican executives to attend major U.S. trade shows in the energy sector to facilitate business partnerships with U.S. firms. According to Commerce officials, Canadian delegations typically consist of 15 to 20 executives and Mexican delegations typically consist of 25 to 100 executives. In addition, Commerce officials reported that the department’s Foreign Commercial Service in Canada has organized and staged annual country briefings and interactions with trade associations from multiple countries at two lead events—the Global Petroleum Show and the Atlantic Petroleum Show. Further, according to Commerce officials, the International Trade Administration conducted two energy-related trade missions to Mexico in 2017—a civil nuclear trade mission and a renewable energy trade mission. Treasury. Treasury’s Office of Investment, Infrastructure, and Energy has formulated and negotiated a framework for energy activities with Mexico’s Secretariat of Energy and the National Center for Energy Control. This energy framework is designed to achieve a high degree of energy integration, growth, and security through initiatives in the energy and infrastructure areas, to be jointly pursued by the United States and the host country partner, according to a Treasury official. The effort will involve Treasury’s Office of Technical Assistance and is envisioned to include activities such as assisting the Mexican government in realizing more value and impact from various procurement projects related to energy value chains. Research and Development U.S. agency officials and their foreign counterparts cited research and development activities as an important aspect of cooperation to facilitate North American energy integration. Three of the agencies we surveyed—DOE, Interior, and DOT—reported having engaged in seven scientific research and development activities. Examples include the following: DOE. DOE officials reported that the department plans to explore areas of mutual interest for trilateral cooperation on civil nuclear research and development with Natural Resources Canada and Mexico’s Secretariat of Energy. In addition, DOE is engaged bilaterally with Canada in research and development on topics such as nuclear reactor technologies, including small modular reactors. Interior. In 2014, Interior’s U.S. 
Geological Survey issued a report on the assessment of unconventional oil and gas resources in northeast Mexico. In addition, Interior officials reported that the U.S. Geological Survey has collaborated with Canada on scientific research to better understand the geological framework from eastern Arctic Alaska to the Canadian Arctic Islands. DOT. DOT has engaged with its Canadian counterpart in research and development activities focused on alternative fuels. For example, DOT officials reported that the department’s Federal Aviation Administration Center of Excellence for Alternative Jet Fuels and Environment and Transport Canada are undertaking cooperative research and development that primarily focuses on sustainable alternative jet fuels and technical research on aviation noise and emissions mitigation. Other Efforts Three agencies—DOE, Commerce, and State—reported engaging in a total of six other efforts related to North American energy integration. For example: DOE. A Joint U.S.–Canada Electric Grid Security and Resilience Strategy was released in December 2016. DOE and Natural Resources Canada developed this strategy and its accompanying plans to improve the grid security of the countries’ shared electric system. The three goals of the strategy are to protect today’s electric grid and enhance preparedness, to manage contingencies and enhance response and recovery efforts, and to build a more secure and resilient future electric grid. According to DOE officials, DOE is working to implement numerous items from an accompanying action plan over multiple years. Commerce and DOE. Commerce and DOE lead the United States–Mexico Energy Business Council with their Mexican counterparts. According to Commerce officials, the council is a unique effort to gather consensus recommendations from the council’s private sector representatives on ways to strengthen the economic and commercial ties between energy industries in the two countries. The council has met twice a year since its creation in 2016 and has developed a set of recommendations for consideration by U.S. and Mexican government officials. According to DOE officials, these recommendations were discussed at the council’s meeting on June 15, 2018. State. State officials reported that the department is engaged in an ongoing effort to streamline its review process for presidential permit applications for cross-border energy infrastructure. U.S. Agencies Reported Coordinating through High-Level Interagency Meetings and Working-Level Efforts Agency officials reported coordinating their energy integration–related activities through a number of efforts and mechanisms. First, an interagency working group at the National Security Council (NSC) represents a high-level interagency coordination effort. In addition, staff preparations for high-profile bilateral and trilateral summits present further opportunities for high-level interagency coordination. Moreover, agency staff engage in working-level efforts such as serving on formal coordinating bodies that bring together stakeholders in the energy sector; soliciting input from, or providing input to, other agencies; and participating in direct coordination activities at the program level. Coordination at National Security Council Interagency Meetings According to participating agency officials, NSC created a working group in May 2017 to facilitate formal interagency coordination on North American energy integration. 
Officials reported that the working group comprises representatives of NSC, DOE, Interior, Commerce, and State and has met five times since it was formed, most recently in November 2017. According to officials from participating agencies, the group’s primary purpose is to bring together the key agencies that have a stake in North American energy integration and to receive guidance and input from NSC and other agencies on related activities. Consequently, the group also serves advisory, information-sharing, and coordination purposes. We spoke with agency officials who participate in the working group, asking in particular about their experiences in several aspects that we have previously identified as key to interagency collaboration—identifying outcomes, establishing leadership, involving relevant participants and clarifying their roles and responsibilities, and obtaining necessary resources. Some officials noted the value of the group’s meetings. The following summarizes the officials’ comments. Outcomes. Officials of agencies participating in the NSC-led working group reported that it served primarily as a mechanism to promote coordination and to bring awareness of agencies’ bilateral and trilateral engagement with Canada and Mexico to the NSC and to the other agencies participating in the group. Agency officials identified this high-level, in-person coordination as valuable and as one of the group’s primary outcomes. For example, according to the officials, agencies contributed to, and developed, a matrix of cross-border energy activities with Mexico and Canada, which helps to make the administration and other agencies aware of each other’s efforts and to see the bigger picture of those efforts. Agencies also developed a coordinated set of talking points on energy integration. One official noted that, because staff from the various participating agencies often work with the same Canadian and Mexican counterparts, coordinating the talking points is useful for ensuring that messages are presented in a consistent and substantive way. Leadership. Participating agency officials indicated that NSC has a clear leadership role in the interagency group and is responsible for calling the meetings, setting the agenda, and assigning tasks to participants. Officials noted that NSC appropriately assigned tasks based on agencies’ particular expertise and capabilities. Officials also reported being generally satisfied with NSC’s leadership and noted that this group has created a needed space for high-level interagency coordination. Roles and responsibilities. Agency officials did not report any confusion about their roles and responsibilities in the NSC-led working group. According to agency officials, the agencies generally served in a primarily informational and advisory role, sharing information with each other about their respective agencies and providing input both during and outside the group’s meetings. Participating agency officials are responsible for providing updates on relevant energy-related activities at each meeting. Other assigned tasks include drafting and clearing coordination documents, talking points, and policy papers. Participants. According to participating agency officials, the agencies invited to participate in the NSC-led working group—DOE, Interior, Commerce, and State—were those with the most relevant roles related to North American energy integration. 
Officials noted that they communicated with each other regularly to follow up on issues raised at a meeting or as a part of normal agency coordination. Resources. Participating agency officials generally reported that, because the NSC working group’s meetings aligned with their regular and ongoing responsibilities, additional resources were not required. Coordination in Connection with Bilateral and Trilateral Summits Bilateral and trilateral summits can be important methods of collaboration with Mexico and Canada and also serve as episodic or event-related mechanisms for U.S. interagency coordination on energy integration–related activities. According to U.S. agency officials we interviewed, multiple agencies have provided input and advice in preparation for summits and meetings such as the North American Leaders’ Summit, the North American Energy Ministerial, the North American Foreign Ministers’ Meeting, and the U.S.–Mexico High Level Economic Dialogue. According to officials, broadly focused ministerial meetings such as these have included participation from multiple agencies. For example, the U.S.–Mexico High Level Economic Dialogue—a whole-of-government effort that included energy as one of its priorities—led to the development of the U.S.–Mexico Energy Business Council, which is co-chaired by DOE, Commerce, Mexico’s Secretariat of Energy, and Mexico’s Secretariat of Economy. Agency officials we interviewed stated that they also coordinate on follow-up efforts after these meetings. For example, DOE and Commerce officials said that they conduct ongoing coordination on council business, holding weekly calls with each other and their Mexican counterparts to coordinate the council’s activities. In addition, Commerce officials told us that they report on the council’s progress to other agencies at the NSC working group. Other Interagency Coordination Efforts Officials of multiple agencies we surveyed reported other interagency coordination efforts related to North American energy integration. These efforts included participating in formal coordinating bodies, soliciting and providing input, collaborating directly with other agencies’ staff, and collaborating at U.S. embassies. Participating in formal coordinating bodies. Multiple U.S. agencies (e.g., DOT, FERC, and the Department of Homeland Security) participate in the Electricity Sub-Sector Coordinating Council, the Energy Sector Government Coordinating Council, and the Oil and Natural Gas Sector Coordinating Council, according to DOE officials. The Electricity Sub-Sector Coordinating Council’s charter states that the council’s purpose includes coordinating activities and initiatives designed to improve the reliability and resilience of the electricity subsector and serving as the principal liaison between the council’s membership and the Energy Sector Government Coordinating Council. The Energy Sector Government Coordinating Council—the government counterpart of the Electricity Sub-Sector Coordinating Council and the Oil and Natural Gas Sector Coordinating Council—enables interagency and cross-jurisdictional coordination on planning, implementing, and executing resilience programs for the nation’s critical energy infrastructure. Agency officials reported that the Oil and Natural Gas Sector Coordinating Council serves as the principal liaison between the U.S. government and representatives of oil and natural gas companies and major trade associations on matters related to oil and natural gas physical and cyber security. 
Soliciting and providing input. Multiple agencies reported soliciting or providing input regarding certain energy integration efforts. For example, multiple agencies contributed to the development of the U.S. Quadrennial Energy Review, which explicitly discusses North American energy integration and how to better assess and promote opportunities for better coordination between U.S., Canadian, and Mexican energy systems. In addition, DOE, State, and FERC officials reported coordinating with each other and with the Department of Defense to obtain required concurrence on presidential permit applications. For example, when State was reviewing the presidential permit for the Keystone XL pipeline, State asked seven other agencies to provide their insights and opinions, according to State officials. Collaborating directly with other agencies’ staff. Multiple agency officials reported working with other agencies as needed. For example, Treasury officials reported working with staff from State, Interior, and DOE to formulate and negotiate a framework of energy- and infrastructure-related initiatives with Mexico. Agency officials also reported that agency staff responsible for various energy integration activities have engaged in a number of informal activities—including periodic meetings, telephone calls, and e-mails—to directly coordinate these efforts with related federal and industry efforts. Collaboration at U.S. embassies. Agency officials we interviewed at the U.S. embassies in Canada and Mexico stated that they have routinely collaborated and coordinated on energy integration–related activities with staff of other relevant U.S. agencies who were also stationed at the embassies or who visited the embassies from the United States. U.S. Agencies Obtain Feedback and Input from Private Sector and Civil Society Stakeholders through Both Formal and Informal Mechanisms U.S. agencies reportedly obtain feedback and input from private sector and civil society stakeholders through a variety of formal and informal mechanisms. To gather this input, agencies use formalized mechanisms such as requests for public comment through the Federal Register, public hearings, public-private bodies, and joint stakeholder events. Civil society and private industries also employ informal methods to communicate their positions to agencies and individual agency staff. Formal Mechanisms U.S. agencies solicit and consider private sector and civil society input related to North American energy integration through formal mechanisms that include provisions for public comments in response to Federal Register notices; open hearings, where public comment is allowed; and public–private input entities. The Administrative Procedure Act of 1946 generally requires agencies to publish a notice of proposed rulemaking and to provide an opportunity for public comment through the Federal Register. The private sector and civil society use this process to formally issue public statements on various topics related to energy integration. For example, with regard to the renegotiation of NAFTA, private sector entities and environmental groups have sent letters to the U.S. Trade Representative expressing their respective concerns about negotiations related to the energy sector. Agencies can also hold public hearings where stakeholders can make statements and submit data. 
According to the Office of the Federal Register, some agencies operate under laws that require rulemaking hearings, while others may hold public meetings to obtain public input or to help affected groups better understand the proposed rule. Moreover, Office of the Federal Register documents state that many agencies are beginning to use webcasts and interactive Internet sessions to broaden the audience attending public meetings. Further, under the National Environmental Policy Act, a process exists through which stakeholders can provide input during the consideration of environmental effects of proposed projects for which the agency has prepared an environmental impact statement. Additionally, agencies may use formal public–private bodies and collaborations that gather private sector and civil society input on energy integration issues. For example, the private sector members of the U.S.–Mexico Energy Business Council are able to provide recommendations to U.S. and Mexican agencies. The council’s stated objectives are to (1) bring together representatives of the respective energy industries of the United States and Mexico to discuss issues of mutual interest, particularly ways to strengthen the economic and commercial ties between energy industries in the two countries, and (2) communicate actionable, nonbinding recommendations to the U.S. and Mexican governments. According to officials, the council comprises 20 private sector representatives—10 from the United States and 10 from Mexico—and is co-chaired by DOE, Commerce, and the Mexican ministries of economy and energy. Officials reported that the council is to meet twice each year to provide consensus recommendations to both governments on ways to improve the safety and efficiency of energy-related activities, improve the commercial environment and investment climate, and enhance collective energy security. Civil society representatives also participate in some formal advisory and information-gathering collaborations. For example, in 2015, the United States and Mexico held an energy education roundtable that brought together key stakeholders to explore possible areas for cooperation, including sharing best practices in energy education, developing vocational and polytechnic-level energy skills training programs, examining joint industry certifications, and promoting greater communication among key players in both countries. In another example, the 2016 North American Leaders’ Summit announced the first annual Stakeholder Dialogue on North American Competitiveness, with a goal of providing private sector, local government, labor, and civil society representatives an opportunity to contribute ideas on increasing North American competitiveness. In response, the Wilson Center, a think tank, in coordination with the three North American governments, assembled a group of more than 40 representatives of entities engaged in North American issues. The results of this dialogue included recommendations on energy integration–related issues, such as energy infrastructure and the U.S. presidential permitting process. Civil society stakeholders also provide expertise by participating in activities such as the U.S.–Canada Northern Oil and Gas Research Forum. According to agency officials, this forum has typically been held every 2 years at locations in the United States and Canada since its founding in 2008 by Interior’s Bureau of Ocean Energy Management and Indigenous and Northern Affairs Canada. 
The forum provides an opportunity for decision-makers, regulators, industry members, nongovernmental organizations, and scientists to discuss current scientific research and future directions for northern oil and gas activities, according to Interior officials. Informal Mechanisms Agencies receive input on North American energy integration from the private sector and civil society through informal mechanisms such as letters, emails, phone calls, interactions at various related events, personal connections, and reports. According to private sector and civil society representatives we interviewed, open letters (e.g., letters to the editor) and letters sent to agencies allow such groups to describe their perspectives on policy choices and advocate for preferred solutions. One civil society stakeholder noted that think tanks and trade association reports and forums also play an important role in allowing civil society and industry to communicate their perspectives and positions to Congress and federal agencies. Another civil society stakeholder reported having directly contacted State officials responsible for issuing presidential permits for the Keystone XL pipeline. Industry association representatives noted that they also have opportunities for informal meetings with agency officials at various events or through phone calls. During our discussions with civil society and private sector organizations, we heard that informal feedback or input mechanisms between stakeholders and agencies were available and functional. U.S., Canadian, and Mexican Officials Suggested Steps to Further Energy Integration but Expressed General Satisfaction with Intergovernmental Cooperation Some of the officials we interviewed from all three countries suggested several new or additional steps that the U.S. government could take, in cooperation with Canada and Mexico, to address factors that might impede energy integration and to facilitate a more integrated and secure energy market in North America. Suggestions mentioned by officials in all three countries included aligning energy-related regulations, streamlining the U.S. presidential permitting process, facilitating cross-border transportation of equipment and workers, and involving states and provinces in energy integration efforts. However, U.S., Canadian, and Mexican officials we interviewed expressed general satisfaction with bilateral and trilateral strategic and technical cooperation regarding efforts to facilitate North American energy integration. U.S., Canadian, and Mexican Officials Suggested Steps to Enhance North American Energy Integration Some U.S., Canadian, and Mexican officials we interviewed suggested several new or additional steps that the U.S. government, in cooperation with Canada and Mexico, could take to address several factors that may impede energy integration and to facilitate a more integrated and secure energy market in North America. According to some officials, factors that may impede energy integration include duplicative or inconsistent energy regulations, inconsistent cross-border permitting processes, difficulties in cross-border movement of equipment and workers, and the need to involve states and provinces in transborder issues. The text box shows steps suggested by at least one official in all three countries to address these factors. Steps Suggested by U.S., Canadian, and Mexican Officials to Further North American Energy Integration Align energy-related regulations. 
To reduce the burden on energy companies conducting transborder activities, align regulations, codes, and standards in appropriate sectors in all three countries, to the extent possible. Streamline the U.S. presidential permitting process. To ensure that requirements are consistently implemented, establish a set process for obtaining presidential permits for transborder energy infrastructure projects. Facilitate cross-border movement of equipment and workers. To avoid delays in business and trade transactions, implement processes to facilitate movement of energy company personnel and equipment across borders. Involve states and provinces in energy integration efforts. Given states’ and provinces’ control over local regulations, resources, and markets, increase their involvement in efforts to advance North American energy integration. Align Energy-Related Regulations U.S., Canadian, and Mexican officials suggested that the three countries should continue working together to eliminate unnecessary differences in energy sector regulations. Some officials indicated that harmonizing, when appropriate, or establishing comparable regulations, codes, and standards in all three countries could reduce the burden on energy companies conducting transborder activities and enhance regulatory cooperation. According to some government officials and private sector representatives we interviewed, the need to align U.S., Canadian, and Mexican energy-related regulations is generally recognized by industry stakeholders as a factor that could be addressed to further facilitate regional energy integration. Several government initiatives have been undertaken to increase alignment or reduce differences among the countries’ regulatory frameworks, such as the creation of the U.S.–Canada Regulatory Cooperation Council and the U.S.–Mexico High-Level Regulatory Cooperation Council. Nevertheless, officials in the three countries identified a need to expand efforts in certain areas. For example, according to one Canadian official, because of the large number of energy regulations, much remains to be done to align them. According to an industry association representative in Mexico, eliminating duplicative regulations is very challenging and efforts to align them have sometimes not been sufficient. For example, he explained that one company—a member of his association—embarking on a transborder project reported having to conduct extensive work to meet Mexican regulations, despite an earlier effort by Mexico’s Agency for Safety, Energy, and Environment and Interior’s Bureau of Safety and Environmental Enforcement to develop similar regulations for safety and environmental management systems. Streamline the U.S. Presidential Permitting Process Some U.S., Canadian, and Mexican government officials suggested that the U.S. government should streamline its presidential permitting process to ensure that requirements for obtaining permits for transborder energy infrastructure projects are consistently implemented. U.S. presidential permits are required for the construction, connection, operation, and maintenance of certain facilities that cross the United States’ borders with Canada and Mexico. Issuance or denial of permits is delegated to the U.S. Secretary of State for pipelines that transport liquids such as petroleum and petroleum products, to FERC for natural gas pipelines, and to DOE for electricity transmission lines. 
Some officials in Canada and Mexico explained that industry sector representatives have expressed concerns about the process for obtaining the permits. Members of a Canadian energy association expressed concern that requirements for the presidential permits have not been implemented consistently or in a timely manner. According to a representative from the association, in some cases presidential permits have been granted in a relatively short period of time, while in other cases the process has taken over 2 years. For example, members of the Canadian energy association said that the company developing the Keystone XL pipeline spent a significant amount of money and time trying to navigate the permit process before receiving permits in March 2017. A representative from a Mexican energy association told us that, whereas Mexico’s energy reforms were aimed at increasing efficiency to attract investment, the processing time for U.S. presidential permits—up to 2 years, according to the representative—can both interfere with the Mexican government’s efforts and hinder further integration between the two countries. Some U.S. government officials acknowledged a need for streamlining the presidential permitting process. State and DOE officials informed us that they had initiated internal reviews to streamline the process but that as of April 2018, these reviews were ongoing and a completion date had not been set. Facilitate Cross-Border Movement of Equipment and Workers U.S., Canadian, and Mexican government officials suggested implementing processes to facilitate the movement of energy company personnel and equipment across borders to reduce delays in business and trade transactions. In a discussion among stakeholders after the 2016 North American Leaders’ Summit, participants agreed that there is a significant need to increase the efficiency with which cargo and individuals cross North America’s land borders and that border efficiency and the competitiveness of North America as a region are strongly linked. In addition, a U.S. official working with small and midsize U.S. energy companies with operations in the United States and Canada told us that moving equipment and personnel across the border can be challenging. The official explained that equipment has sometimes crossed the border with minimal delays but at other times has been detained for hours or days. Energy associations from the United States, Canada, and Mexico have advocated jointly for NAFTA negotiations to include provisions that would facilitate the movement of equipment, such as drilling rigs and vessels, and personnel—including for emergency response—across the U.S.–Canadian and U.S.–Mexican borders. These associations have also advocated for a NAFTA visa program to provide access for skilled energy professionals. Involve States and Provinces in Energy Integration Efforts U.S., Canadian, and Mexican government officials suggested increasing the involvement of states and provinces in energy integration efforts, given their control over local regulations, resources, and markets. The roles played by states and provinces in the countries’ energy sectors vary by country. Canada’s system, where provinces have control over natural resources and specific related procedures such as the approval process for local permits, is less centralized than the United States’ system, according to a Canadian government official. 
In contrast, Mexico’s system is more centralized than the United States’, with Mexican states having less control over natural resources, according to a Mexican government official. In addition, the official stated that, while discussion of North American energy integration often focuses on the role of national governments, the inclusion of U.S. and Mexican states and Canadian provinces—especially those on the border—in discussions of regional energy integration is essential. Moreover, the electricity sector is particularly influenced by the participation of states and provinces because of the sector’s dependence on regional markets and interconnected infrastructure, according to an electricity sector representative. For example, the representative stated that the design of Canada’s electrical transmission system sometimes makes it easier to transport electricity from north to south, to the United States, than from east to west, across Canadian provinces. As a result, U.S. markets are an important outlet for Canadian generators in eastern Canada and the Pacific Northwest. Currently, little cross-border electricity trade takes place between the United States and Mexico, other than the importation of electricity from a few power plants in Baja California, Mexico, to supply demand in the San Diego area. A Mexican government official stated that the limited level of electricity integration between the United States and Mexico is due in part to the role of the U.S. states in regulating the electricity industry, since their regulations and plans for working with Mexico may differ. According to the official, it is therefore essential to include the states in any discussions about promoting integration of the U.S. and Mexican electricity sectors. According to USAID officials, a public–private contractual mechanism developed by USAID to tap private sector resources for constructing electricity transmission will be used for the first time to build a project to connect the Mexican state of Baja California with the rest of the Mexican grid, which had previously been isolated from both Baja California and the market in the U.S. state of California. According to these officials, the transmission line could also connect the Mexican state of Sonora with the U.S. state of Arizona. U.S., Canadian, and Mexican Officials’ Views on Bilateral and Trilateral Cooperation U.S., Canadian, and Mexican energy officials we interviewed indicated that they were generally satisfied with bilateral and trilateral strategic and technical cooperation to facilitate North American energy integration. According to U.S. officials, energy is an area in which the interests of the United States, Canada, and Mexico align and cooperation has been well established. U.S. officials also stated that trilateral cooperation works well at both the strategic and technical levels and that regional cooperation enhances energy security for all three countries. Canadian officials stated that cooperation with the United States at the strategic level has often served as a springboard for purposeful action to address shared priorities. Mexican officials stated that there has been extensive communication with the United States on energy issues, in particular at the ministerial and agency levels, which has led to activities to improve integration. Some officials also identified changes in the overall foreign policy context that could affect cooperation in the future. 
Some Canadian and Mexican government officials we interviewed expressed concern that the renegotiation of NAFTA and the administration’s decision to withdraw from the Paris Agreement could create uncertainty among investors in the energy sector. U.S., Canadian, and Mexican officials, as well as private sector representatives we interviewed, stated that they viewed NAFTA renegotiation as an opportunity to improve the agreement and that any changes to NAFTA should "do no harm" to free-trade arrangements in energy commodities. However, a Canadian official told us industry representatives had expressed concern that the issue of energy could be used as a pawn in NAFTA renegotiations, resulting in harm to the sector. Furthermore, some Mexican officials stated that they were particularly concerned that any change in Mexico’s status as a U.S. free-trade partner could complicate flows of natural gas from the United States, which has assumed a more important role as an energy source for Mexico. Some Canadian and Mexican officials we interviewed expressed concern that the June 2017 announcement of the United States’ intention to withdraw from the Paris Agreement could create a perception of an uneven playing field and uncertainty among energy sector investors, given Canada’s and Mexico’s continued participation in the accord. However, the officials noted that the commitment of some U.S. states, cities, and private sector companies to adhere to the accord’s tenets may minimize any negative impacts of the U.S. government’s withdrawal on their countries’ energy sectors. State and DOE officials we interviewed said they did not expect the U.S. renegotiation of NAFTA and withdrawal from the Paris Agreement to have a significant impact and stated that the energy sector in North America is already well integrated and well positioned to address these changes. U.S. government officials we interviewed noted that the United States’ energy sector is already extensively integrated with both Canada’s and Mexico’s. The officials stated that the most easily accomplished actions to promote integration have already been taken and that they are primarily looking for ways to enhance a system that is working well. They also stated that it is important not to disrupt the advances that have already been made. Further, they stated that, to enhance integration, it is necessary to focus on practical steps that result in concrete changes to further facilitate cross-border production and trade. Agency Comments and Our Evaluation We provided a draft of this report to DOE, Interior, Commerce, State, DOT, FERC, USAID, Treasury, the Environmental Protection Agency, and the Department of Homeland Security for review and comment. We received comments from USAID, which are reproduced in appendix IV. In its comments, USAID provided additional information about the agency’s contributions to North American energy integration, which we incorporated in the report as appropriate. DOE, Interior, Commerce, DOT, FERC, and Treasury provided technical comments, which we also incorporated as appropriate. State, the Environmental Protection Agency, and the Department of Homeland Security informed us that they had no comments. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Energy, the Interior, Commerce, State, Transportation, the Treasury, and Homeland Security; the Executive Director of FERC; the Administrators of USAID and the Environmental Protection Agency; and other interested parties. 
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612 or gianopoulosk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope and Methodology This report examines (1) ways in which the U.S., Canadian, and Mexican governments cooperate on North American energy integration; (2) U.S. agencies’ activities to facilitate North American energy integration; (3) U.S. agencies’ efforts to coordinate among themselves on North American energy integration; (4) ways in which U.S. agencies obtain feedback and input from U.S. industry and civil society regarding North American energy integration; and (5) steps that U.S., Canadian, and Mexican officials suggested to further facilitate North American energy integration. To address these objectives, we reviewed documents and information provided by cognizant U.S., Canadian, and Mexican government officials; U.S., Canadian, and Mexican energy sector associations; and U.S. civil society groups such as think tanks and environmental nongovernmental organizations. We conducted field work in Mexico City, Mexico, and in Ottawa, Canada, where we met with government and energy sector association representatives. We also collected information on activities related to North American energy integration from U.S. agencies implementing such activities. In addition, we obtained and analyzed data from the U.S. Census Bureau regarding the extent of the United States’ energy trade with Canada and Mexico. To examine the ways in which the U.S., Canadian, and Mexican governments cooperate on North American energy integration, we interviewed officials in each country who were responsible for energy- related cooperation, asking about the processes they follow to cooperate on energy integration at the strategic and technical levels. In the United States, we spoke with officials from the Departments of Energy (DOE), State (State), Interior (Interior), Commerce (Commerce), the Treasury (Treasury), Transportation (DOT), and Homeland Security (DHS); the U.S. Agency for International Development (USAID); and the Federal Energy Regulatory Commission (FERC). We also corresponded with officials from the Environmental Protection Agency. In addition, we spoke with officials of the North American Electric Reliability Corporation. Further, we met with Canadian and Mexican embassy officials in Washington, D.C. In Canada, we met with officials from Natural Resources Canada and Global Affairs Canada and spoke with officials from the Alberta provincial government. In Mexico, we met with officials from Mexico’s Secretariat of Energy; National Hydrocarbons Commission; Energy Regulatory Commission; National Gas Control Center; National Center for Energy Control; and Agency for Safety, Energy and Environment. We also reviewed documents developed to formalize bilateral and trilateral cooperation, such as the 2016 North American Climate, Clean Energy, and Environment Partnership Action Plan; documents related to the U.S.–Mexico High Level Economic Dialogue; and the “2017 North American Energy Ministerial Joint Summary.” To examine U.S. 
agencies’ energy integration activities implemented since 2014, we reviewed agency documents; interviewed DOE, Interior, Commerce, State, DOT, FERC, USAID, and DHS officials; and corresponded with officials from the Environmental Protection Agency. Also, in May 2018, we contacted Treasury officials after learning about recent Treasury activities related to North American energy integration. In addition, we sent a survey to DOE, Interior, Commerce, State, DOT, FERC, USAID, and DHS, asking them to, among other things, identify their energy integration activities implemented from 2014 through 2017; describe each activity, including its purpose; identify the type of activity (e.g., joint research and development, trade mission, forum for technical discussion, regulatory cooperation, technical assistance, other); and identify other agencies participating in the activity. The survey also asked whether the identified activities were bilateral with Canada or Mexico, trilateral, multilateral, or unilateral. We followed up with the agencies to ask for clarifications. We did not independently determine whether the agencies had identified all relevant activities. In addition, we reviewed agencies’ responses to identify any overlap and duplication among federal energy integration efforts. We focused on the goals and outcomes of energy integration activities as described in the agency-provided descriptions and in background material, as needed. We also focused on the activities’ target populations, or intended beneficiaries, since a bilateral U.S.–Canadian activity would have a different target population than a bilateral U.S.–Mexican activity. We compared activities within categories to look for evidence of duplication or overlap based on the description provided by the agency or other background material (i.e., agency website and documents). We determined, in accordance with GAO’s definitions of duplication and overlap, that no two of the agency activities were duplicative or overlapping because the activities did not have the same or similar goals or the same or similar beneficiaries. To examine U.S. agencies’ efforts to coordinate with each other on North American energy integration, we conducted interviews with DOE, Interior, Commerce, and State officials, asking them to identify and discuss any mechanisms, such as interagency groups, offices, activities, or initiatives, used for collaboration on energy integration. To conduct a more detailed analysis of interagency coordination on North American energy integration, we interviewed participants in a National Security Council (NSC)–led interagency working group using a standard set of questions about interagency coordination and collaboration. We selected the NSC working group because it provided an example of very high-level interagency collaboration, could address multiple aspects of energy integration, and had a specific focus and effect on energy-integration efforts. Although we had intended to interview NSC officials, as of April 2018, NSC had not responded to our requests for documents and an interview. As a result, we were unable to include NSC views about the interagency collaboration considerations discussed. However, we were able to mitigate this limitation by interviewing and comparing the testimonial evidence of officials from the four participating agencies. 
We provided agency officials a structured set of questions about interagency coordination and collaboration based on key considerations for implementing interagency collaboration identified in a prior GAO report. To examine the ways in which U.S. agencies obtain feedback and input from U.S. industry and civil society, we conducted several informational interviews with industry associations and civil society organizations, such as think tanks and environmental groups. To identify these organizations, we reviewed witness lists at relevant congressional and agency hearings, panel lists at energy-related conferences, and recommendations from agency officials. In addition, as we interviewed representatives of these organizations, we asked them to identify other groups that might provide further information. Using this approach, we interviewed representatives from seven civil society groups and 10 industry associations, including organizations based in Mexico and Canada. However, our sample was judgmentally selected, and these representatives’ opinions are not generalizable to all private industry and civil society stakeholders. To report on steps suggested by U.S., Canadian, and Mexican officials to further facilitate North American energy integration, we interviewed officials in each country who were responsible for energy-related cooperation and asked them to suggest additional steps or options that the United States, in collaboration with Canada and Mexico, could take to facilitate building a more integrated and secure energy market in North America. In the United States, we spoke with officials from DOE, Interior, Commerce, State, DOT, FERC, USAID, Treasury, and DHS. We also spoke with Canadian and Mexican embassy officials in Washington, D.C. Additionally, in Canada, we spoke with officials from Natural Resources Canada, Global Affairs Canada, and the Alberta provincial government. In Mexico, we spoke with officials from Mexico’s Secretariat of Energy, National Hydrocarbons Commission, Energy Regulatory Commission, National Gas Control Center, National Center for Energy Control, and Agency for Safety, Energy, and Environment. We analyzed responses provided by officials in the three countries and identified four steps suggested by one or more officials in each of the countries: (1) aligning energy-related regulations, (2) streamlining the presidential permitting process, (3) facilitating cross-border movement of equipment and workers, and (4) involving states and provinces in energy integration efforts. After identifying these four steps, we elaborated on each one by reviewing related documents and reports and discussing them with private sector representatives and researchers in nongovernmental organizations. We did not elaborate on steps suggested by U.S., Canadian, or Mexican officials that were not suggested by at least one official in all three countries. We conducted this performance audit from April 2017 to July 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: North American Electric Reliability Corporation The North American Electric Reliability Corporation (NERC) is a not-for-profit international corporation that plays a role in regulating and establishing reliability standards for cross-border North American electricity markets. NERC’s mission is to assure the effective and efficient reduction of risks to the reliability and security of the bulk power system in the United States, Canada, and part of Mexico. While not a federal agency, in July 2006, NERC was certified by the Federal Energy Regulatory Commission (FERC) as the electric reliability organization for the United States. Subsequently, compliance with NERC reliability standards became a legal requirement for certain bulk power system owners, operators, and users. NERC is subject to oversight by FERC and governmental authorities in Canada. NERC has a trilateral focus, which enables it to more easily forge partnerships in Canada and Mexico, according to NERC officials. The officials stated that NERC’s work has primarily focused on electrical grid reliability in Canada and the United States, as Mexico’s electricity market was restricted until its recent reforms. NERC identified several activities related to North American energy integration. For example: NERC leads GridEx, a biennial electrical grid security exercise involving industry and government from the United States, Canada, and Mexico. The exercise attempts to execute the electricity sector’s emergency response to simulated cyber and physical security threats and incidents, strengthen utilities’ crisis response functions, and provide input for lessons learned. NERC engages in regulatory cooperation with government entities in Canada and Mexico to improve the reliability of the electric grid. As the electric reliability organization certified by FERC, NERC convenes stakeholders from across the interconnected North American bulk power system to develop continent-wide reliability standards. NERC has entered into a number of memorandums of understanding (MOU) with Canada and Mexico. In September 2006, NERC signed an MOU with Canada’s National Energy Board that committed the parties to work together to promote a reliable bulk electric system in North America through a cooperative relationship. Moreover, NERC officials stated that because electricity is the domain of Canadian provinces, NERC signed MOUs with the responsible organizations in a number of provinces. Further, the Mexican government recently began to engage with NERC to bring certain areas into compliance with NERC standards, and officials reported that in March 2017, NERC and Mexico’s Energy Regulatory Commission and National Center for Energy Control signed an MOU as a framework for, and to facilitate, cooperation. According to NERC officials, the MOU with these Mexican energy agencies defines roles and responsibilities, states Mexico’s general commitment to use NERC standards as a basis for its electric reliability system, and identifies some technical areas in which NERC will provide capacity-building assistance. Appendix III: U.S. Agencies’ North American Energy Integration Activities, 2014-2017 Table 2 describes activities related to North American energy integration in 2014 through 2017 reported by eight U.S. agencies—the Departments of Energy, Interior, Commerce, State, Transportation, and the Treasury; the U.S. Agency for International Development; and the Federal Energy Regulatory Commission. Appendix IV: Comments from U.S. 
Agency for International Development Appendix V: GAO Contact and Staff Acknowledgements GAO Contact Kimberly Gianopoulos, (202) 512-8612 or gianopoulosk@gao.gov. Staff Acknowledgements In addition to the contact named above, Kim Frankena (Assistant Director), Francisco M. Enriquez (Analyst-in-Charge), Brian Tremblay, Martin De Alteriis, Philip Farah, Christopher Keblitis, Reid Lowe, Grace Lui, Franklin Rusco, and Sarah Veale made key contributions to this report.
Why GAO Did This Study

According to a U.S. government study, increased U.S. energy trade with Canada and Mexico—two of the United States' top energy trade partners—is viewed as a major contributor to U.S. economic prosperity and energy security. In recent years, North American energy production has experienced changes. For example, the United States has become the world's top oil producer, Canada has substantially increased its oil output, and Mexico has implemented energy reforms. To address energy production and trade issues, public and private sector stakeholders have advocated for further integration of the three North American countries' energy sectors.

GAO was asked to review the role of U.S. agencies in supporting energy integration in North America. This report examines (1) ways in which the U.S., Canadian, and Mexican governments cooperate on North American energy integration; (2) U.S. agencies' activities to facilitate North American energy integration; (3) U.S. agencies' efforts to coordinate among themselves on North American energy integration; (4) ways in which U.S. agencies receive feedback from U.S. industry and civil society regarding North American energy integration; and (5) steps that U.S., Canadian, and Mexican officials suggested to further facilitate North American energy integration. GAO reviewed bilateral and trilateral cooperation activities and mechanisms; surveyed U.S. agencies involved in energy integration; and interviewed U.S., Canadian, and Mexican energy officials. GAO is not making any recommendations in this report.

What GAO Found

Cooperation. The United States cooperates with Canada and Mexico on integrating North American energy markets and infrastructure (energy integration). Cooperation occurs at the presidential and ministerial levels (e.g., the countries' secretaries or ministers of energy) for strategic issues and at the agency level for technical issues. However, progress on some strategic issues has been limited. For example, development of a North American energy strategy, which the U.S. Department of Energy (DOE) proposed in March 2017, was suspended later that year because of disagreement about its scope. Discussions of the strategy resumed in 2018, according to DOE officials.

Agency activities. Eight U.S. agencies have engaged in multiple efforts to facilitate North American energy integration. DOE generally serves as the lead agency on energy integration issues, while the Department of State—the lead agency on foreign policy—also leads some bilateral and trilateral efforts. Other agencies play roles in areas such as regulatory compliance or efforts to open energy markets. Agency officials GAO surveyed and interviewed identified 81 energy integration–related activities conducted in 2014 through 2017, including international agreements and other instruments, research and development, technical forums and assistance, regulatory cooperation, and trade promotion.

Interagency coordination. U.S. agency officials reported coordinating on energy integration through high-level U.S. interagency meetings, summits, and other means. For example, agencies participating in a National Security Council–led working group share information, provide advice, and coordinate on activities. Agency officials also reported using mechanisms such as stakeholder forums and staff discussions to coordinate on energy integration issues.

Stakeholder feedback. U.S.
agencies receive feedback on energy integration issues from the private sector and civil society through formal mechanisms such as comments in the Federal Register and public–private advisory entities. For example, the U.S.–Mexico Energy Business Council is designed to capture private sector feedback. Informal feedback comes through activities such as emails, phone calls, and letters.

Steps suggested by U.S., Canadian, and Mexican officials. Officials in the three countries expressed general satisfaction with intergovernmental cooperation on energy integration and said cooperative activities had helped foster integration. They also suggested further work in areas such as aligning energy regulations.

Source: GAO analysis of information provided by U.S., Canadian, and Mexican officials. | GAO-18-575
Background IT systems supporting federal agencies and our nation’s critical infrastructures are inherently at risk. These systems are highly complex and dynamic, technologically diverse, and often geographically dispersed. This complexity increases the difficulty in identifying, managing, and protecting the numerous operating systems, applications, and devices comprising the systems and networks. Compounding the risk, federal systems and networks are also often interconnected with other internal and external systems and networks, including the Internet. This increases the number of avenues of attack and expands their attack surface. As systems become more integrated, cyber threats will pose an increasing risk to national security, economic well-being, and public health and safety. Advancements in technology, such as data analytics software for searching and collecting information, have also made it easier for individuals and organizations to correlate data (including PII) and track it across large and numerous databases. For example, social media has been used as a mass communication tool where PII can be gathered in vast amounts. In addition, ubiquitous Internet and cellular connectivity makes it easier to track individuals by allowing easy access to information pinpointing their locations. These advances—combined with the increasing sophistication of hackers and others with malicious intent, and the extent to which both federal agencies and private companies collect sensitive information about individuals—have increased the risk of PII being exposed and compromised. Cybersecurity incidents continue to impact entities across various critical infrastructure sectors. For example, in its 2018 annual data breach investigations report, Verizon reported that 53,308 security incidents and 2,216 data breaches were identified across 65 countries in the 12 months since its prior report. Further, the report noted that cybercriminals can often compromise a system in just a matter of minutes—or even seconds, but that it can take an organization significantly longer to discover the breach. Specifically, the report stated nearly 90 percent of the reported breaches occurred within minutes, while nearly 70 percent went undiscovered for months. These concerns are further highlighted by the number of information security incidents reported by federal executive branch civilian agencies to DHS’s U.S. Computer Emergency Readiness Team (US-CERT). For fiscal year 2017, 35,277 such incidents were reported by the Office of Management and Budget (OMB) in its 2018 annual report to Congress, as mandated by the Federal Information Security Modernization Act (FISMA). These incidents include, for example, web-based attacks, phishing, and the loss or theft of computing equipment. Different types of incidents merit different response strategies. However, if an agency cannot identify the threat vector (or avenue of attack), it could be difficult for that agency to define more specific handling procedures to respond to the incident and take actions to minimize similar future attacks. In this regard, incidents with a threat vector categorized as “other” (which includes avenues of attacks that are unidentified) made up 31 percent of the various incidents reported to US-CERT. Figure 1 shows the percentage of the different types of incidents reported across each of the nine threat vector categories for fiscal year 2017, as reported by OMB. 
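The reported percentages follow directly from tallying each incident's assigned threat vector category. The snippet below is a minimal, illustrative sketch (not drawn from the OMB report or US-CERT data) of how such a breakdown might be computed and how a large share of unidentified ("other") vectors could be flagged; the category labels beyond those mentioned above, and all of the counts, are made-up assumptions for the example.

```python
from collections import Counter

# Illustrative incident records tagged with a threat vector category.
# Labels are paraphrased; counts are invented for the sketch.
incidents = (
    ["Web"] * 120
    + ["E-mail/Phishing"] * 90
    + ["Loss or Theft of Equipment"] * 40
    + ["Improper Usage"] * 60
    + ["Other"] * 140   # avenue of attack could not be identified
)

def vector_breakdown(records):
    """Return each threat vector's percentage share of total reported incidents."""
    counts = Counter(records)
    total = sum(counts.values())
    return {vector: round(100 * n / total, 1) for vector, n in counts.items()}

shares = vector_breakdown(incidents)
for vector, pct in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{vector:<28} {pct:>5}%")

# A large "Other" share signals that handling procedures cannot be tailored
# to the avenue of attack for a sizable portion of incidents.
if shares.get("Other", 0) > 25:
    print("Warning: over a quarter of incidents have an unidentified threat vector.")
```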
These incidents and others like them can pose a serious challenge to economic, national, and personal privacy and security. The following examples highlight the impact of such incidents: In March 2018, the Mayor of Atlanta, Georgia, reported that the city was victimized by a ransomware cyberattack. As a result, city government officials stated that customers were not able to access multiple applications that are used to pay bills or access court related information. In response to the attack, the officials noted that they were working with numerous private and governmental partners, including DHS, to assess what occurred and determine how best to protect the city from future attacks. In March 2018, the Department of Justice reported that it had indicted nine Iranians for conducting a massive cybersecurity theft campaign on behalf of the Islamic Revolutionary Guard Corps. According to the department, the nine Iranians allegedly stole more than 31 terabytes of documents and data from more than 140 American universities, 30 U.S. companies, and five federal government agencies, among other entities. In March 2018, a joint alert from DHS and the Federal Bureau of Investigation (FBI) stated that, since at least March 2016, Russian government actors had targeted the systems of multiple U.S. government entities and critical infrastructure sectors. Specifically, the alert stated that Russian government actors had affected multiple organizations in the energy, nuclear, water, aviation, construction, and critical manufacturing sectors. In July 2017, a breach at Equifax resulted in the loss of PII for an estimated 148 million U.S. consumers. According to Equifax, the hackers accessed people’s names, Social Security numbers (SSN), birth dates, addresses and, in some instances, driver’s license numbers. In April 2017, the Commissioner of the Internal Revenue Service (IRS) testified that the IRS had disabled its data retrieval tool in early March 2017 after becoming concerned about the misuse of taxpayer data. Specifically, the agency suspected that PII obtained outside the agency’s tax system was used to access the agency’s online federal student aid application in an attempt to secure tax information through the data retrieval tool. In April 2017, the agency began notifying taxpayers who could have been affected by the breach. In June 2015, OPM reported that an intrusion into its systems had affected the personnel records of about 4.2 million current and former federal employees. Then, in July 2015, the agency reported that a separate, but related, incident had compromised its systems and the files related to background investigations for 21.5 million individuals. In total, OPM estimated 22.1 million individuals had some form of PII stolen, with 3.6 million being a victim of both breaches. Federal Information Security Included on GAO’s High-Risk List Since 1997 Safeguarding federal IT systems and the systems that support critical infrastructures has been a long-standing concern of GAO. Due to increasing cyber-based threats and the persistent nature of information security vulnerabilities, we have designated information security as a government-wide high-risk area since 1997. In 2003, we expanded the information security high-risk area to include the protection of critical cyber infrastructure. 
At that time, we highlighted the need to manage critical infrastructure protection activities that enhance the security of the cyber and physical public and private infrastructures that are essential to national security, national economic security, and/or national public health and safety. We further expanded the information security high-risk area in 2015 to include protecting the privacy of PII. Since then, advances in technology have enhanced the ability of government and private sector entities to collect and process extensive amounts of PII, which has posed challenges to ensuring the privacy of such information. In addition, high- profile PII breaches at commercial entities, such as Equifax, heightened concerns that personal privacy is not being adequately protected. Our experience has shown that the key elements needed to make progress toward being removed from the High-Risk List are top-level attention by the administration and agency leaders grounded in the five criteria for removal, as well as any needed congressional action. The five criteria for removal that we identified in November 2000 are as follows: Leadership Commitment. Demonstrated strong commitment and top leadership support. Capacity. The agency has the capacity (i.e., people and resources) to resolve the risk(s). Action Plan. A corrective action plan exists that defines the root cause, solutions, and provides for substantially completing corrective measures, including steps necessary to implement solutions we recommended. Monitoring. A program has been instituted to monitor and independently validate the effectiveness and sustainability of corrective measures. Demonstrated Progress. Ability to demonstrate progress in implementing corrective measures and in resolving the high-risk area. These five criteria form a road map for efforts to improve and ultimately address high-risk issues. Addressing some of the criteria leads to progress, while satisfying all of the criteria is central to removal from the list. Figure 2 shows the five criteria and illustrative actions taken by agencies to address the criteria. Importantly, the actions listed are not “stand alone” efforts taken in isolation from other actions to address high- risk issues. That is, actions taken under one criterion may be important to meeting other criteria as well. For example, top leadership can demonstrate its commitment by establishing a corrective action plan including long-term priorities and goals to address the high-risk issue and using data to gauge progress—actions which are also vital to monitoring criteria. As we reported in the February 2017 high-risk report, the federal government’s efforts to address information security deficiencies had fully met one of the five criteria for removal from the High-Risk List— leadership commitment—and partially met the other four, as shown in figure 3. We plan to update our assessment of this high-risk area against the five criteria in February 2019. Ten Critical Actions Needed to Address Major Cybersecurity Challenges Based on our prior work, we have identified four major cybersecurity challenges: (1) establishing a comprehensive cybersecurity strategy and performing effective oversight, (2) securing federal systems and information, (3) protecting cyber critical infrastructure, and (4) protecting privacy and sensitive data. To address these challenges, we have identified 10 critical actions that the federal government and other entities need to take (see figure 4). 
The four challenges and the 10 actions needed to address them are summarized below. In addition, we discuss each of the 10 actions in more detail in appendices II through XI.

Establishing a Comprehensive Cybersecurity Strategy and Performing Effective Oversight

The federal government has been challenged in establishing a comprehensive cybersecurity strategy and in performing effective oversight as called for by federal law and policy. Specifically, we have previously reported that the federal government has faced challenges in establishing a comprehensive strategy to provide a framework for how the United States will engage both domestically and internationally on cybersecurity-related matters. We have also reported on challenges in performing oversight, including monitoring the global supply chain, ensuring a highly skilled cyber workforce, and addressing risks associated with emerging technologies. The federal government can take four key actions to improve the nation's strategic approach to, and oversight of, cybersecurity.

Develop and execute a more comprehensive federal strategy for national cybersecurity and global cyberspace. In February 2013, we reported that the government had issued a variety of strategy-related documents that addressed priorities for enhancing cybersecurity within the federal government as well as for encouraging improvements in the cybersecurity of critical infrastructure within the private sector; however, no overarching cybersecurity strategy had been developed that articulated priority actions, assigned responsibilities for performing them, and set time frames for their completion. In October 2015, in response to our recommendation to develop an overarching federal cybersecurity strategy that included all key elements of the desirable characteristics of a national strategy, the Director of OMB and the Federal Chief Information Officer issued a Cybersecurity Strategy and Implementation Plan for the Federal Civilian Government. The plan directed a series of actions to improve capabilities for identifying and detecting vulnerabilities and threats, enhance protections of government assets and information, and further develop robust response and recovery capabilities to ensure readiness and resilience when incidents inevitably occur. The plan also identified key milestones for major activities, resources needed to accomplish milestones, and specific roles and responsibilities of federal organizations related to the strategy's milestones.

Since that time, the executive branch has made progress toward outlining a federal strategy for confronting cyber threats. For example, a May 2017 presidential executive order required federal agencies to take a variety of actions, including better managing their cybersecurity risks and coordinating to meet reporting requirements related to the cybersecurity of federal networks, critical infrastructure, and the nation. Additionally, the December 2017 National Security Strategy cites cybersecurity as a national priority and identifies related needed actions, such as identifying and prioritizing risk and building defensible government networks. Further, DHS issued a cybersecurity strategy in May 2018, which articulated seven goals the department plans to accomplish in support of its mission related to managing national cybersecurity risks.
The strategy is intended to provide DHS with a framework to execute its cybersecurity responsibilities during the next 5 years to keep pace with the evolving cyber risk landscape by reducing vulnerabilities and building resilience; countering malicious actors in cyberspace; responding to incidents; and making the cyber ecosystem more secure and resilient. These efforts provide a good foundation toward establishing a more comprehensive strategy, but more effort is needed to address all of the desirable characteristics of a national strategy that we have previously recommended. The recently issued executive branch strategy documents did not include key elements of desirable characteristics that can enhance the usefulness of a national strategy as guidance for decision makers in allocating resources, defining policies, and helping to ensure accountability. Specifically, the documents generally did not include milestones and performance measures to gauge results, nor did they describe the resources needed to carry out the goals and objectives. Further, most of the strategy documents lacked clearly defined roles and responsibilities for key agencies, such as DHS, the Department of Defense (DOD), and OMB, which contribute substantially to the nation's cybersecurity programs. Ultimately, a more clearly defined, coordinated, and comprehensive approach to planning and executing an overall strategy would likely lead to significant progress in furthering strategic goals and lessening persistent weaknesses. For more information on this action area, see appendix II.

Mitigate global supply chain risks. The global, geographically dispersed nature of the producers and suppliers of IT products is a growing concern. We have previously reported on potential issues associated with the IT supply chain and risks originating from foreign-manufactured equipment. For example, in July 2017, we reported that the Department of State had relied on certain device manufacturers, software developers, and contractor support that had suppliers reported to be headquartered in a cyber-threat nation (e.g., China and Russia). We further pointed out that the reliance on complex, global IT supply chains introduces multiple risks to federal agencies, including insertion of counterfeits, tampering, or installation of malicious software or hardware. In July 2018, we testified that if such global IT supply chain risks are realized, they could jeopardize the confidentiality, integrity, and availability of federal information systems. Thus, the potential exists for serious adverse impact on an agency's operations, assets, and employees. These factors highlight the importance and urgency of federal agencies appropriately assessing, managing, and monitoring IT supply chain risk as part of their agency-wide information security programs. For more information on this action area, see appendix III.

Address cybersecurity workforce management challenges. The federal government faces challenges in ensuring that the nation's cybersecurity workforce has the appropriate skills. For example, in June 2018, we reported on federal efforts to implement the requirements of the Federal Cybersecurity Workforce Assessment Act of 2015. We determined that most of the Chief Financial Officers (CFO) Act agencies had not fully implemented all statutory requirements, such as developing procedures for assigning codes to cybersecurity positions.
Further, we have previously reported that DHS and DOD had not addressed cybersecurity workforce management requirements set forth in federal laws. In addition, we have reported in the last 2 years that federal agencies (1) had not identified and closed cybersecurity skills gaps, (2) had been challenged with recruiting and retaining qualified staff, and (3) had difficulty navigating the federal hiring process. A recent executive branch report also discussed challenges associated with the cybersecurity workforce. Specifically, in response to Executive Order 13800, the Department of Commerce and DHS led an interagency working group exploring how to support the growth and sustainment of future cybersecurity employees in the public and private sectors. In May 2018, the departments issued a report that identified key findings, including: the U.S. cybersecurity workforce needs immediate and sustained improvements; the pool of cybersecurity candidates needs to be expanded through retraining and by increasing the participation of women, minorities, and veterans; a shortage exists of cybersecurity teachers at the primary and secondary levels, faculty in higher education, and training instructors; and comprehensive and reliable data about cybersecurity workforce position needs and education and training programs are lacking. The report also included recommendations and proposed actions to address the findings, including that private and public sectors should (1) align education and training with employers’ cybersecurity workforce needs by applying the National Initiative for Cybersecurity Education Cybersecurity Workforce Framework; (2) develop cybersecurity career model paths; and (3) establish a clearinghouse of information on cybersecurity workforce development education, training, and workforce development programs and initiatives. In addition, in June 2018, the executive branch issued a government reform plan and reorganization recommendations that included, among other things, proposals for solving the federal cybersecurity workforce shortage. In particular, the plan notes that the administration intends to prioritize and accelerate ongoing efforts to reform the way that the federal government recruits, evaluates, selects, pays, and places cyber talent across the enterprise. The plan further states that, by the end of the first quarter of fiscal year 2019, all CFO Act agencies, in coordination with DHS and OMB, are to develop a critical list of vacancies across their organizations. Subsequently, OMB and DHS are to analyze these lists and work with OPM to develop a government-wide approach to identifying or recruiting new employees or reskilling existing employees. Regarding cybersecurity training, the plan notes that OMB is to consult with DHS to standardize training for cybersecurity employees, and should work to develop an enterprise-wide training process for government cybersecurity employees. For more information on this action area, see appendix IV. Ensure the security of emerging technologies. As the devices used in daily life become increasingly integrated with technology, the risk to sensitive data and PII also grows. 
Over the last several years, we have reported on weaknesses in addressing vulnerabilities associated with emerging technologies, including: IoT devices, such as fitness trackers, cameras, and thermostats, that continuously collect and process information are potentially vulnerable to cyber-attacks; IoT devices, such as those acquired and used by DOD employees or that DOD itself acquires (e.g., smartphones), may increase the security risks to the department; vehicles that are potentially susceptible to cyber-attack through technology, such as Bluetooth; the unknown impact of artificial intelligence cybersecurity; and advances in cryptocurrencies and blockchain technologies. Executive branch agencies have also highlighted the challenges associated with ensuring the security of emerging technologies. Specifically, in a May 2018 report issued in response to Executive Order 13800, the Department of Commerce and DHS issued a report on the opportunities and challenges in reducing the botnet threat. The opportunities and challenges are centered on six principal themes, including the global nature of automated, distributed attacks; effective tools; and awareness and education. The report also provides recommended actions, including that federal agencies should increase their understanding of what software components have been incorporated into acquired products and establish a public campaign to support awareness of IoT security. For more information on this action area, see appendix V. In our previously discussed reports related to this cybersecurity challenge, we made a total of 50 recommendations to federal agencies to address the weaknesses identified. As of August 2018, 48 recommendations had not been implemented. These outstanding recommendations include 8 priority recommendations, meaning that we believe that they warrant priority attention from heads of key departments and agencies. These priority recommendations include addressing weaknesses associated with, among other things, agency-specific cybersecurity workforce challenges and agency responsibilities for supporting mitigation of vehicle network attacks. Until our recommendations are fully implemented, federal agencies may be limited in their ability to provide effective oversight of critical government-wide initiatives, address challenges with cybersecurity workforce management, and better ensure the security of emerging technologies. In addition to our prior work related to the federal government’s efforts to establish key strategy documents and implement effective oversight, we also have several ongoing reviews related to this challenge. These include reviews of: the CFO Act agencies’ efforts to submit complete and reliable baseline assessment reports of their cybersecurity workforces; the extent to which DOD has established training standards for cyber mission force personnel, and efforts the department has made to achieve its goal of a trained cyber mission force; and selected agencies’ ability to implement cloud service technologies and notable benefits this might have on agencies. Securing Federal Systems and Information The federal government has been challenged in securing federal systems and information. Specifically, we have reported that federal agencies have experienced challenges in implementing government-wide cybersecurity initiatives, addressing weaknesses in their information systems and responding to cyber incidents on their systems. 
This is particularly concerning given that the emergence of increasingly sophisticated threats and continuous reporting of cyber incidents underscores the continuing and urgent need for effective information security. As such, it is important that federal agencies take appropriate steps to better ensure they have effectively implemented programs to protect their information and systems. We have identified three actions that the agencies can take.

Improve implementation of government-wide cybersecurity initiatives. Specifically, in January 2016, we reported that DHS had not ensured that the National Cybersecurity Protection System (NCPS) had fully satisfied all intended system objectives related to intrusion detection and prevention, information sharing, and analytics. In addition, in February 2017, we reported that the DHS National Cybersecurity and Communications Integration Center's (NCCIC) functions were not being performed in adherence with the principles set forth in federal laws. We noted that, although NCCIC was sharing information about cyber threats in the way it should, the center did not have metrics to measure whether the information was timely, relevant, and actionable, as prescribed by law. For more information on this action area, see appendix VI.

Address weaknesses in federal information security programs. We have previously identified a number of weaknesses in agencies' protection of their information and information systems. For example, over the past 2 years, we have reported that: most of the 24 agencies covered by the CFO Act had weaknesses in each of the five major categories of information system controls (i.e., access controls, configuration management controls, segregation of duties, contingency planning, and agency-wide security management); three agencies—the Securities and Exchange Commission, the Federal Deposit Insurance Corporation, and the Food and Drug Administration—had not effectively implemented aspects of their information security programs, which resulted in weaknesses in these agencies' security controls; information security weaknesses in selected high-impact systems at four agencies—the National Aeronautics and Space Administration, the Nuclear Regulatory Commission, OPM, and the Department of Veterans Affairs—were cited as a key reason that the agencies had not effectively implemented elements of their information security programs; DOD's process for monitoring the implementation of cybersecurity guidance had weaknesses and resulted in the closure of certain tasks (such as completing cyber risk assessments) before they were fully implemented; and agencies had not fully defined the role of their Chief Information Security Officers, as required by FISMA.

We also recently testified that, although the government had acted to protect federal information systems, additional work was needed to improve agency security programs and cyber capabilities. In particular, we noted that further efforts were needed by agencies to implement our prior recommendations in order to strengthen their information security programs and technical controls over their computer networks and systems. For more information on this action area, see appendix VII.

Enhance the federal response to cyber incidents. We have reported that certain agencies have had weaknesses in responding to cyber incidents.
For example, as of August 2017, OPM had not fully implemented controls to address deficiencies identified as a result of its 2015 cyber incidents; DOD had not identified the National Guard's cyber capabilities (e.g., computer network defense teams) or addressed challenges in its exercises; as of April 2016, DOD had not identified, clarified, or implemented all components of its support of civil authorities during cyber incidents; and as of January 2016, DHS's NCPS had limited capabilities for detecting and preventing intrusions, conducting analytics, and sharing information. For more information on this action area, see appendix VIII.

In the public versions of the reports previously discussed for this challenge area, we made a total of 101 recommendations to federal agencies to address the weaknesses identified. As of August 2018, 61 recommendations had not been implemented. These outstanding recommendations include 14 priority recommendations to address weaknesses associated with, among other things, the information security programs at the National Aeronautics and Space Administration, OPM, and the Securities and Exchange Commission. Until these recommendations are implemented, these federal agencies will be limited in their ability to ensure the effectiveness of their programs for protecting information and systems.

In addition to our prior work, we also have several ongoing reviews related to the federal government's efforts to protect its information and systems. These include reviews of: Federal Risk and Authorization Management Program (FedRAMP) implementation, including an assessment of the implementation of the program's authorization process for protecting federal data in cloud environments; the Equifax data breach, including an assessment of federal oversight of credit reporting agencies' collection, use, and protection of consumer PII; the Federal Communications Commission's Electronic Comment Filing System security, to include a review of the agency's detection of and response to a May 2017 incident that reportedly impacted the system; DOD's efforts to improve the cybersecurity of its major weapon systems; DOD's whistleblower program, including an assessment of the policies, procedures, and controls related to the access and storage of sensitive and classified information needed for the program; IRS's efforts to (1) implement security controls and the agency's information security program, (2) authenticate taxpayers, and (3) secure tax information; and the federal approach and strategy to securing agency information systems, to include federal intrusion detection and prevention capabilities and the intrusion assessment plan.

Protecting Cyber Critical Infrastructure

The federal government has been challenged in working with the private sector to protect critical infrastructure. This infrastructure includes both public and private systems vital to national security and other efforts, such as providing the essential services that underpin American society. As the cybersecurity threat to these systems continues to grow, federal agencies have millions of sensitive records that must be protected. Specifically, this critical infrastructure threat could have national security implications, and more efforts should be made to ensure that it is not breached.
To help address this issue, the National Institute of Standards and Technology (NIST) developed the cybersecurity framework—a voluntary set of cybersecurity standards and procedures for industry to adopt as a means of taking a risk-based approach to managing cybersecurity. However, additional action is needed to strengthen the federal role in protecting the critical infrastructure. Specifically, we have reported on other critical infrastructure protection issues that need to be addressed. For example: DHS did not track vulnerability reduction from the implementation and verification of planned security measures at the high-risk chemical facilities that engage with the department, as a basis for assessing performance. Entities within the 16 critical infrastructure sectors reported encountering four challenges to adopting the cybersecurity framework, such as being limited in their ability to commit necessary resources towards framework adoption and not having the necessary knowledge and skills to effectively implement the framework. DOD and the Federal Aviation Administration identified a variety of operations and physical security risks that could adversely affect DOD missions. Major challenges existed to securing the electricity grid against cyber threats. These challenges included monitoring implementation of cybersecurity standards, ensuring security features are built into smart grid systems, and establishing metrics for cybersecurity. DHS and other agencies needed to enhance cybersecurity in the maritime environment. Specifically, DHS did not include cyber risks in its risk assessments that were already in place nor did it address cyber risks in guidance for port security plans. Sector-specific agencies were not properly addressing progress or metrics to measure their progress in cybersecurity. For more information on this action area, see appendix IX. We made a total of 21 recommendations to federal agencies to address these weaknesses and others. These recommendations include, for example, a total of 9 recommendations to 9 sector-specific agencies to develop methods to determine the level and type of cybersecurity framework adoption across their respective sectors. As of August 2018, all 21 recommendations had not been implemented. Until these recommendations are implemented, the federal government will continue to be challenged in fulfilling its role in protecting the nation’s critical infrastructure. In addition to our prior work related to the federal government’s efforts to protect critical infrastructure, we also have several ongoing reviews focusing on: the physical and cybersecurity risks to pipelines across the country responsible for transmitting oil, natural gas, and other hazardous liquids; the cybersecurity risks to the electric grid; and the privatization of utilities at DOD installations. Protecting Privacy and Sensitive Data The federal government has been challenged in protecting privacy and sensitive data. Advances in technology, including powerful search technology and data analytics software, have made it easy to correlate information about individuals across large and numerous databases, which have become very inexpensive to maintain. In addition, ubiquitous Internet connectivity has facilitated sophisticated tracking of individuals and their activities through mobile devices such as smartphones and fitness trackers. 
Given that access to data is so pervasive, personal privacy hinges on ensuring that databases of PII maintained by government agencies or on their behalf are protected both from inappropriate access (i.e., data breaches) as well as inappropriate use (i.e., for purposes not originally specified when the information was collected). Likewise, the trend in the private sector of collecting extensive and detailed information about individuals needs appropriate limits. The vast number of individuals potentially affected by data breaches at federal agencies and private sector entities in recent years increases concerns that PII is not being properly protected. Federal agencies should take two types of actions to address this challenge area. In addition, we have previously proposed two matters for congressional consideration aimed toward better protecting PII. Improve federal efforts to protect privacy and sensitive data. We have issued several reports noting that agencies had deficiencies in protecting privacy and sensitive data that needed to be addressed. For example: The Department of Health and Human Services’ (HHS) Centers for Medicare and Medicaid Services (CMS) and external entities were at risk of compromising Medicare Beneficiary Data due to a lack of guidance and proper oversight. The Department of Education’s Office of Federal Student Aid had not properly overseen its school partners’ records or information security programs. HHS had not fully addressed key security elements in its guidance for protecting the security and privacy of electronic health information. CMS had not fully protected the privacy of users’ data on state- based marketplaces. Poor planning and ineffective monitoring had resulted in the unsuccessful implementation of government initiatives aimed at eliminating the unnecessary collection, use, and display of SSNs. For more information on this action area, see appendix X. Appropriately limit the collection and use of personal information and ensure that it is obtained with appropriate knowledge or consent. We have issued a series of reports that highlight a number of the key concerns in this area. For example: The emergence of IoT devices can facilitate the collection of information about individuals without their knowledge or consent; Federal laws for smartphone tracking applications have not generally been well enforced; The FBI has not fully ensured privacy and accuracy related to the use of face recognition technology. For more information on this action area, see appendix XI. We have previously suggested that Congress consider amending laws, such as the Privacy Act of 1974 and the E-Government Act of 2002, because they may not consistently protect PII. Specifically, we found that while these laws and guidance set minimum requirements for agencies, they may not consistently protect PII in all circumstances of its collection and use throughout the federal government and may not fully adhere to key privacy principles. However, revisions to the Privacy Act and the E-Government Act have not yet been enacted. Further, we also suggested that Congress consider strengthening the consumer privacy framework and review issues such as the adequacy of consumers’ ability to access, correct, and control their personal information; and privacy controls related to new technologies such as web tracking and mobile devices. However, these suggested changes have not yet been enacted. We also made a total of 29 recommendations to federal agencies to address the weaknesses identified. 
As of August 2018, 28 recommendations had not been implemented. These outstanding recommendations include 6 priority recommendations to address weaknesses associated with, among other things, publishing privacy impact assessments and improving the accuracy of the FBI’s face recognition services. Until these recommendations are implemented, federal agencies will be challenged in their ability to protect privacy and sensitive data and ensure that its collection and use is appropriately limited. In addition to our prior work, we have several ongoing reviews related to protecting privacy and sensitive data. These include reviews of: IRS’s taxpayer authentication efforts, including what steps the agency is taking to monitor and improve its authentication methods; the extent to which the Department of Education’s Office of Federal Student Aid’s policies and procedures for overseeing non-school partners’ protection of federal student aid data align with federal requirements and guidance; data security issues related to credit reporting agencies, including a review of the causes and impacts of the August 2017 Equifax data breach; the extent to which Equifax assessed, responded to, and recovered from its August 2017 data breach; federal agencies’ efforts to remove PII from shared cyber threat indicators; and how the federal government has overseen Internet privacy, including the roles of the Federal Communications Commission and the Federal Trade Commission, and strengths and weaknesses of the current oversight authorities. Continued Implementation of Our Recommendations Is Needed to Address Cybersecurity Weaknesses In conclusion, since 2010, we have made over 3,000 recommendations to agencies aimed at addressing the four cybersecurity challenges. Nevertheless, many agencies continue to be challenged in safeguarding their information systems and information, in part because many of these recommendations have not been implemented. Of the roughly 3,000 recommendations made since 2010, nearly 1,000 had not been implemented as of August 2018. We have also designated 35 as priority recommendations, and as of August 2018, 31 had not been implemented. The federal government and the nation’s critical infrastructure are dependent on IT systems and electronic data, which make them highly vulnerable to a wide and evolving array of cyber-based threats. Securing these systems and data is vital to the nation’s security, prosperity, and well-being. Nevertheless, the security over these systems and data is inconsistent and urgent actions are needed to address ongoing cybersecurity and privacy challenges. Specifically, the federal government needs to implement a more comprehensive cybersecurity strategy and improve its oversight, including maintaining a qualified cybersecurity workforce; address security weaknesses in federal systems and information and enhance cyber incident response efforts; bolster the protection of cyber critical infrastructure; and prioritize efforts to protect individual’s privacy and PII. Until our recommendations are addressed and actions are taken to address the four challenges we identified, the federal government, the national critical infrastructure, and the personal information of U.S. citizens will be increasingly susceptible to the multitude of cyber-related threats that exist. We are sending copies of this report to the appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact Nick Marinos at (202) 512-9342 or marinosn@gao.gov or Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XII. Appendix I: Related GAO Reports Critical Infrastructure Protection: DHS Should Take Actions to Measure Reduction in Chemical Facility Vulnerability and Share Information with First Responders. GAO-18-538. Washington, D.C.: August 8, 2018. High-Risk Series: Urgent Actions Are Needed to Address Cybersecurity Challenges Facing the Nation. GAO-18-645T. Washington, D.C.: July 25, 2018. Information Security: Supply Chain Risks Affecting Federal Agencies. GAO-18-667T. Washington, D.C.: July 12, 2018. Information Technology: Continued Implementation of High-Risk Recommendations Is Needed to Better Manage Acquisitions, Operations, and Cybersecurity. GAO-18-566T. Washington, D.C.: May 23, 2018. Cybersecurity: DHS Needs to Enhance Efforts to Improve and Promote the Security of Federal and Private-Sector Networks, GAO-18-520T. Washington, D.C.: April 24, 2018. Electronic Health Information: CMS Oversight of Medicare Beneficiary Data Security Needs Improvement. GAO-18-210. Washington, D.C.: March 6, 2018. Technology Assessment: Artificial Intelligence, Emerging Opportunities, Challenges, and Implications. GAO-18-142SP. Washington, D.C.: March 28, 2018. GAO Strategic Plan 2018-2023: Trends Affecting Government and Society. GAO-18-396SP. Washington, D.C.: February 22, 2018. Critical Infrastructure Protection: Additional Actions Are Essential for Assessing Cybersecurity Framework Adoption. GAO-18-211. Washington, D.C.: February 15, 2018. Cybersecurity Workforce: Urgent Need for DHS to Take Actions to Identify Its Position and Critical Skill Requirements. GAO-18-175. Washington, D.C.: February 6, 2018. Homeland Defense: Urgent Need for DOD and FAA to Address Risks and Improve Planning for Technology That Tracks Military Aircraft. GAO-18-177. Washington, D.C.: January 18, 2018. Federal Student Aid: Better Program Management and Oversight of Postsecondary Schools Needed to Protect Student Information. GAO-18-121. Washington, D.C.: December 15, 2017. Defense Civil Support: DOD Needs to Address Cyber Incident Training Requirements. GAO-18-47. Washington, D.C.: November 30, 2017. Federal Information Security: Weaknesses Continue to Indicate Need for Effective Implementation of Policies and Practices. GAO-17-549. Washington, D.C.: September 28, 2017. Information Security: OPM Has Improved Controls, but Further Efforts Are Needed. GAO-17-614. Washington, D.C.: August 3, 2017. Defense Cybersecurity: DOD’s Monitoring of Progress in Implementing Cyber Strategies Can Be Strengthened. GAO-17-512. Washington, D.C.: August 1, 2017. State Department Telecommunications: Information on Vendors and Cyber-Threat Nations. GAO-17-688R. Washington, D.C.: July 27, 2017. Internet of Things: Enhanced Assessments and Guidance Are Needed to Address Security Risks in DOD. GAO-17-668. Washington, D.C.: July 27, 2017. Information Security: SEC Improved Control of Financial Systems but Needs to Take Additional Actions. GAO-17-469. Washington, D.C.: July 27, 2017. Information Security: Control Deficiencies Continue to Limit IRS’s Effectiveness in Protecting Sensitive Financial and Taxpayer Data. GAO-17-395. Washington, D.C.: July 26, 2017. 
Social Security Numbers: OMB Actions Needed to Strengthen Federal Efforts to Limit Identity Theft Risks by Reducing Collection, Use, and Display. GAO-17-553. Washington, D.C.: July 25, 2017. Information Security: FDIC Needs to Improve Controls over Financial Systems and Information. GAO-17-436. Washington, D.C.: May 31, 2017. Technology Assessment: Internet of Things: Status and implications of an increasingly connected world. GAO-17-75. Washington, D.C.: May 15, 2017. Cybersecurity: DHS’s National Integration Center Generally Performs Required Functions but Needs to Evaluate Its Activities More Completely. GAO-17-163. Washington, D.C.: February 1, 2017. High-Risk Series: An Update. GAO-17-317. Washington, D.C.: February 2017. IT Workforce: Key Practices Help Ensure Strong Integrated Program Teams; Selected Departments Need to Assess Skill Gaps. GAO-17-8. Washington, D.C.: November 30, 2016. Electronic Health Information: HHS Needs to Strengthen Security and Privacy Guidance and Oversight. GAO-16-771. Washington, D.C.: September 26, 2016. Defense Civil Support: DOD Needs to Identify National Guard’s Cyber Capabilities and Address Challenges in Its Exercises. GAO-16-574. Washington, D.C.: September 6, 2016. Information Security: FDA Needs to Rectify Control Weaknesses That Place Industry and Public Health Data at Risk. GAO-16-513. Washington, D.C.: August 30, 2016. Federal Chief Information Security Officers: Opportunities Exist to Improve Roles and Address Challenges to Authority. GAO-16-686. Washington, D.C.: August 26, 2016. Federal Hiring: OPM Needs to Improve Management and Oversight of Hiring Authorities. GAO-16-521. Washington, D.C.: August 2, 2016. Information Security: Agencies Need to Improve Controls over Selected High-Impact Systems. GAO-16-501. Washington, D.C.: May 18, 2016. Face Recognition Technology: FBI Should Better Ensure Privacy and Accuracy. GAO-16-267. Washington, D.C.: May 16, 2016. Smartphone Data: Information and Issues Regarding Surreptitious Tracking Apps That Can Facilitate Stalking. GAO-16-317. Washington, D.C.: May 9, 2016. Vehicle Cybersecurity: DOT and Industry Have Efforts Under Way, but DOT Needs to Define Its Role in Responding to a Real-world Attack. GAO-16-350. Washington, D.C.: April 25, 2016. Civil Support: DOD Needs to Clarify Its Roles and Responsibilities for Defense Support of Civil Authorities during Cyber Incidents. GAO-16-332. Washington, D.C.: April 4, 2016. Healthcare.gov: Actions Needed to Enhance Information Security and Privacy Controls. GAO-16-265. Washington, D.C.: March 23, 2016. Information Security: DHS Needs to Enhance Capabilities, Improve Planning, and Support Greater Adoption of Its National Cybersecurity Protection System. GAO-16-294. Washington, D.C.: January 28, 2016. Critical Infrastructure Protection: Sector-Specific Agencies Need to Better Measure Cybersecurity Progress. GAO-16-79. Washington, D.C.: November 19, 2015. Critical Infrastructure Protection: Cybersecurity of the Nation’s Electricity Grid Requires Continued Attention. GAO-16-174T. Washington, D.C.: October 21, 2015. Maritime Critical Infrastructure Protection: DHS Needs to Enhance Efforts to Address Port Cybersecurity. GAO-16-116T. Washington, D.C.: October 8, 2015. Cybersecurity: National Strategy, Roles, and Responsibilities Need to Be Better Defined and More Effectively Implemented. GAO-13-187. Washington, D.C.: February 14, 2014. Information Resellers: Consumer Privacy Framework Needs to Reflect Changes in Technology and the Marketplace. GAO-13-663. 
Washington, D.C.: September 25, 2013. Privacy: Alternatives Exist for Enhancing Protection of Personally Identifiable Information. GAO-08-536. Washington, D.C.: May 19, 2008. Appendix II: Action 1—Develop and Execute a More Comprehensive Federal Strategy for National Cybersecurity and Global Cyberspace Federal law and policy call for a risk-based approach to managing cybersecurity within the government, as well as globally. We have previously reported that the federal government has faced challenges in establishing a comprehensive strategy to provide a framework for how the United States will engage both domestically and internationally on cybersecurity related matters. More specifically, in February 2013, we reported that the government had issued a variety of strategy-related documents that addressed priorities for enhancing cybersecurity within the federal government as well as for encouraging improvements in the cybersecurity of critical infrastructure within the private sector; however, no overarching cybersecurity strategy had been developed that articulated priority actions, assigned responsibilities for performing them, and set time frames for their completion. Accordingly, we recommended that the White House Cybersecurity Coordinator in the Executive Office of the President develop an overarching federal cybersecurity strategy that included all key elements of the desirable characteristics of a national strategy including, among other things, milestones and performance measures for major activities to address stated priorities; cost and resources needed to accomplish stated priorities; and specific roles and responsibilities of federal organizations related to the strategy’s stated priorities. In response to our recommendation, in October 2015, the Director of OMB and the Federal Chief Information Officer, issued a Cybersecurity Strategy and Implementation Plan for the Federal Civilian Government. The plan directed a series of actions to improve capabilities for identifying and detecting vulnerabilities and threats, enhance protections of government assets and information, and further develop robust response and recovery capabilities to ensure readiness and resilience when incidents inevitably occur. The plan also identified key milestones for major activities, resources needed to accomplish milestones, and specific roles and responsibilities of federal organizations related to the strategy’s milestones. Since that time, the executive branch has made progress toward outlining a federal strategy for confronting cyber threats. Table 1 identifies these recent efforts and a description of their related contents. These efforts provide a good foundation toward establishing a more comprehensive strategy, but more effort is needed to address all of the desirable characteristics of a national strategy that we recommended. The recently issued executive branch strategy documents did not include key elements of desirable characteristics that can enhance the usefulness of a national strategy as guidance for decision makers in allocating resources, defining policies, and helping to ensure accountability. Specifically: Milestones and performance measures to gauge results were generally not included in strategy documents. 
For example, although the DHS Cybersecurity Strategy stated that its implementation would be assessed on an annual basis, it did not describe the milestones and performance measures for tracking the effectiveness of the activities intended to meet the stated goals (e.g., protecting critical infrastructure and responding effectively to cyber incidents). Without such performance measures, DHS will lack a means to ensure that the goals and objectives discussed in the document are accomplished and that responsible parties are held accountable. According to officials from DHS's Office of Cybersecurity and Communications, the department is developing a plan for implementing the DHS Cybersecurity Strategy and expects to issue the plan by the end of calendar year 2018. The officials stated that the plan is expected to identify milestones, roles, and responsibilities across DHS to inform the prioritization of future efforts. The strategy documents generally did not include information regarding the resources needed to carry out the goals and objectives. For example, although the DHS Cybersecurity Strategy identified a variety of actions the agency planned to take to perform its cybersecurity mission, it did not articulate the resources needed to carry out these actions and requirements. Without information on the specific resources needed, federal agencies may not be positioned to allocate such resources and investments and, therefore, may be hindered in their ability to meet national priorities. Most of the strategy documents lacked clearly defined roles and responsibilities for key agencies, such as DHS, DOD, and OMB. These agencies contribute substantially to the nation's cybersecurity programs. For example, although the National Security Strategy discusses multiple priority actions needed to address the nation's cybersecurity challenges (e.g., building defensible government networks and deterring and disrupting malicious cyber actors), it does not describe the roles, responsibilities, or the expected coordination of any specific federal agencies, including DHS, DOD, or OMB, or other non-federal entities needed to carry out those actions. Without this information, the federal government may not be able to foster effective coordination, particularly where there is overlap in responsibilities, or hold agencies accountable for carrying out planned activities. Ultimately, a more clearly defined, coordinated, and comprehensive approach to planning and executing an overall strategy would likely lead to significant progress in furthering strategic goals and lessening persistent weaknesses.

Appendix III: Action 2—Mitigate Global Supply Chain Risks

The exploitation of information technology (IT) products and services through the supply chain is an emerging threat. IT supply chain-related threats can be introduced in the manufacturing, assembly, and distribution of hardware, software, and services. Moreover, these threats can appear at each phase of the system development life cycle, when an agency initiates, develops, implements, maintains, and disposes of an information system. As a result, the compromise of an agency's IT supply chain can degrade the confidentiality, integrity, and availability of its critical and sensitive networks, IT-enabled equipment, and data. Federal regulation, along with guidance issued by the National Institute of Standards and Technology (NIST), sets requirements and best practices for mitigating supply chain risks.
The Federal Acquisition Regulation codifies uniform policies and procedures for acquisition by all executive branch agencies. Agencies are required by the Federal Acquisition Regulation to ensure that contracts include quality requirements that are determined necessary to protect the government's interest. In addition, the NIST guidance on supply chain risk management practices for federal information systems and organizations is intended to assist federal agencies with identifying, assessing, and mitigating information and communications technology supply chain risks at all levels of their organizations. We have previously reported on risks to the IT supply chain and risks originating from foreign-manufactured equipment. For example: In July 2018, we testified that if global IT supply chain risks are realized, they could jeopardize the confidentiality, integrity, and availability of federal information systems. Thus, the potential exists for serious adverse impact on an agency's operations, assets, and employees. We further stated that in 2012 we determined that four national security-related agencies—the Departments of Defense, Justice, Energy, and Homeland Security (DHS)—varied in the extent to which they had addressed supply chain risks. We recommended that three agencies take eight actions, as needed, to develop and document policies, procedures, and monitoring capabilities that address IT supply chain risk. The agencies generally concurred with the recommendations and subsequently implemented seven recommendations and partially implemented the eighth recommendation. In July 2017, we reported that, based on a review of a sample of organizations within the Department of State's telecommunications supply chain, we were able to identify instances in which device manufacturers, software developers, and contractor support were reported to be headquartered in a leading cyber-threat nation. For example, of the 52 telecommunications device manufacturers and software developers in our sample, we were able to identify 12 that had 1 or more suppliers that were reported to be headquartered in a leading cyber-threat nation. We noted that the reliance on complex, global IT supply chains introduces multiple risks to federal agencies, including insertion of counterfeits, tampering, or installation of malicious software or hardware. Figure 5 illustrates possible manufacturing locations of typical network components. Although federal agencies have taken steps to address IT supply chain deficiencies that we previously identified, this area continues to be a potential threat vector for malicious actors to target the federal government. For example, in September 2017, DHS issued a binding operational directive that calls on departments and agencies to identify any use or presence of Kaspersky products on their information systems and to develop detailed plans to remove and discontinue present and future use of the products. DHS expressed concern about the ties between certain Kaspersky officials and Russian intelligence and other government agencies, and requirements under Russian law that allow Russian intelligence agencies to request or compel assistance from Kaspersky and to intercept communications transiting Russian networks. Appendix IV: Action 3—Address Cybersecurity Workforce Management Challenges On May 11, 2017, the President issued an executive order on strengthening the cybersecurity of federal networks and critical infrastructure.
The order makes it the policy of the United States to support the growth and sustainment of a workforce that is skilled in cybersecurity and related fields as the foundation for achieving our objectives in cyberspace. It directed the Secretaries of Commerce and Homeland Security (DHS), in consultation with other federal agencies, to assess the scope and sufficiency of efforts to educate and train the American cybersecurity workforce of the future, including cybersecurity- related education curricula, training, and apprenticeship programs, from primary through higher education. Nevertheless, the federal government continues to face challenges in addressing the nation’s cybersecurity workforce. Agencies had not effectively conducted baseline assessments of their cybersecurity workforce or fully developed procedures for coding positions. In June 2018, we reported that 21 of the 24 agencies covered by the Chief Financial Officer’s Act had conducted and submitted to Congress a baseline assessment identifying the extent to which their cybersecurity employees held professional certifications, as required by the Federal Cybersecurity Workforce Assessment Act of 2015. However, we found that the results of these assessments may not have been reliable because agencies did not address all of the reportable information and agencies were limited in their ability to obtain complete and consistent information about their cybersecurity employees and the certifications they held. We determined that this was because agencies had not yet fully identified all members of their cybersecurity workforces or did not have a consistent list of appropriate certifications for cybersecurity positions. Further, 23 of the agencies reviewed had established procedures for identifying and assigning the appropriate employment codes to their civilian cybersecurity positions, as called for by the act. However, 6 of the 23 did not address one or more of 7 activities required by OPM in their procedures, such as reviewing all filled and vacant positions and annotating reviewed position descriptions with the appropriate employment code. Accordingly, we made 30 recommendations to 13 agencies to fully implement two of the act’s requirements on baseline assessments and coding procedures. The extent to which these agencies agreed with the recommendations varied. DHS and the Department of Defense (DOD) had not addressed cybersecurity workforce management requirements set forth in federal laws. In February 2018, we reported that, while DHS had taken actions to identify, categorize, and assign employment codes to its cybersecurity positions, as required by the Homeland Security Cybersecurity Workforce Assessment Act of 2014, its actions were not timely and complete. For example, DHS did not establish timely and complete procedures to identify, categorize, and code its cybersecurity position vacancies and responsibilities. Further, DHS had not yet completed its efforts to identify all of its cybersecurity positions and accurately assign codes to all filled and vacant cybersecurity positions. Table 2 shows DHS’s progress in implementing the requirements of the Homeland Security Cybersecurity Workforce Assessment Act of 2014, as of December 2017. Accordingly, we recommended that DHS take six actions, including ensuring that its cybersecurity workforce procedures identify position vacancies and responsibilities; reported workforce data are complete and accurate; and plans for reporting on critical needs are developed. 
DHS agreed with our six recommendations, but had not implemented them as of August 2018. Regarding DOD, in November 2017, we reported that instead of developing a comprehensive plan for U.S. Cyber Command, the department submitted a report consisting of a collection of documents that did not fully address the required six elements set forth in Section 1648 of the National Defense Authorization Act for Fiscal Year 2016. More specifically, DOD's 1648 report did not address an element related to cyber incident training. In addition to not addressing the training element in the report, DOD had not ensured that staff were trained as required by the Presidential Policy Directive on United States Cyber Incident Coordination or DOD's Significant Cyber Incident Coordination Procedures. Accordingly, we made two recommendations to DOD to address these issues. DOD agreed with one of the recommendations and partially agreed with the other, citing ongoing activities related to cyber incident coordination training it believed were sufficient. However, we continued to believe the recommendation was warranted. As of August 2018, both recommendations had not yet been implemented. Agencies had not identified and closed cybersecurity skills gaps. In November 2016, we reported that five selected agencies had made mixed progress in assessing their information technology (IT) skill gaps. These agencies had started focusing on identifying cybersecurity staffing gaps, but more work remained in assessing competency gaps and in broadening the focus to include the entire IT community. Accordingly, we made a total of five recommendations to the agencies to address these issues. Four agencies agreed and one, DOD, partially agreed with our recommendations, citing progress made in improving its IT workforce planning. However, we continued to believe our recommendation was warranted. As of August 2018, all five of the recommendations had not been implemented. Agencies had been challenged with recruiting and retaining qualified staff. In August 2016, we reported on the current authorities of chief information security officers (CISO) at 24 agencies. Among other things, CISOs identified key challenges they faced in fulfilling their responsibilities. Several of these challenges were related to the cybersecurity workforce, such as not having enough personnel to oversee the implementation of the number and scope of security requirements. In addition, CISOs stated that they were not able to offer salaries that were competitive with the private sector for candidates with high-demand technical skills. Furthermore, CISOs stated that certain security personnel lacked the skill sets needed or were not sufficiently trained. To assist CISOs in carrying out their responsibilities and better define their roles, we made a total of 34 recommendations to the Office of Management and Budget (OMB) and 13 agencies in our review. Agency responses to the recommendations varied; as of August 2018, 18 of the 34 recommendations had not been implemented. Agencies have had difficulty navigating the federal hiring process. In August 2016, we reported on the extent to which federal hiring authorities were meeting agency needs. Although competitive hiring has been the traditional method of hiring, agencies can use additional hiring authorities to expedite the hiring process or achieve certain public policy goals.
Among other things, we noted that agencies rely on a relatively small number of hiring authorities (as established by law, executive order, or regulation) to fill the vast majority of hires into the federal civil service. Further, while OPM collects a variety of data to assess the federal hiring process, neither it nor agencies used this information to assess the effectiveness of hiring authorities. Conducting such assessments would be a critical first step in making more strategic use of the available hiring authorities to more effectively meet agencies' hiring needs. Accordingly, we made three recommendations to OPM to work with agencies to strengthen hiring efforts. OPM generally agreed with the recommendations; however, as of August 2018, two of them had not been implemented. Appendix V: Action 4—Ensure the Security of Emerging Technologies The emergence of new technologies can introduce previously unknown security vulnerabilities. As we have previously reported, additional processes and controls will need to be developed to address these new vulnerabilities. While some progress has been made to address the security and privacy issues associated with these technologies, such as the Internet of Things (IoT) and vehicle networks, there is still much work to be done. For example: IoT devices that continuously collect and process information are potentially vulnerable to cyber-attacks. In May 2017, we reported that the IoT has become increasingly used to communicate and process vast amounts of information using "smart" devices (such as fitness trackers, cameras, and thermostats). However, we noted that this emerging technology also presents new issues in areas such as information security, privacy, and safety. For example, IoT devices, networks, or the cloud servers where they store data can be compromised in a cyberattack. Table 3 provides examples of cyber-attacks that could affect IoT devices and networks. IoT devices may increase the security risks to federal agencies. In July 2017, we reported that IoT devices, such as those acquired and used by Department of Defense (DOD) employees or that DOD itself acquires (e.g., smartphones), may increase the security risks to the department. We noted that these risks can be divided into two categories: risks with the devices themselves, such as limited encryption, and risks with how they are used, such as unauthorized communication of information. The department has also identified notional threat scenarios, based on input from multiple DOD entities, which exemplify how these security risks could adversely impact DOD operations, equipment, or personnel. Figure 6 highlights a few examples of these scenarios. In addition, we reported that DOD had started to examine the security risks of IoT devices, but that the department had not conducted required assessments related to the security of its operations. Further, DOD had issued policies and guidance for these devices, but these did not clearly address all of the risks relating to these devices. To address these issues, we made two recommendations to DOD. The department agreed with our recommendations; however, as of August 2018, they had not yet been implemented. Vehicles are potentially susceptible to cyber-attack through networks, such as Bluetooth.
In March 2016, we reported that many stakeholders in the automotive industry acknowledged that in-vehicle networks pose a threat to the safety of the driver, as an external attacker could gain control of critical systems in the car. Further, these industry stakeholders agreed that critical systems and other vehicle systems, such as a Bluetooth connection, should be on separate in-vehicle networks so they could not communicate or interfere with one another. Figure 7 identifies the key interfaces that could be exploited in a vehicle cyber-attack. To enhance the Department of Transportation's ability to effectively respond in the event of a real-world vehicle cyberattack, we made one recommendation to the department to better define its roles and responsibilities. The department agreed with the recommendation but, as of August 2018, had not yet taken action to implement it. Artificial intelligence holds substantial promise for improving cybersecurity, but also poses new risks. In March 2018, we reported on the results of a forum we convened to discuss emerging opportunities, challenges, and implications associated with artificial intelligence. At the forum, participants from industry, government, academia, and nonprofit organizations discussed the potential implications of this emerging technology, including assisting with cybersecurity by helping to identify and patch vulnerabilities and defending against attacks; creating safer automated vehicles; improving the criminal justice system's allocation of resources; and improving how financial services govern investments. However, forum participants also highlighted a number of challenges and risks related to artificial intelligence. For example, if the data used by artificial intelligence are biased or become corrupted by hackers, the results could be biased or cause harm. Moreover, the collection and sharing of data needed to train artificial intelligence systems, a lack of access to computing resources, and a lack of adequate human capital were also challenges facing the development of artificial intelligence. Finally, forum participants noted that the widespread adoption of artificial intelligence raises questions about the adequacy of current laws and regulations. Cryptocurrencies provide an alternative to traditional government-issued currencies, but have security implications. In February 2018, we reported on trends affecting government and society, including the increased use of cryptocurrencies—digital representations of value that are not government-issued—that operate online and verify transactions using a public ledger called blockchain. We highlighted the potential benefits of this technology, such as anonymity and lower transaction costs, as well as drawbacks, including making it harder to detect money laundering and other financial crimes. Because of these capabilities and others, we noted the potential for virtual currencies and blockchain technology to reshape financial services and affect the security of critical financial infrastructures. Lastly, we pointed out that the use of blockchain technology could face more security vulnerabilities as computing power increases with new advancements in quantum computing, an area of quantum information science. Appendix VI: Action 5—Improve Implementation of Government-wide Cybersecurity Initiatives In January 2008, the President issued National Security Presidential Directive 54/Homeland Security Presidential Directive 23.
The directive established the Comprehensive National Cybersecurity Initiative, a set of projects with the objective of safeguarding federal executive branch government information systems by reducing potential vulnerabilities, protecting against intrusion attempts, and anticipating future threats against the federal government’s networks. Under the initiative, the Department of Homeland Security (DHS) was to lead several projects to better secure civilian federal government networks. Specifically, the agency established the National Cybersecurity and Communications Integration Center (NCCIC), which functions as the 24/7 cyber monitoring, incident response, and management center. Figure 8 depicts the Watch Floor, which functions as a national focal point of cyber and communications incident integration. The United States Computer Emergency Readiness Team (US-CERT), one of several subcomponents of the NCCIC, is responsible for operating the National Cybersecurity Protection System (NCPS), which provides intrusion detection and prevention capabilities to entities across the federal government. Although DHS is fulfilling its statutorily required mission by establishing the NCCIC and managing the operation of NCPS, we have identified challenges in the agency’s efforts to manage these programs: DHS had not ensured that NCPS has fully satisfied all intended system objectives. In January 2016, we reported that NCPS had a limited ability to detect intrusions across all types of network types. In addition, we reported that the system’s intrusion prevention capability was limited and its information-sharing capability was not fully developed. Furthermore, we reported that DHS’s current metrics did not comprehensively measure the effectiveness of NCPS. Accordingly, we made nine recommendations to DHS to address these issues and others. The department agreed with our recommendations and has taken action to address one of them. However, as of August 2018, eight of these recommendations had not been implemented. DHS had been challenged in measuring how the NCCIC was performing its functions in accordance with mandated implementing principles. In February 2017, we reported instances where, with certain products and services, NCCIC had implemented its functions in adherence with one or more of its principles, as required by the National Cybersecurity Protection Act of 2014 and Cybersecurity Act of 2015. For example, consistent with the principle that it seek and receive appropriate consideration from industry sector-specific, academic, and national laboratory expertise, NCCIC coordinated with contacts from industry, academia, and the national laboratories to develop and disseminate vulnerability alerts. However, we also identified instances where the cybersecurity functions were not performed in adherence with the principles. For example, NCCIC is to provide timely technical assistance, risk management support, and incident response capabilities to federal and nonfederal entities, but it had not established measures or other procedures for ensuring the timeliness of these assessments. Further, we reported that NCCIC faces impediments to performing its cybersecurity functions more efficiently, such as tracking security incidents and working across multiple network platforms. Accordingly, we made nine recommendations to DHS related to implementing the requirements identified in the National Cybersecurity Protection Act of 2014 and the Cybersecurity Act of 2015. 
The department agreed with our recommendations and has taken action to address two of them. However, as of August 2018, the remaining seven recommendations had not been implemented. Appendix VII: Action 6—Address Weaknesses in Federal Agency Information Security Programs The Federal Information Security Modernization Act of 2014 (FISMA) requires federal agencies in the executive branch to develop, document, and implement an information security program and evaluate it for effectiveness. The act retains many of the requirements for federal agencies’ information security programs previously set by the Federal Information Security Management Act of 2002. These agency programs should include periodic risk assessments; information security policies and procedures; plans for protecting the security of networks, facilities, and systems; security awareness training; security control assessments; incident response procedures; a remedial action process, and continuity plans and procedures. In addition, Executive Order 13800 states that the President will hold agency heads accountable for managing cybersecurity risk to their enterprises. In addition, according to the order, it is the policy of the United States to manage cybersecurity risk as an executive branch enterprise because risk management decisions made by agency heads can affect the risk to the executive branch as a whole, and to national security. Over the past several years, we have performed numerous security control audits to determine how well agencies are managing information security risk to federal information systems and data through the implementation of effective security controls. These audits have resulted in the identification of hundreds of deficiencies related to agencies’ implementation of effective security controls. Accordingly, we provided agencies with limited official use only reports identifying technical security control deficiencies for their respective agency. In these reports, we made hundreds of recommendations related to improving agencies’ implementation of those security control deficiencies. In addition to systems and networks maintained by federal agencies, it is also important that agencies ensure the security of federal information systems operated by third party providers, including cloud service providers. Cloud computing is a means for delivering computing services via information technology networks. Since 2009, the government has encouraged agencies to use cloud-based services to store and process data as a cost-savings measure. In this regard, the Office of Management and Budget (OMB) established the Federal Risk and Authorization Management Program (FedRAMP) to provide a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. FedRAMP is intended to ensure that cloud computing services have adequate information security, eliminate duplicative efforts, and reduce costs. Although there are requirements and government-wide programs to assist with ensuring the security of federal information systems maintained by federal agencies and third party providers, we have identified weaknesses in agencies’ implementation of information security programs. Federal agencies continued to experience weaknesses in protecting their information and information systems due to ineffective implementation of information security policies and practices. 
In September 2017, we reported that most of the 24 agencies covered by the Chief Financial Officers (CFO) Act had weaknesses in each of the five major categories of information system controls (i.e., access controls, configuration management controls, segregation of duties, contingency planning, and agency-wide security management). Weaknesses in these security controls indicate that agencies did not adequately or effectively implement information security policies and practices during fiscal year 2016. Figure 9 identifies the number of agencies with information security weaknesses in each of the five categories. In addition, we found that several agencies had not effectively implemented some aspects of their information security programs, which resulted in weaknesses in these agencies' security controls. In July 2017, we reported that the Securities and Exchange Commission did not always keep system security plans complete and accurate or fully implement continuous monitoring, as required by agency policy. We made two recommendations to the Securities and Exchange Commission to effectively manage its information security program. The agency agreed with our recommendations; however, as of August 2018, they had not been implemented. In another July 2017 report, we noted that the Internal Revenue Service (IRS) did not effectively support a risk-based decision to accept system deficiencies; fully develop, document, or update information security policies and procedures; update system security plans to reflect changes to the operating environment; perform effective tests and evaluations of policies, procedures, and controls; or address shortcomings in the agency's remedial process. Accordingly, we made 10 recommendations to IRS to more effectively implement security-related policies and plans. The agency neither agreed nor disagreed with the recommendations; as of August 2018, all 10 recommendations had not been implemented. In May 2017, we reported that the Federal Deposit Insurance Corporation did not include all necessary information in procedures for granting access to a key financial application; fully address its Inspector General findings that security control assessments of outsourced service providers had not been completed in a timely manner; fully address key previously identified weaknesses related to establishing agency-wide configuration baselines and monitoring changes to critical server files; or complete actions to address the Inspector General's finding that the Federal Deposit Insurance Corporation had not ensured that major security incidents are identified and reported in a timely manner. We made one recommendation to the agency to more fully implement its information security program. The agency agreed with our recommendation and has taken steps to implement it. In August 2016, we reported that the Food and Drug Administration did not fully implement certain security practices involved with assessing risks to systems; complete or review security policies and procedures in a timely manner; complete and review system security plans annually; always track and fully train users with significant security responsibilities; fully test controls or monitor them; remediate identified security weaknesses in a timely fashion based on risk; or fully implement elements of its incident response program. Accordingly, we issued 15 recommendations to the Food and Drug Administration to fully implement its agency-wide information security program. The agency agreed with our recommendations.
As of August 2018, all 15 recommendations had been implemented. In May 2016, we reported that a key reason for the information security weaknesses in selected high-impact systems at four agencies—National Aeronautics and Space Administration, Nuclear Regulatory Commission, the Office of Personnel Management, and Department of Veterans Affairs—was that they had not effectively implemented elements of their information security programs. For example, most of the selected agencies had conducted information security control assessments for systems, but not all assessments were comprehensive. We also reported that remedial action plans developed by the agencies did not include all the required elements, and not all agencies had developed a continuous monitoring strategy. Table 4 identifies the extent to which the selected agencies implemented key aspects of their information security programs. Accordingly, we made 19 recommendations to the four selected agencies to correct these weaknesses. Agency responses to the recommendations varied. Further, as of August 2018, 16 of the 19 recommendations had not been implemented. DOD’s monitoring of progress in implementing cyber strategies varied. In August 2017, we reported that the DOD’s progress in implementing key strategic cybersecurity guidance—the DOD Cloud Computing Strategy, DOD Cyber Strategy, and DOD Cybersecurity Campaign—has varied. More specifically, we determined that the department had implemented the cybersecurity objectives identified in the DOD Cloud Computing Strategy and had made progress in implementing the DOD Cyber Strategy and DOD Cybersecurity Campaign. However, the department’s process for monitoring implementation of the DOD Cyber Strategy had resulted in the closure of tasks as implemented before the tasks were fully implemented. In addition, the DOD Cybersecurity Campaign lacked time frames for completion and a process to monitor progress, which together provide accountability to ensure implementation. We made two recommendations to improve DOD’s process of ensuring its cyber strategies are effectively implemented. The department partially concurred with these recommendations and identified actions it planned to take to address them. We noted that, if implemented, the actions would satisfy the intent of our recommendations. However, as of August 2018, DOD had not yet implemented our recommendations. Agencies had not fully defined the role of their Chief Information Security Officers (CISO), as required by FISMA. In August 2016, we reported that 13 of 24 agencies covered by the CFO Act had not fully defined the role of their CISO. For example, these agencies did not always identify a role for the CISO in ensuring that security controls are periodically tested; procedures are in place for detecting, reporting, and responding to security incidents; or contingency plans and procedures for agency information systems are in place. Thus, we determined that the CISOs’ ability to effectively oversee these agencies’ information security activities can be limited. To assist CISOs in carrying out their responsibilities and better define their roles, we made a total of 34 recommendations to OMB and 13 agencies in our review. Agency responses to the recommendations varied; as of August 2018, 18 of the 34 recommendations had not been implemented. 
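To make the five control categories discussed in this appendix concrete, the following sketch shows one simple way to tally the kind of summary presented in figure 9, that is, the number of agencies with reported weaknesses in each category. It is an illustrative example only: the category names come from this appendix, while the agency names and assessment results are hypothetical placeholders rather than GAO assessment data.

```python
from collections import Counter

# The five control categories are those named in this appendix; the
# per-agency results below are hypothetical placeholders, not GAO data.
CATEGORIES = [
    "access controls",
    "configuration management controls",
    "segregation of duties",
    "contingency planning",
    "agency-wide security management",
]

# Hypothetical mapping of agency name -> categories with reported weaknesses.
agency_weaknesses = {
    "Agency A": ["access controls", "configuration management controls"],
    "Agency B": ["access controls", "agency-wide security management"],
    "Agency C": ["contingency planning"],
}


def agencies_per_category(results):
    """Count how many agencies reported a weakness in each category."""
    counts = Counter()
    for categories in results.values():
        counts.update(set(categories))  # count each agency at most once per category
    return counts


if __name__ == "__main__":
    counts = agencies_per_category(agency_weaknesses)
    for category in CATEGORIES:
        print(f"{category}: {counts[category]} agencies with reported weaknesses")
```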
Appendix VIII: Action 7—Enhance the Federal Response to Cyber Incidents Presidential Policy Directive-41 sets forth principles governing the federal government’s response to any cyber incident, whether involving government or private sector entities. According to the directive, federal agencies shall undertake three concurrent lines of effort when responding to any cyber incident: threat response; asset response; and intelligence support and related activities. In addition, when a federal agency is an affected entity, it shall undertake a fourth concurrent line of effort to manage the effects of the cyber incident on its operations, customers, and workforce. We have reviewed federal agencies’ preparation and response to cyber incidents and have identified the following weaknesses: The Office of Personnel Management (OPM) had not fully implemented controls to address deficiencies identified as a result of a cyber incident. In August 2017, we reported that OPM did not fully implement the 19 recommendations made by the Department of Homeland Security’s (DHS) United States Computer Emergency Readiness Team (US-CERT) after the data breaches in 2015. Specifically, we noted that, after breaches of personnel and background investigation information were reported, US-CERT worked with the agency to resolve issues and develop a comprehensive mitigation strategy. In doing so, US-CERT made 19 recommendations to OPM to help the agency improve its overall security posture and, thus, improve its ability to protect its systems and information from security breaches. In our August 2017 report, we determined that OPM had fully implemented 11 of the 19 recommendations. For the remaining 8 recommendations, actions for 4 were still in progress. For the other 4 recommendations, OPM indicated that it had completed actions to address them, but we noted further improvements were needed. Further, OPM had not validated actions taken to address the recommendations in a timely manner. As a result of our review, we made five other recommendations to OPM to improve its response to cyber incidents. The agency agreed with four of these and partially concurred with the one related to validating its corrective action. The agency did not cite a reason for its partial concurrence and we continued to believe that the recommendation was warranted. As of August 2018, three of the five recommendations had not been implemented. The Department of Defense (DOD) had not identified the National Guard’s cyber capabilities (e.g., computer network defense teams) or addressed challenges in its exercises. In September 2016, we reported that DOD had not identified the National Guard’s cyber capabilities or addressed challenges in its exercises. Specifically, DOD had not identified and did not have full visibility into National Guard cyber capabilities that could support civil authorities during a cyber incident because the department has not maintained a database that identifies National Guard cyber capabilities, as required by the National Defense Authorization Act for Fiscal Year 2007. In addition, we identified three types of challenges with DOD’s cyber exercises that could limit the extent to which DOD is prepared to support civilian authorities in a cyber incident: limited access because of classified exercise environments; limited inclusion of other federal agencies and critical infrastructure owners; and inadequate incorporation of joint physical-cyber scenarios. 
In our September 2016 report, we noted that DOD had not addressed these challenges. Furthermore, we stated that DOD had not met its goal of conducting a "tier 1" exercise (i.e., an exercise involving national-level organizations and combatant commanders and staff in highly complex environments), as stated in the DOD Cyber Strategy. Accordingly, we recommended that DOD (1) maintain a database that identifies National Guard cyber capabilities and (2) conduct a tier 1 exercise to prepare its forces in the event of a disaster with cyber effects. The department partially agreed with our recommendations, stating that its current mechanisms and exercises are sufficient to address the issues highlighted in our report. However, we continued to believe the recommendations were valid. As of August 2018, our two recommendations had not been implemented. DOD had not identified, clarified, or implemented all components of its incident response program. In April 2016, we also reported that DOD had not clarified its roles and responsibilities for defense support of civil authorities during cyber incidents. Specifically, we found that DOD's overarching guidance about how it is to support civil authorities as part of its Defense Support of Civil Authorities mission did not clearly define the roles and responsibilities of key DOD entities, such as DOD components, the supported command, or the dual-status commander, if they are requested to support civil authorities in a cyber incident. Further, we found that, in some cases, DOD guidance provides specific details on other types of Defense Support of Civil Authorities-related responses, such as assigning roles and responsibilities for fire or emergency services support and medical support, but does not provide the same level of detail or assign roles and responsibilities for cyber support. Accordingly, we recommended that DOD issue or update guidance that clarifies DOD roles and responsibilities to support civil authorities in a domestic cyber incident. DOD concurred with the recommendation and stated that the department will issue or update guidance. However, as of August 2018, the department had not implemented our recommendation. DHS's NCPS had limited capabilities for detecting and preventing intrusions, conducting analytics, and sharing information. In January 2016, we reported that NCPS had a limited ability to detect intrusions across all types of network traffic. In addition, we reported that the system's intrusion prevention capability was limited and its information-sharing capability was not fully developed. Furthermore, we reported that DHS's current metrics did not comprehensively measure the effectiveness of NCPS. Accordingly, we made nine recommendations to DHS to address these issues and others. The department agreed with our recommendations and has taken action to address one of them. However, as of August 2018, eight of these recommendations had not been implemented. Appendix IX: Action 8—Strengthen the Federal Role in Protecting the Cybersecurity of Critical Infrastructure The nation's critical infrastructure includes both public and private systems vital to national security and other essential functions, providing services such as banking, water, and electricity that underpin American society. The cyber threat to critical infrastructure continues to grow and represents a national security challenge.
To address this cyber risk, the President issued Executive Order 13636 in February 2013 to enhance the security and resilience of the nation's critical infrastructure and maintain a cyber environment that promotes safety, security, and privacy. In accordance with requirements in the executive order, which were enacted into law in 2014, the National Institute of Standards and Technology (NIST) facilitated the development of a set of voluntary standards and procedures for enhancing cybersecurity of critical infrastructure. This process, which involved stakeholders from the public and private sectors, resulted in NIST's Framework for Improving Critical Infrastructure Cybersecurity. The framework is to provide a flexible and risk-based approach for entities within the nation's 16 critical infrastructure sectors to protect their vital assets from cyber-based threats. Since then, progress has been made to protect the nation's critical infrastructure, but we have reported that challenges remain in ensuring its safety and security. The Department of Homeland Security (DHS) had not measured the impact of its efforts to support cyber risk reduction for high-risk chemical sector entities. In August 2018, we reported that DHS had strengthened its processes for identifying high-risk chemical facilities and assigning them to tiers under its Chemical Facility Anti-Terrorism Standards program. However, we found that DHS's new performance measure methodology did not measure reduction in vulnerability at a facility resulting from the implementation and verification of planned security measures during the compliance inspection process. We concluded that doing so would provide DHS an opportunity to begin assessing how vulnerability is reduced—and by extension, risk lowered—not only for individual high-risk facilities but for the Chemical Facility Anti-Terrorism Standards program as a whole. We also determined that, although DHS shares some Chemical Facility Anti-Terrorism Standards program information, first responders and emergency planners may not have all of the information they need to minimize the risk of injury or death when responding to incidents at high-risk facilities. This was due to first responders at the local level not having access to, or not widely using, a secure interface that DHS developed (known as the Infrastructure Protection Gateway) to obtain information about high-risk facilities and the specific chemicals they process. To address the weaknesses we identified, we recommended that DHS take actions to (1) measure reduction in vulnerability of high-risk facilities and use that data to assess program performance, and (2) encourage access to and wider use of the Infrastructure Protection Gateway among first responders and emergency planners. DHS concurred with both recommendations and outlined efforts underway or planned to address them. The federal government had identified major challenges to the adoption of the cybersecurity framework. In February 2018, we reported that entities within the sectors had identified four challenges to adopting the cybersecurity framework, including limited resources and competing priorities. We further reported that none of the sector-specific agencies for the 16 critical infrastructure sectors were measuring implementation by these entities, nor did they have qualitative or quantitative measures of framework adoption.
While research had been done to determine the use of the framework in the sectors, these efforts had not yielded meaningful results on sector-wide adoption. We concluded that, until sector-specific agencies have a better understanding of how entities use the framework, their ability to gauge implementation efforts would be limited. Accordingly, we made a total of nine recommendations to nine sector-specific agencies to address these issues. Five agencies agreed with the recommendations, while four others neither agreed nor disagreed; as of August 2018, none of the recommendations had been implemented. Agencies had not addressed risks to their systems and the information they maintain. In January 2018, we reported that the Department of Defense (DOD) and Federal Aviation Administration (FAA) identified a variety of operations and physical security risks related to Automatic Dependent Surveillance-Broadcast Out technology that could adversely affect DOD missions. These risks came from information broadcast by the system itself, as well as from potential vulnerabilities to electronic warfare- and cyber-attacks, and from the potential divestment of secondary-surveillance radars. However, DOD and FAA had not approved any solutions to address the risks they identified to the system. Accordingly, we recommended that DOD and FAA, among other things, take action to approve one or more solutions to address Automatic Dependent Surveillance-Broadcast Out-related security risks. DOD and FAA generally agreed with our recommendations; however, as of August 2018, they had not been implemented. Major challenges existed to securing the electricity grid against cyber threats. In October 2015, we testified on the status of the electricity grid's cybersecurity, reporting that entities associated with the grid have encountered several challenges. We noted that these challenges included monitoring the implementation of cybersecurity standards, building security features into smart grid systems, and establishing metrics for cybersecurity. We concluded that continued attention to these issues and cyber threats in general was required to help mitigate these risks to the electricity grid. DHS and other agencies needed to enhance cybersecurity in the maritime environment. In October 2015, we testified on the status of the cybersecurity of our nation's ports, concluding that steps needed to be taken to enhance their security. Specifically, we noted that DHS needed to include cyber risks in the risk assessments it already has in place, as well as address cyber risks in guidance for port security plans. We concluded that, until DHS and the other stakeholders take steps to address cybersecurity in the ports, the risk of a cyber-attack with serious consequences is increased. Sector-specific agencies were not properly measuring progress in cybersecurity. In November 2015, we reported that sector-specific agencies were not comprehensively addressing the cyber risk to the infrastructure, as 11 of the 15 sectors had significant cyber risk. Specifically, we noted that these entities had taken actions to mitigate their cyber risk; however, most had not identified incentives to promote cybersecurity in their sectors. We concluded that, while the sector-specific agencies have successfully disseminated the information they possess, there was still work to be done to properly measure cybersecurity implementation progress. Accordingly, we made seven recommendations to six agencies to address these issues.
Four of these agencies agreed with our recommendations, while two agencies did not comment on the recommendations. As of August 2018, all seven recommendations had not been implemented. Appendix X: Action 9—Improve Federal Efforts to Protect Privacy and Sensitive Data Advancements in technology, such as new search technology and data analytics software for searching and collecting information, have made it easier for individuals and organizations to correlate data and track it across large and numerous databases. In addition, lower data storage costs have made it less expensive to store vast amounts of data. Also, ubiquitous Internet and cellular connectivity make it easier to track individuals by allowing easy access to information pinpointing their locations. We have reported on weaknesses in federal efforts to protect privacy and sensitive data. For example: Schools' implementation of records retention policies varied, and federal oversight was limited. Based on a survey of the schools, the majority of the schools had policies in place for records retention, but the way these policies were implemented varied widely for paper and electronic records. We also found that oversight of the schools' programs was lacking, as Federal Student Aid conducts reviews but does not consider information security as a factor for selecting schools. The Centers for Medicare & Medicaid Services (CMS) needed to improve its oversight of the security of state-based marketplaces, which carry out provisions of the Patient Protection and Affordable Care Act. We made three recommendations to CMS related to defining procedures for overseeing the security of state-based marketplaces and requiring continuous monitoring of state marketplace controls. HHS concurred with our recommendations. As of August 2018, two of the recommendations had not yet been implemented. Poor planning and ineffective monitoring had resulted in the unsuccessful implementation of government initiatives designed to protect federal data. In July 2017, we reported that government initiatives aimed at eliminating the unnecessary collection, use, and display of Social Security numbers (SSN) have had limited success. Specifically, in agencies' responses to our questionnaire on SSN reduction efforts, the 24 agencies covered by the Chief Financial Officers Act reported successfully curtailing the collection, use, and display of SSNs. Nevertheless, all of the agencies continued to rely on SSNs for important government programs and systems, as seen in figure 10. Appendix XI: Action 10—Appropriately Limit the Collection and Use of Personal Information and Ensure That It Is Obtained with Appropriate Knowledge or Consent Given that access to data is so pervasive, personal privacy hinges on ensuring that databases of personally identifiable information (PII) maintained by government agencies or on their behalf are protected both from inappropriate access (i.e., data breaches) and from inappropriate use (i.e., for purposes not originally specified when the information was collected). Likewise, the trend in the private sector of collecting extensive and detailed information about individuals needs appropriate limits. The vast number of individuals potentially affected by data breaches at federal agencies and private sector entities in recent years increases concerns that PII is not being properly protected. The emergence of IoT devices can facilitate the collection of information about individuals without their knowledge or consent. In May 2017, we reported that the IoT has become increasingly used to communicate and process vast amounts of information using "smart" devices (such as a fitness tracker connected to a smartphone).
However, we noted that this emerging technology also presents new issues in areas such as information security, privacy, and safety. Smartphone tracking apps can present serious safety and privacy risks. In April 2016, we reported on smartphone applications that facilitated the surreptitious tracking of a smartphone’s location and other data. Specifically, we noted that some applications could be used to intercept communications and text messages, essentially facilitating the stalking of others. While it is illegal to use these applications for these purposes, stakeholders differed over whether current federal laws needed to be strengthened to combat stalking. We also noted that stakeholders expressed concerns over what they perceived to be limited enforcement of laws related to tracking apps and stalking. In particular, domestic violence groups stated that additional education of law enforcement officials and consumers about how to protect against, detect, and remove tracking apps is needed. The Federal Bureau of Investigation (FBI) has not ensured privacy and accuracy related to the use of face recognition technology. In May 2016, we reported that the Department of Justice had not been timely in publishing and updating privacy documentation for the FBI’s use of face recognition technology. Publishing such documents in a timely manner would better assure the public that the FBI is evaluating risks to privacy when implementing systems. Also, the FBI had taken limited steps to determine whether the face recognition system it was using was sufficiently accurate. We recommended that the department ensure required privacy-related documents are published and that the FBI test and review face recognition systems to ensure that they are sufficiently accurate. Of the six recommendations we made, the Department of Justice agreed with one, partially agreed with two, and disagreed with three. We continued to believe all the recommendations made were valid. As of August 2018, the six recommendations had not been implemented. Appendix XII: GAO Contacts and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to the contacts named above, Jon Ticehurst, Assistant Director; Kush K. Malhotra, Analyst-In-Charge; Chris Businsky; Alan Daigle; Rebecca Eyler; Chaz Hubbard; David Plocher; Bradley Roach; Sukhjoot Singh; Di’Mond Spencer; and Umesh Thakkar made key contributions to this report.
Why GAO Did This Study Federal agencies and the nation's critical infrastructures—such as energy, transportation systems, communications, and financial services—are dependent on information technology systems to carry out operations. The security of these systems and the data they use is vital to public confidence and national security, prosperity, and well-being. The risks to these systems are increasing as security threats evolve and become more sophisticated. GAO first designated information security as a government-wide high-risk area in 1997. This was expanded to include protecting cyber critical infrastructure in 2003 and protecting the privacy of personally identifiable information in 2015. This report provides an update to the information security high-risk area. To do so, GAO identified the actions the federal government and other entities need to take to address cybersecurity challenges. GAO primarily reviewed prior work issued since the start of fiscal year 2016 related to privacy, critical federal functions, and cybersecurity incidents, among other areas. GAO also reviewed recent cybersecurity policy and strategy documents, as well as information security industry reports of recent cyberattacks and security breaches. What GAO Found GAO has identified four major cybersecurity challenges and 10 critical actions that the federal government and other entities need to take to address them. GAO continues to designate information security as a government-wide high-risk area due to increasing cyber-based threats and the persistent nature of security vulnerabilities. GAO has made over 3,000 recommendations to agencies aimed at addressing cybersecurity shortcomings in each of these action areas, including protecting cyber critical infrastructure, managing the cybersecurity workforce, and responding to cybersecurity incidents. Although many recommendations have been addressed, about 1,000 have not yet been implemented. Until these shortcomings are addressed, federal agencies' information and systems will be increasingly susceptible to the multitude of cyber-related threats that exist. What GAO Recommends GAO has made over 3,000 recommendations to agencies since 2010 aimed at addressing cybersecurity shortcomings. As of August 2018, about 1,000 still needed to be implemented.
Background Overview of the U.S. Pipeline System The national pipeline system consists of more than 2.7 million miles of networked pipelines transporting oil, natural gas, and other hazardous products. Hazardous liquid and natural gas pipelines—primarily buried underground in the continental United States—run under remote and open terrain, as well as densely populated areas. These pipelines are of three main types: Hazardous liquid: About 216,000 miles of hazardous liquid pipeline transport crude oil, diesel fuel, gasoline, jet fuel, anhydrous ammonia, and carbon dioxide. Natural gas transmission and storage: About 319,000 miles of pipeline—mostly interstate—transport natural gas from sources to communities. Natural gas distribution: About 2.2 million miles of pipeline—mostly intrastate—transport natural gas from transmission sites to consumers. Figure 1 depicts the network of hazardous liquid and natural gas transmission pipelines in the United States. More than 3,000 pipeline companies operate the nation's pipeline systems, which can traverse multiple states and the U.S. borders with Canada and Mexico. Many pipeline systems are composed of the pipelines themselves, as well as a variety of facilities, such as storage tanks, compressor stations, and control centers. Most pipeline systems are monitored and controlled through automated ICS or Supervisory Control and Data Acquisition (SCADA) systems using remote sensors, signals, and preprogrammed parameters to activate and deactivate valves and pumps to maintain flows within tolerances. Federal agencies and pipeline operators determine the criticality of pipeline systems and their facilities based on their importance to the nation's energy infrastructure; their service to installations critical to national defense; or their potential, if attacked, to cause mass casualties or to significantly affect public drinking water for major population centers. Accordingly, those determined to be critical merit increased attention to security. However, as we previously reported, the inherent design and operation of U.S. pipeline systems may reduce some potential impacts of lost service. The pipeline sector is generally considered to be resilient and versatile. Historically, pipeline operators have been able to quickly respond to the adverse consequences of an incident—whether it is damage from a major hurricane or a backhoe—and restore pipeline service. Pipeline infrastructure also includes redundancies such as parallel pipelines or interconnections that enable operators to reroute material through the network. Figure 2 depicts the U.S. pipeline system, its basic components, examples of vulnerabilities, and the entities to which it supplies energy and raw materials. These entities include utility companies, airports, military sites, and industrial and manufacturing facilities. Physical and Cyber Threats to Pipeline Systems According to TSA, pipelines are vulnerable to physical attacks—including the use of firearms or explosives—largely due to their stationary nature, the volatility of transported products, and the dispersed nature of pipeline networks spanning urban and outlying areas. The nature of the transported commodity and the potential effect of an attack on national security, commerce, and public health make some pipelines and their assets more attractive targets for attack. Oil and gas pipelines have been and continue to be targeted by terrorists and other malicious groups globally. Terrorists have also targeted U.S.
pipelines, but have not succeeded in carrying out an attack. Further, environmental activists and lone actors seeking to halt the construction of new pipelines through sabotage have recently emerged as a new threat to pipelines. For example, in March 2017, activists used blowtorches to cut holes in empty portions of the Dakota Access Pipeline in two states. In February 2017, local law enforcement officers fatally shot a man who used an assault rifle to damage the Sabal Trail Pipeline, a natural gas pipeline under construction in Florida. The sophisticated computer systems that pipeline operations rely on are also vulnerable to various cyber threats. According to DOE, the frequency, scale, and sophistication of cyber threats have increased, and attacks have become easier to launch. NCCIC reported that the energy sector, which includes pipelines, experienced more cyber incidents than any other sector from 2013 to 2015, accounting for 35 percent of the 796 incidents reported by all critical infrastructure sectors. In 2016, NCCIC reported that the energy sector was the third most frequently attacked sector. Further, according to DOE, the cost of preventing and responding to cyber incidents in the energy sector is straining the ability of companies to adequately protect their critical cyber systems. For example, a 2015 study by the Ponemon Institute estimated the annualized cost of cyber crime for an average energy company to be about $28 million. Ineffective protection of cyber assets from these threats can increase the likelihood of security incidents and cyber attacks that disrupt critical operations; lead to inappropriate access to and disclosure, modification, or destruction of sensitive information; and threaten national security, economic well-being, and public health and safety. Unintentional or nonadversarial threat sources may include failures in equipment or software due to aging, resource depletion, and errors made by end users. They also include natural disasters and failures of critical infrastructure on which the organization depends, but that are outside of the control of the organization. Intentional or adversarial threats may include corrupt employees, criminal groups, terrorists, and nations that seek to leverage the organization's dependence on cyber resources (i.e., information in electronic form, information and communications technologies, and the communications and information-handling capabilities provided by those technologies). These threat adversaries vary in terms of their capabilities, their willingness to act, and their motives, which can include seeking monetary gain or seeking an economic, political, or military advantage. Cyber threat adversaries make use of various techniques, tactics, practices, and exploits to adversely affect an organization's computers, software, or networks, or to intercept or steal valuable or sensitive information. For example, an attacker could infiltrate a pipeline's operational systems via the internet or other communication pathways to potentially disrupt its service and cause spills, releases, explosions, or fires. Moreover, ICS, which were once largely isolated from the Internet and the company's information technology systems, are increasingly connected in modern energy systems, allowing cyber attacks to originate in business systems and migrate to operational systems.
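As noted above, pipeline ICS and SCADA systems apply preprogrammed parameters to remote sensor readings to keep flows within tolerances. The following is a minimal, hypothetical sketch of that kind of threshold logic; the sensor, valve, and setpoint values are assumptions for illustration only and are not drawn from any operator's actual system. It also illustrates why connectivity from business networks to operational systems matters: an intruder who can reach such logic could alter setpoints or spoof sensor readings.

```python
import random

# Hypothetical setpoints: real systems load site-specific parameters from the
# control system configuration rather than hard-coded constants.
LOW_PRESSURE_PSI = 400.0   # below this threshold, open the valve
HIGH_PRESSURE_PSI = 900.0  # above this threshold, close the valve


def read_pressure_sensor():
    """Stand-in for a field sensor reading; simulated here for illustration."""
    return random.uniform(300.0, 1000.0)


def set_valve(open_valve):
    """Stand-in for a command sent to an actuator over the control network."""
    print("valve", "OPEN" if open_valve else "CLOSED")


def control_step():
    """One pass of a simplified threshold control loop."""
    pressure = read_pressure_sensor()
    if pressure < LOW_PRESSURE_PSI:
        set_valve(True)    # pressure too low: open valve to raise flow
    elif pressure > HIGH_PRESSURE_PSI:
        set_valve(False)   # pressure too high: close valve to reduce flow
    # otherwise leave the valve in its current state
    print(f"pressure reading: {pressure:.1f} psi")


if __name__ == "__main__":
    for _ in range(5):  # a real control loop would run continuously
        control_step()
```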
These risks are not hypothetical. In 2018, malicious nation-state actors used spear-phishing and similar approaches against energy sector organizations to gain access to their business systems, conduct reconnaissance, and collect information about their ICS. Similarly, in April 2012, the Industrial Control Systems Cyber Emergency Response Team reported that an unidentified cyber attacker had conducted a series of cyber intrusions into U.S. natural gas pipeline systems beginning in December 2011.

Key Critical Infrastructure Protection Guidance and Presidential Directives

Federal policy and public-private plans establish roles and responsibilities for the protection of critical infrastructure, including pipelines. These include Presidential Policy Directive 21 (PPD-21), the NIPP, and Executive Order 13636. PPD-21, issued in February 2013, reflects an all-hazards approach to protecting critical infrastructure against threats that include natural disasters, terrorism, and cyber incidents. The directive also identifies the 16 critical infrastructure sectors and assigns responsibility for them among nine designated federal sector-specific agencies.

While PPD-21 identified the critical infrastructure sectors and assigned a sector-specific agency to each sector, the NIPP outlines critical infrastructure stakeholders' roles and responsibilities for critical infrastructure security and resilience. It describes a voluntary partnership model as the primary means of coordinating government and private sector efforts to protect critical infrastructure. As part of the partnership structure, the designated sector-specific agencies serve as the lead coordinators for the security programs of their respective sectors. As sector-specific agencies, federal departments or agencies lead, facilitate, or support the security and resilience programs and associated activities of their designated critical infrastructure sector. For example, DHS and DOT are both designated as sector-specific agencies for the transportation systems sector, which includes pipelines. Each sector also has a government coordinating council, consisting of representatives from various levels of government, and many have a sector coordinating council (SCC) consisting of owner-operators of these critical assets or members of their respective trade associations. For example, the sector has a Transportation Government Coordinating Council, and the Pipeline Modal SCC has been established to represent pipeline operators.

The NIPP also outlines a risk management framework for critical infrastructure protection. As shown in Figure 3, the NIPP uses the risk management framework as a planning methodology intended to inform how decision makers take actions to manage risk. The framework calls for public and private critical infrastructure partners to conduct risk assessments to understand the most likely and severe incidents that could affect their operations and communities, and to use this information to support planning and resource allocation. According to DHS, risk is influenced by the nature and magnitude of a threat, the vulnerabilities to that threat, and the consequences that could result, as shown in Figure 4.
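As a notional illustration of this framework (the systems, ratings, and scoring convention below are hypothetical and do not represent any DHS or TSA model), threat, vulnerability, and consequence ratings can be combined into a relative risk score and used to rank assets:

```python
# Minimal, hypothetical illustration of combining threat, vulnerability, and
# consequence into a relative risk score. Ratings and systems are illustrative
# only and do not reflect any actual assessment.

systems = {
    "Pipeline system A": {"threat": 3, "vulnerability": 4, "consequence": 5},
    "Pipeline system B": {"threat": 4, "vulnerability": 2, "consequence": 4},
    "Pipeline system C": {"threat": 2, "vulnerability": 3, "consequence": 2},
}

def risk_score(factors: dict) -> int:
    # One common convention treats relative risk as the product of the three factors.
    return factors["threat"] * factors["vulnerability"] * factors["consequence"]

ranked = sorted(systems.items(), key=lambda kv: risk_score(kv[1]), reverse=True)
for name, factors in ranked:
    print(f"{name}: relative risk = {risk_score(factors)}")
# Pipeline system A: relative risk = 60
# Pipeline system B: relative risk = 32
# Pipeline system C: relative risk = 12
```

In practice, agencies and operators weight and combine these factors in different ways; the point of the sketch is only that all three components feed the prioritization.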
Federal policy has also encouraged voluntary information-sharing mechanisms between the federal government and critical infrastructure owners and operators. For example, Information Sharing and Analysis Centers (ISACs) are formed by critical infrastructure owners and operators to gather, analyze, appropriately sanitize, and disseminate intelligence and information related to critical infrastructure. They typically collect, analyze, and disseminate actionable threat information to their members and provide members with tools to mitigate risks and enhance resiliency. ISACs in which pipeline operators may participate include the Oil and Natural Gas ISAC, the Downstream Natural Gas ISAC, and the Electricity ISAC.

Finally, in February 2013, the president issued Executive Order 13636, Improving Critical Infrastructure Cybersecurity, which cited repeated cyber intrusions into critical infrastructure as demonstrating the need for improved cybersecurity. Executive Order 13636 outlined actions for improving critical infrastructure cybersecurity, including direction for the National Institute of Standards and Technology (NIST) to lead the development of a voluntary, risk-based cybersecurity framework comprising a set of industry standards and best practices to help organizations manage cybersecurity risks. NIST issued the framework in 2014 and updated it in April 2018. The order also addressed the need to improve cybersecurity information sharing and to collaboratively develop risk-based standards. It stated that U.S. policy was to increase the volume, timeliness, and quality of cyber threat information shared with private sector entities so that these entities may better protect and defend themselves against cyber threats.

Pipeline Stakeholders' Security Roles and Responsibilities

Protecting the nation's pipeline systems is a responsibility shared by both the federal government and private industry. As a result, several federal departments and agencies, as well as the private sector, have significant roles in pipeline physical and cyber-related security. These entities include the following:

Transportation Security Administration (TSA). TSA, within DHS, has primary oversight responsibility for the physical security and cybersecurity of transmission and distribution pipeline systems. Within TSA, the Security Policy and Industry Engagement office's Pipeline Security Branch is charged with overseeing the pipeline security program. Pursuant to its authority, TSA's Pipeline Security Branch first issued its voluntary Pipeline Security Guidelines in 2011 and released revised guidelines in March 2018. In accordance with the 9/11 Commission Act, TSA's Pipeline Security Branch identifies the top 100 critical pipeline systems in the nation. To do so, it uses annual system throughput, that is, the amount of hazardous liquid or natural gas product transported through a pipeline system in one year. TSA also ranks the relative risk among the top 100 critical pipeline systems, as discussed later in this report. Additionally, TSA's Pipeline Security Branch is responsible for conducting voluntary Corporate Security Reviews (CSR) and Critical Facility Security Reviews (CFSR), which assess the extent to which the 100 most critical pipeline systems are following the intent of TSA's Pipeline Security Guidelines. See figure 5 below for an overview of the CSR and CFSR processes. In addition, TSA Intelligence and Analysis is responsible for collecting and analyzing threat information related to the transportation network and for sharing relevant threat information with pipeline stakeholders.

National Cybersecurity and Communications Integration Center (NCCIC).
Within DHS, NCCIC assists critical infrastructure owners in addressing cyber incidents and attacks, including those targeting industrial control systems. The NCCIC’s mission is to reduce the likelihood and severity of incidents that may significantly compromise the security and resilience of the nation’s critical information technology and communications networks. NCCIC’s role is to serve as the federal civilian interface for sharing information related to cybersecurity risks, incidents, analysis, and warnings with federal and nonfederal entities, and to provide shared situational awareness to enable real-time actions to address cybersecurity risks and incidents to federal and nonfederal entities. Pipeline and Hazardous Materials Safety Administration (PHMSA). PHMSA, within DOT, is responsible for regulating the safety of hazardous materials transportation and the safety of pipeline systems, some aspects of which can be related to pipeline security. In 2004, PHMSA and TSA entered into a memorandum of understanding regarding their respective roles in all modes of transportation. In 2006, they signed an annex to the memorandum of understanding that further delineates lines of authority and responsibility between TSA and PHMSA on pipeline and hazardous materials transportation security. The annex identifies TSA as the lead federal entity for transportation security, including hazardous materials and pipeline security, and PHMSA as responsible for administering a national program of safety in natural gas and hazardous liquid pipeline transportation, including identifying pipeline safety concerns and developing uniform safety standards. Department of Energy (DOE). DOE is responsible for protecting electric power, oil, and natural gas delivery infrastructure and, in December 2015, was identified in statute as the sector-specific agency for cybersecurity for the energy sector. The Office of Cybersecurity, Energy Security, and Emergency Response is the lead for DOE’s cybersecurity efforts. In addition, DOE operates the National SCADA Test Bed Program, a partnership with Idaho National Laboratory, Sandia National Laboratories, and other national laboratories which addresses control system security challenges in the energy sector. Among its key functions, the program performs control systems testing, research, and development; control systems requirements development; and industry outreach. Federal Energy Regulatory Commission (FERC). FERC regulates the U.S. bulk electric power system, which is increasingly powered by natural gas pipeline systems. FERC has regulatory authority over interstate natural gas pipelines under the Natural Gas Act. However, its role is limited to natural gas pipeline siting and rate regulation. The North American Electric Reliability Corporation is the federally designated U.S. Electric Reliability Organization, and is overseen by FERC. The North American Electric Reliability Corporation, with approval from FERC, has developed mandatory critical infrastructure protection standards for protecting electric utility–critical and cyber-critical assets. Private sector. Although TSA has primary federal responsibility for overseeing interstate pipeline security, private sector pipeline operators are responsible for implementing asset-specific protective security measures. As we previously reported, operators have increased their attention on security by incorporating security practices and programs into their overall business operations. 
Pipeline operators' interests and concerns are primarily represented by five major trade associations with ties to the pipeline industry—the Interstate Natural Gas Association of America (INGAA), American Gas Association (AGA), American Public Gas Association, American Petroleum Institute (API), and Association of Oil Pipe Lines. According to TSA officials, pipeline operators, and association representatives, these associations have worked closely with the federal government on a variety of pipeline security-related issues, including collaborating on TSA's voluntary standards and information sharing.

Federal and Nonfederal Pipeline Stakeholders Exchange Risk-Related Security Information

All of the pipeline operators and pipeline association representatives we interviewed reported receiving security information from federal and nonfederal entities. Pipeline operators also reported providing security-related information to federal agencies, including TSA, as incidents occur.

Multiple federal entities exchange alerts of physical and cybersecurity incidents and other risk-related information with critical infrastructure partners, including pipeline operators. For example, DHS components, including TSA's Intelligence and Analysis and the NCCIC, share security-related information on physical and cyber threats and incidents with sector stakeholders. Specifically, Intelligence and Analysis provides quarterly intelligence briefings to pipeline operators. NCCIC also issues indicator bulletins, which can contain information related to cyber threat indicators, defensive measures, and cybersecurity risks and incidents. In addition, TSA and other federal entities have coordinated to address specific pipeline-related security incidents. For example, TSA officials coordinated with DOT, DOE, the Department of Justice, and FERC through the Oil and Natural Gas subsector SCC to address ongoing incidents of vandalism and sabotage of critical pipeline assets by environmental activists in 2016. In July 2017, according to DOT officials, PHMSA and TSA collaborated on a web-based portal to facilitate sharing sensitive but unclassified incident information among federal agencies with pipeline-related responsibilities. See table 1 for the key federal information sharing entities and programs that exchange security-related or incident information with critical infrastructure stakeholders, including the pipeline sector.

Pipeline operators also share security-related information with TSA and the NCCIC. In its Pipeline Security Guidelines, TSA requests that pipeline operators report by telephone or email to its Transportation Security Operations Center (TSOC) any physical security incidents that are indicative of a deliberate attempt to disrupt pipeline operations or activities that could be considered precursors to such an attempt. TSA's Pipeline Security Guidelines also request that operators report to the NCCIC any actual or suspected cyber attacks that could affect pipeline industrial control systems or other information technology-based systems. According to the TSOC's operating procedures, if a reported incident meets certain criteria, such as an incident that was intended to cause or resulted in damage, or that requires a general evacuation of a facility, the TSOC watch officer is then to contact Office of Security and Industry Engagement officials. According to TSA officials, the TSOC does not conduct investigations of the specific security incidents that pipeline operators report.
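As a notional sketch of this kind of escalation logic (the field names and criteria below are simplified assumptions based on the examples above, not the TSOC's actual operating procedures), an incident report might be screened as follows:

```python
# Notional sketch of incident triage; criteria are simplified assumptions
# drawn from the escalation examples described above, not TSOC procedures.

def escalate_to_policy_office(incident: dict) -> bool:
    """Return True if a reported incident should be passed to policy officials."""
    return (
        incident.get("intended_or_caused_damage", False)
        or incident.get("required_general_evacuation", False)
    )

report = {
    "facility": "Compressor station (hypothetical)",
    "intended_or_caused_damage": False,
    "required_general_evacuation": True,
}
print(escalate_to_policy_office(report))  # -> True
```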
TSOC staff do, however, analyze the incident information they receive for national trends and common threats. TSA officials stated that they share their observations with pipeline operators and other critical infrastructure asset owners during monthly and quarterly conference calls that TSA holds with pipeline operators.

All the pipeline operators and association representatives we interviewed identified other nonfederal information sharing entities, including ISACs, fusion centers, industry associations, and SCCs, which provide forums for exchanging information about physical and cyber incidents throughout the pipeline sector. See table 2 for nonfederal information sharing entities identified as available to pipeline operators.

Operators and TSA officials reported that the current backlog in granting security clearances for some key pipeline operator employees was a significant factor affecting information sharing between TSA and pipeline operators. TSA officials acknowledged that some pipeline operators have had difficulty obtaining security clearances for key employees due to ongoing backlogs in processing requests by the Office of Personnel Management National Background Investigation Bureau, and that TSA's ability to share timely information with operators whose staff do not have a clearance may be hindered. Three of the 10 pipeline operators we interviewed identified receiving timely classified security information as a specific challenge due, in part, to difficulties staff have had obtaining security clearances. Further, 7 of the 10 pipeline operators that we interviewed reported experiencing delays in obtaining a security clearance or were aware of others who had experienced this issue. However, according to three operators we interviewed, TSA was helpful in facilitating approval of security clearances for the operators' personnel to access classified information when necessary. This security clearance challenge is not faced by pipeline operators alone. In January 2018, we designated the government-wide personnel security clearance process, including the backlog of background investigations, as a high-risk area. We will continue to monitor agencies' progress in reducing the backlog and improving the security clearance process.

Pipeline Operators Use a Range of Guidelines and Standards to Address Risks, but TSA's Guidelines Lack Clear Definitions and a Process for Updating Them

Pipeline operators that we interviewed reported using a range of guidelines and standards to address their physical and cybersecurity risks, and all of them reported implementing TSA's voluntary Pipeline Security Guidelines that were applicable to their operations. TSA revised and issued its Pipeline Security Guidelines in March 2018, but TSA lacks a defined process for considering future updates to the guidelines when supporting guidance, such as the NIST Framework for Improving Critical Infrastructure Cybersecurity (Cybersecurity Framework), is revised. Furthermore, TSA has not clearly defined the terms within the criteria that pipeline operators are to use to determine the criticality of their facilities.

Pipeline Operators Use a Range of Guidelines and Standards to Address Security

Pipeline operators that we interviewed reported using a range of guidelines and standards to address their physical and cybersecurity risks. For example, all 10 of the pipeline operators we interviewed stated that they had implemented those portions of the voluntary 2011 TSA Pipeline Security Guidelines they determined to be applicable to their operations.
The guidelines provide TSA’s recommendations for pipeline industry security practices such as establishing a corporate security program and identifying critical facilities among others (see sidebar). Five of the 10 pipeline operators we interviewed characterized the guidelines as generally or somewhat effective in helping to secure their operations, 1 was neutral on their effectiveness, and 4 did not provide an assessment of the guidelines’ effectiveness. However, one operator pointed out that they had not adopted the guidelines’ recommended interval of 36 months or less for conducting security vulnerability assessments due to staffing limitations. Also, another pipeline operator noted that they were working to implement the guidelines in the operations of a newly acquired asset that they determined was not using the guidelines in the same manner as their company. All of the pipeline operators we interviewed reported using other guidelines or standards to address pipeline systems’ security risks. For example, pipeline operators reported using and industry association representatives reported that their members use INGAA’s Control Systems Cyber Security Guidelines for the Natural Gas Pipeline Industry, API’s Pipeline SCADA Security standard, and the NIST Cybersecurity Framework as sources of cybersecurity standards, guidelines, and practices that may be scaled and applied to address a pipeline operator’s cybersecurity risks. Further, pipeline operators are required to adhere to regulations related to pipeline safety and, depending upon their assets, operations, and location, may be required to adhere to regulations for electrical utilities, chemical storage facilities, and locations near waterways. For example, all pipeline operators must adhere to DOT’s PHMSA safety regulations. In addition, pipeline operators whose systems include chemical facilities may be required to comply with DHS’s Chemical Facility Anti-Terrorism Standards (CFATS). Pipeline operators whose systems include a terminal located on a U.S. port may be required to comply with Maritime Transportation Security Act regulations. For a listing of federal and industry guidelines identified as applicable to security by the pipeline operators, see appendix I. TSA Does Not Have a Documented Process for Updating Its Pipeline Security Guidelines to Reflect Revisions to Supporting Standards TSA’s Pipeline Security Branch issued its revised Pipeline Security Guidelines in March 2018, but TSA has not established a documented process to ensure that revisions occur and fully capture updates to supporting standards and guidance. The guidelines were revised to, among other things, reflect the dynamic threat environment and to incorporate cybersecurity principles and practices from the NIST Cybersecurity Framework, which were initially issued in February 2014. To revise the guidelines and incorporate feedback, according to Pipeline Security Branch officials, they incorporated outcomes from pipeline modal threat assessments and best practices from security reviews, and collaborated with pipeline sector stakeholders—including industry associations and other federal agencies with a role in pipeline security. Officials from the industry associations we interviewed confirmed that they provided input to the revised pipeline guidelines, including meeting with and consolidating comments from member pipeline operators. See figure 6 for a timeline of events pertinent to federal pipeline security guidelines. 
TSA's Pipeline Security Smart Practice Observations for pipeline operators states that security plans should include a documented process for reviewing and updating security plans on a periodic and an as-needed basis. Standards for Internal Control in the Federal Government states that periodic review of policies, procedures, and related control activities should occur to determine their continued relevance and effectiveness in achieving identified objectives or addressing related risks. The NIPP and NIST also emphasize the need to provide updates on incident response guidance and security procedures, respectively. Moreover, other pipeline industry guidance cited by TSA's guidelines also has a prescribed interval for review and revision. For example, API reviews its standards at least every 5 years. However, TSA has not instituted a documented process to consider the need to update the Pipeline Security Guidelines on a regular basis.

Pipeline Security Branch officials acknowledged the value of having a defined process for reviewing and, if necessary, revising TSA's Pipeline Security Guidelines at regular defined intervals to ensure that the guidelines include, among other things, newly identified best practices and updated industry guidance that are relevant to pipeline operators, such as the elements of the latest version of NIST's Cybersecurity Framework. For example, TSA's revisions to its guidelines incorporated some, but not all, of the elements of the NIST Cybersecurity Framework version 1. Specifically, to improve incident response, the NIST framework recommends adding an incident response analysis and feedback function to a security program. However, TSA's Pipeline Security Guidelines do not include similar steps for pipeline operators to include in their pipeline security programs. Further, because NIST released version 1.1 of the Cybersecurity Framework in April 2018, the guidelines that TSA released in March 2018 do not incorporate cybersecurity elements that NIST added to the latest Cybersecurity Framework, such as the Supply Chain Risk Management category. Pipeline Security Branch officials said that they have not instituted a review process on a regular basis because they intended to review and revise TSA's guidelines on an as-needed basis in response to updated supporting guidance, but could provide no timeline for doing so. Without a documented process defining how frequently Pipeline Security Branch staff are to review and revise the guidelines, TSA cannot ensure that its guidelines reflect the latest known standards and best practices for physical security and cybersecurity, or address the persistent and dynamic security threat environment currently facing the nation's pipeline system.

Pipeline Security Guidelines Lack Clear Definitions to Ensure Pipeline Operators Consistently Apply TSA's Criteria for Identifying Critical Facilities

Under TSA's Pipeline Security Guidelines, pipeline operators are to self-identify the critical facilities within their system and report their critical facilities to TSA. TSA's Pipeline Security Branch conducts CFSRs at the critical facilities that pipeline operators have identified. However, our analysis of TSA's data found that the operators of at least 34 of the top 100 critical pipeline systems reported that they had no critical facilities. Accordingly, TSA would not conduct a CFSR at any of these systems' facilities because their operators identified none of them as critical.
The fact that the operators of about one-third of the highest risk systems identified no critical facilities may be due, in part, to the Pipeline Security Branch not clearly defining the criteria outlined in the Pipeline Security Guidelines that pipeline operators are to use to determine the criticality of their facilities. Three of the 10 operators we interviewed stated that some companies reported to TSA that they had no critical facilities and may be taking advantage of the guidelines' lack of clarity. Accordingly, operators that report no critical facilities would avoid TSA's reviews of their facilities.

[Sidebar: Among TSA's eight criteria, a facility is considered critical if damage to it would, for example, disrupt service or deliverability such that a state or local government could not provide essential public services and emergency response for an extended period of time; significantly damage or destroy the intended use of major rivers, lakes, or waterways (e.g., public drinking water for large populations or disruption of major commerce or public transportation routes); disrupt service or deliverability to a significant number of customers or individuals for an extended period of time; or disrupt operations for an extended period of time (i.e., business critical facilities).]

Our review of the eight criteria included in TSA's Pipeline Security Guidelines (see sidebar) found that no additional examples or clarification are provided to help operators determine criticality. Although we previously noted that 5 of the 10 operators we interviewed generally found TSA's guidelines as a whole to be helpful in addressing pipeline security, more than half of the operators we interviewed identified TSA's criticality criteria as a specific area for improvement. Specifically, 3 of the 10 pipeline operators that we interviewed stated that TSA had not clearly defined certain terms within the criteria, and 3 additional operators of the 10 reported that additional consultation with TSA was necessary to appropriately apply the criteria and determine their facilities' criticality. For example, 2 operators told us that individual operators may interpret TSA's criterion, "cause mass casualties or significant health effect," differently. One of these operators stated that this criterion could be interpreted either as a specific number of people affected or as a volume of casualties sufficient to overwhelm a local health department, which could vary depending on the locality. Another operator reported that because TSA's criteria were not clear, the company created its own criteria, which helped it identify two additional critical facilities.

Pipeline Security Branch officials acknowledged there are companies that report having no critical facilities in their pipeline systems. According to Pipeline Security Branch officials, pipeline operators are in the best position to determine which of their facilities are critical, and the companies that have determined that their pipeline systems have no critical facilities also have reported sufficient redundancies to make none of their facilities critical to the continuity of their operations. According to these officials, they have had extensive discussions with pipeline company officials to assess the validity of their criticality determinations, and have closely questioned companies to ensure they have properly applied TSA's criteria. However, according to TSA's Pipeline Security Guidelines, operators should use a consistent set of criteria for determining the criticality of their facilities.
In addition, Standards for Internal Control in the Federal Government states that management should define objectives clearly to enable the identification of risks. To achieve this, management generally defines objectives in specific and measurable terms and ensures the terms are fully and clearly set forth so they can be easily understood. Pipeline Security Branch officials acknowledged that the criticality definitions in the Pipeline Security Guidelines could be clarified to be more specific. Additionally, an industry association representative reported that the association, in consultation with TSA, has been developing supplementary guidance for its members to clarify certain terms in TSA’s critical facility criteria. As of October 2018 this guidance is still under review at the association and has not been made available to the association’s members. Pipeline Security Branch officials confirmed they worked with the industry association on its supplementary guidance, but also acknowledged that the supplementary guidance may only be distributed to the association’s membership. Without clearly defined criteria for determining pipeline facilities’ criticality, TSA cannot ensure that pipeline operators are applying its guidance uniformly. Further, because TSA selects the pipeline facilities on which to conduct CFSRs based on operators’ determinations, TSA cannot fully ensure that all of the critical facilities across the pipeline sector have been identified using the same criteria, or that their vulnerabilities have been identified and addressed. TSA Assesses Pipeline Risk and Conducts Security Reviews, but Limited Workforce Planning and Shortfalls in Assessing Risk Present Challenges TSA’s Intelligence and Analysis identifies security risks to pipeline systems through various assessments. Additionally, TSA’s Pipeline Security Branch conducts security reviews to assess pipeline operators’ implementation of TSA’s Pipeline Security Guidelines, but gaps in staffing and lack of a workforce plan may affect its ability to carry out effective reviews. The Pipeline Security Branch also developed a pipeline risk assessment to rank relative risk of the top 100 critical pipeline systems and to prioritize its security reviews of pipeline companies, but shortfalls in its calculations of threat, vulnerability, and consequence may limit its ability to accurately identify pipeline systems with the highest risk. Finally, the pipeline risk assessment has not been peer reviewed to validate the assessment’s data and methodology, which we previously reported as a best practice in risk management. TSA Conducts Assessments of Pipeline Security Risks TSA’s Intelligence and Analysis produces assessments related to pipeline security risks, including Pipeline Modal and Cyber Modal Threat Assessments and the Transportation Sector Security Risk Assessment. The Pipeline and Cyber Modal Threat Assessments are issued on a semiannual basis; TSA Intelligence and Analysis may also issue additional situation-based products on emerging threats. The Pipeline Modal and Cyber Modal Threat Assessments evaluate, respectively, physical and cyber threats to pipelines. The pipeline modal threat assessment evaluates terrorist threats to hazardous liquid and natural gas pipelines, and the cyber modal threat assessment evaluates cyber threats to transportation, including pipelines. 
Both assessments specifically analyze the primary threat actors, their capabilities, and activities—including attacks occurring internationally—as well as other characteristics of threat. The Transportation Sector Security Risk Assessment assesses threat, vulnerability, and consequence for various attack scenarios across the five transportation modes for which TSA is responsible. The scenarios define a type of threat actor—including homegrown violent extremists and transnational extremists, such as al Qaeda and its affiliates—a target, and an attack mode. For example, a scenario might assess the risk of attacks using varying sizes of improvised explosive devices on pipeline system assets. As part of the assessment process, TSA engages with subject matter experts from TSA and industry stakeholder representatives to compile vulnerabilities for each mode, and TSA analyzes both direct and indirect consequences of the various attack scenarios. According to Pipeline Security Branch officials, the assessments produced by TSA’s Intelligence and Analysis provide key information to inform the pipeline security program’s efforts. TSA Conducts Pipeline Security Reviews to Assess Implementation of Pipeline Guidelines, but Does Not Have a Strategic Workforce Plan to Address Staffing Challenges According to TSA officials, TSA conducts pipeline security reviews— Corporate Security Reviews (CSRs) and Critical Facility Security Reviews (CFSRs)—to assess pipeline vulnerabilities and industry implementation of TSA’s Pipeline Security Guidelines. However, as shown by Figure 7 below, the number of CSRs and CFSRs completed by TSA has varied during the last five fiscal years, ranging from zero CSRs conducted in fiscal year 2014 to 23 CSRs conducted in fiscal year 2018, as of July 31, 2018. TSA officials reported that staffing limitations have prevented TSA from conducting more reviews. As shown in table 3, TSA Pipeline Security Branch staffing levels (excluding contractor support) have varied significantly over the past 9 years ranging from 14 full-time equivalents (FTEs) in fiscal years 2012 and 2013 to one FTE in fiscal year 2014. They stated that, while contractor support has assisted with conducting CFSRs, there were no contractor personnel providing CSR support from fiscal years 2010 through 2017, but that has now increased to two personnel in fiscal year 2018. TSA prioritizes reviewing and collecting information on the nation’s top 100 critical pipeline systems. According to TSA officials, they would need to conduct 46 CSRs in order to review the top 100 critical pipeline systems. In July 2018, TSA officials stated that TSA’s current target was to assess each pipeline company every 2 to 3 years; this would equate to about 15 to 23 CSRs per year. TSA officials stated that they expect to complete 20 CSRs and 60 CFSRs per fiscal year with Pipeline Security Branch employees and contract support, and have completed 23 CSRs through July 2018 for fiscal year 2018. Given the ever-increasing cybersecurity risks to pipeline systems, ensuring that the Pipeline Security Branch has the required cybersecurity skills to effectively evaluate pipeline systems’ cybersecurity is essential. Pipeline operators we interviewed emphasized the importance of cybersecurity skills among TSA staff. 
Specifically, 6 of the 10 pipeline operators and 3 of the 5 industry representatives we interviewed reported that the level of cybersecurity expertise among TSA staff and contractors may challenge the Pipeline Security Branch's ability to fully assess the cybersecurity portions of its security reviews. TSA officials stated that Security Policy and Industry Engagement staff are working with DHS's National Protection and Programs Directorate to help address cyber-related needs, including identifying specific cybersecurity skills and competencies required for the pipeline security program. The officials were uncertain, however, whether TSA would use contractor support or support from the National Protection and Programs Directorate to provide identified skills and competencies. TSA officials also stated that Security Policy and Industry Engagement staff work with TSA's human resource professionals to identify critical skills and competencies needed for Pipeline Security Branch personnel, and help the workforce maintain professional expertise by providing training and education for any identified skill or competency gaps.

Our previous work has identified principles that a strategic workforce planning process should follow, including developing strategies tailored to address gaps in number, deployment, and alignment of human capital approaches for enabling and sustaining the contributions of all critical skills and competencies. Workforce planning efforts, linked to an agency's strategic goals and objectives, can enable it to remain aware of and be prepared for its needs, including the size of its workforce, its deployment across the organization, and the knowledge, skills, and abilities needed for it to pursue its mission. Agencies should consider how hiring, training, staff development, performance management, and other human capital strategies can be aligned to eliminate gaps and improve the long-term contribution of skills and competencies identified as important for mission success.

TSA has not established a workforce plan for its Security Policy and Industry Engagement office or its Pipeline Security Branch that identifies staffing needs and skill sets, such as the required level of cybersecurity expertise among TSA staff and contractors. When asked for TSA strategic workforce planning documents used to inform staffing allocations related to the pipeline security program, TSA officials acknowledged they do not have a strategic workforce plan. Rather, according to these officials, TSA determines agency-level staffing allocations through the Planning, Programming, Budgeting and Execution process, which is used to decide policy, strategy, and the development of personnel and capabilities to accomplish anticipated missions. According to TSA officials, when they use this process they look at existing resources and then set priorities based on the TSA Administrator's needs. However, a strategic workforce plan allows an agency to identify and prepare for its needs, such as the size of its workforce, its deployment across the organization, and the knowledge, skills, and abilities needed to pursue its mission. TSA officials stated that the agency has a detailed allocation plan for strategically aligning resources to screen passengers at TSA-regulated airports, but not for the entire agency.
By establishing a strategic workforce plan, TSA can help ensure it has identified the knowledge, skills, and abilities that the future workforce of TSA's Pipeline Security Branch may need in order to meet its mission of reducing pipeline systems' vulnerabilities to physical and cybersecurity risks, especially in a dynamic and evolving threat environment. Further, as greater emphasis is placed on cybersecurity, determining the long-term staffing needs of the Pipeline Security Branch will be essential. Furthermore, a workforce plan could enable TSA to determine the number of personnel it needs to meet its stated goals for conducting CSRs and CFSRs.

TSA Calculates Relative Risk of Pipeline Systems, but Its Ranking Tool Does Not Include Current Data or Align with DHS Priorities to Help Prioritize Security Reviews

After TSA identifies the top 100 critical pipeline systems based on throughput, the Pipeline Security Branch uses the Pipeline Relative Risk Ranking Tool (risk assessment), which it developed in 2007, to assess various security risks of those systems. We previously reported, in 2010, that the Pipeline Security Branch was the first of TSA's surface transportation modes to develop a risk assessment model that combined all three components of risk—threat, vulnerability, and consequence—to generate a risk score. The risk assessment generates a risk score for each of the 100 most critical pipeline systems and ranks them according to risk. The risk assessment calculates threat, vulnerability, and consequence for each pipeline system based on variables such as the amount of throughput in the pipeline system and the number of critical facilities. The risk assessment combines data collected from pipeline operators, as well as other federal agencies, such as the Departments of Transportation and Defense, to generate the risk score. However, the last time the Pipeline Security Branch calculated relative risk among the top 100 critical pipeline systems using the risk assessment was in 2014.

Pipeline Security Branch officials told us that they use the pipeline risk assessment to rank relative risk of the top 100 critical pipeline systems, and the standard operating procedures for conducting CSRs state that the results of the risk ranking are the primary factor considered when prioritizing corporate security reviews of pipeline companies. According to Pipeline Security Branch officials, the risk assessment has not changed since 2014 because the Pipeline Security Branch is still conducting CSRs based on the 2014 ranking of pipeline systems. As outlined in table 4 below, we identified several factors that likely limit the usefulness of the current risk assessment in calculating threat, vulnerability, and consequence to allow the Pipeline Security Branch to effectively prioritize reviews of pipeline systems. For example, because the risk assessment has not changed since 2014, information on threat may be outdated. Additionally, sources of data and underlying assumptions and judgments regarding certain threat and vulnerability inputs to the assessment are not fully documented. For example, threats to cybersecurity are not specifically accounted for in the description of the risk assessment methodology, making it unclear whether cybersecurity is part of the assessment's threat factor.
Further, the risk assessment does not include information that is consistent with the NIPP and other DHS priorities for critical infrastructure risk mitigation, such as information on natural hazards and the ability to measure risk reduction (feedback data). According to Pipeline Security Branch officials, the risk ranking assessment is not intended to be a fully developed risk model detailing all pipeline factors influencing risk. Rather, officials said they are primarily interested in assessing risk data that impact security. However, because TSA's Pipeline Security Program is designed to enhance the security preparedness of pipeline systems, incorporating additional factors that enhance security into its risk calculation would better align its efforts with PPD-21. For example, PPD-21 calls for agencies to integrate and analyze information to prioritize assets and manage risks to critical infrastructure, as well as anticipate interdependencies and cascading impacts. For a more detailed discussion of the shortfalls we identified, refer to appendix II.

TSA's Pipeline Risk Assessment Has Not Been Peer Reviewed to Help Validate the Data and Methodology

In addition to the shortfalls identified above, the risk assessment has not been peer reviewed since it was developed in 2007. In our past work, we reported that independent, external peer reviews are a best practice in risk management and that independent expert review panels can provide objective reviews of complex issues. According to the National Research Council of the National Academies, external peer reviews should, among other things, address the structure of the assessment, the types and certainty of the data, and how the assessment is intended to be used. The National Research Council has also recommended that DHS improve its risk analyses for infrastructure protection by validating the assessments and submitting them to independent, external peer review.

Other DHS components have implemented our prior recommendations to conduct peer reviews of their risk assessments. For example, in April 2013, we reported on DHS's management of its Chemical Facility Anti-Terrorism Standards (CFATS) program and found that the approach used to assess risk did not consider all of the elements of consequence, threat, and vulnerability associated with a terrorist attack involving certain chemicals. The Infrastructure Security Compliance Division, which manages the CFATS program, conducted a multiyear effort to improve its risk assessment methodology, an effort that included commissioning a peer review by the Homeland Security Studies and Analysis Institute, which resulted in multiple recommendations. As part of the implementation of some of the peer review's recommendations, DHS conducted peer reviews and technical reviews with government organizations and facility owners and operators, and worked with Sandia National Laboratories to verify and validate the CFATS program's revised risk assessment methodology, which was completed in January 2017.

According to Pipeline Security Branch officials, they are considering updates to the risk assessment methodology, including changes to the vulnerability and consequence factors. These officials said that the risk assessment had been reviewed by industry experts within the past 18 months and that they consider input from several federal partners, including DHS, DOT, and the Department of Defense.
Officials also said they will consider input from industry experts and federal partners while working on updating the risk assessment. However, most of the proposed changes to the risk assessment methodology officials described are ones that have been deliberated since our last review in 2010. Therefore, an independent, external peer review would provide the opportunity for integration and analysis of additional outside expertise across the critical infrastructure community. While independent, external peer reviews cannot ensure the success of a risk assessment approach, they can increase the probability of success by improving the technical quality of projects and the credibility of the decision-making process. According to the National Research Council of the National Academies, independent, external peer reviews should include validation and verification to ensure that the structure of the risk assessment is both accurate and reliable. Thus, an independent, external peer review would provide better assurance that the Pipeline Security Branch can rank relative risk among pipeline systems using the most comprehensive and accurate threat, vulnerability, and consequence information. TSA Has Established Performance Measures, but Limitations Hinder TSA’s Ability to Determine Pipeline Security Program Effectiveness TSA has established performance measures, as well as databases to monitor pipeline security reviews and analyze their results. However, weaknesses in its performance measures and its efforts to record pipeline security review recommendations limit its ability to determine the extent that its pipeline security program has reduced pipeline sector risks. Furthermore, we identified data reliability issues in the information that TSA collects to track the status of pipeline security review recommendations, such as missing data, inconsistent data entry formats, and data entry errors. TSA Has Established Performance Measures but Faces Challenges in Assessing the Effectiveness of Its Efforts to Reduce Pipeline Security Risks TSA has three sets of performance measures for its pipeline efforts: the Pipeline Security Plan in the 2018 Biennial National Strategy for Transportation Security (NSTS), a management measure in the DHS fiscal year 2019 congressional budget justification, and summary figures in their CSR and CFSR databases. As a result of our 2010 work, TSA established performance measures and linked them to Pipeline Security Plan goals within the Surface Security Plan of the 2018 NSTS. See table 5 below for the 2018 NSTS Pipeline Security Plan performance measures. As shown in table 6 below, DHS also included a management measure in its fiscal year 2019 congressional budget justification to track the annual number of completed pipeline security reviews. Finally, TSA Pipeline Security Branch officials said they use summary figures in the CFSR status database and the CSR goals and priorities database as performance measures. For example, these include the percentage of CFSR recommendations implemented and the average percentage compliance with the guidelines by fiscal year. We previously found that results-oriented organizations set performance goals to clearly define desired program outcomes and develop performance measures that are clearly linked to the performance goals. Performance measures should focus on whether a program has achieved measurable standards toward achieving program goals, and allow agencies to monitor and report program accomplishments on an ongoing basis. 
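As a simple, hypothetical illustration of what a measure with a measurable target and year-by-year trend data can look like (the recommendation counts and the 80 percent goal below are invented for illustration and are not TSA data), consider a measure tracking the percentage of security review recommendations implemented each fiscal year against an annual goal:

```python
# Hypothetical illustration of a performance measure with a measurable target
# and year-by-year (baseline and trend) results. Figures are not TSA data.

TARGET_PERCENT = 80.0  # hypothetical annual goal

recommendations = {
    "FY2016": {"made": 150, "implemented": 96},
    "FY2017": {"made": 140, "implemented": 105},
    "FY2018": {"made": 160, "implemented": 136},
}

for year, counts in recommendations.items():
    pct = 100.0 * counts["implemented"] / counts["made"]
    status = "meets target" if pct >= TARGET_PERCENT else "below target"
    print(f"{year}: {pct:.1f}% implemented ({status})")
# FY2016: 64.0% implemented (below target)
# FY2017: 75.0% implemented (below target)
# FY2018: 85.0% implemented (meets target)
```

A measure reported this way has an explicit numerical goal and comparable results across years, which is the kind of baseline and trend information discussed below.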
Our previous work on performance metrics identified 10 attributes of effective performance. Table 7 identifies each key attribute of effective performance measures along with its definition. We evaluated the current performance measures included in the 2018 NSTS, the DHS fiscal year 2019 congressional budget justification, the CSR goals and priorities database, and the CFSR status database related to TSA’s Pipeline Security Branch. We primarily focused on key attributes which could be applied to individual measures. These include clarity, linkage, measurable targets, objectivity, reliability, and baseline and trend data. Our prior work on performance measurement found that all performance measure attributes are not equal and failure to have a particular attribute does not necessarily indicate that there is a weakness in that area or that the measure is not useful; rather, it may indicate an opportunity for further refinement. Based on our evaluation, the TSA-identified measures do not possess attributes that we have identified as being key to successful performance measures. As a result, TSA cannot fully determine the extent to which the Pipeline Security Branch has achieved desired outcomes, including the effectiveness of its efforts to reduce risks to pipelines. Specifically, many of TSA’s measures cover agency goals and mission, but they generally lack clarity and measurable targets, provide significantly overlapping information, and do not include baseline and trend data. Clarity. The pipeline-related measures in the 2018 NSTS are not clear because they do not describe the methodology used to calculate them, and the names and definitions are not clearly described. For example, NSTS goal 1 includes an objective to conduct training of employees responding to terrorist attacks. The desired outcome is to improve the capability of industry employees to respond and recover from terrorist attacks. However, the performance measure is the percentage of critical pipeline systems implementing the TSA Pipeline Security Guidelines. It is not clear if this measure is specific to the sections of the guidelines related to employee training or overall implementation of the guidelines. The CFSR status database measures include the percentage of recommendations implemented by topic, such as “Site Specific Security Measures,” “Signage,” or “Miscellaneous.” However, the database does not specifically define these topics or explain the methodology for calculating the measures. Unclear measures could be confusing and misleading to users. Core program activities. The pipeline-related measures in the 2018 NSTS cover some of the agency’s core program activities, such as conducting security exercises with the pipeline industry and providing intelligence and information products to the industry. However, the NSTS Pipeline Security Plan measures do not specifically include some core program activities, such as updating the TSA Pipeline Security Guidelines or the results of conducting CSRs and CFSRs in order to collect the information necessary for the existing performance measures. The CSR goals and priorities database and the CFSR status database include measures intended to track some of the results of pipeline security reviews, such as the average percentage compliance with the guidelines by fiscal year and the percentage of CFSR recommendations implemented. If core program activities are not covered, there may not be enough information available in those areas to managers and stakeholders. Limited overlap. 
The pipeline-related measures in the 2018 NSTS do not have limited overlap. As discussed previously, four of the five NSTS measures are based on the percentage of critical pipeline systems implementing TSA's Pipeline Security Guidelines. The management measure is based on the number of completed pipeline security reviews. The CFSR status database measures are based on the percentage of recommendations implemented overall and by groups. Finally, the CSR goals and priorities database measures are based on the average compliance percentage of companies that had CSRs conducted in fiscal years 2016 and 2017. This is similar to four of the five NSTS measures. Significantly overlapping measures may lead to redundant, costly information that does not add value for TSA management.

Linkage. The pipeline-related measures in the 2018 NSTS generally exhibited this key attribute. For example, all of the NSTS measures were arranged by agency strategic goals and risk-based priorities. However, the management measure in DHS's fiscal year 2019 congressional budget justification and the CFSR status database measures did not specify the TSA goals and priorities to which they were aligned. If measures are not aligned with division and agency-wide goals and mission, the behaviors and incentives created by these measures do not support achieving those goals or mission.

Measurable target. TSA's measures generally did not include measurable targets in the form of a numerical goal, and none of the NSTS measures had measurable targets. For example, the NSTS measure under the Security Planning priority, which tracks the percentage of critical pipeline systems implementing TSA's Pipeline Security Guidelines, does not state what specific percentages would be considered an improvement in industry security plans. However, the management measure did include target numbers of pipeline security reviews by fiscal year. Neither the CFSR status database measures nor the CSR goals and priorities database measures included measurable targets. Without measurable targets, TSA cannot tell if performance is meeting expectations.

Objectivity. Because the pipeline-related measures in the 2018 NSTS, the CFSR status database, and the CSR goals and priorities database generally lack clarity and measurable targets, TSA cannot ensure that its measures are free from bias or manipulation and, therefore, cannot ensure that they are objective. If measures are not objective, the results of performance assessments may be systematically overstated or understated.

Reliability. Because the pipeline-related measures in the 2018 NSTS, the CFSR status database, and the CSR goals and priorities database generally lack clarity, measurable targets, and baseline and trend data, it is not clear whether TSA's measures produce the same result under similar conditions; therefore, the pipeline-related measures cannot be considered reliable. If measures are not reliable, reported performance data may be inconsistent and add uncertainty.

Baseline and trend data. TSA's measures generally did not include baseline and trend data. For example, none of the NSTS measures included past results or compared them to measurable targets. TSA officials were unable to identify measures or goals to assess the extent to which pipeline operators have fully implemented the guidelines or increased pipeline security, but did say developing a feedback mechanism to measure progress in closing vulnerability gaps was important.
However, the management measure did include the number of completed pipeline security reviews for each fiscal year from 2014 through 2017, as well as numerical goals. The CFSR status database includes information on CFSRs conducted from May 22, 2012, through June 29, 2017, but the measures are calculated for the entire time period rather than year-by-year. The CSR goals and priorities database measures include percentage compliance with the guidelines for CSRs conducted in fiscal years 2016 and 2017, as well as a combined measure. However, baseline and trend data are not tracked or reported in either database. Collecting, tracking, developing, and reporting baseline and trend data allows agencies to better evaluate progress being made and whether or not goals are being achieved. Pipeline Security Branch officials explained that in addition to the measures reported in the 2018 NSTS Pipeline Security Plan, they primarily rely on measures assessing CSR and CFSR implementation for assessing the value of its pipeline security program. TSA officials reported that they collect and analyze data and information collected from CSRs and CFSRs to, among other things, determine strengths and weaknesses at critical pipeline facilities, areas to target for risk reduction strategies, and pipeline industry implementation of the voluntary Pipeline Security Guidelines. For example, TSA officials reported that they analyzed information from approximately 734 CFSR recommendations that were made during fiscal years 2012 through 2016. They found that pipeline operators had made the strongest improvements in security training, public awareness outreach and law enforcement coordination, and site specific security measures. The most common areas in need of improvement were 24x7 monitoring, frequency of security vulnerability assessments, and proper signage. However, as described above, we found those measures also did not comport with key attributes for successful measures and we report below on reliability concerns for underlying data supporting those measures. In addition, while the Pipeline Security Branch may not rely on the measures included in the 2018 NSTS Pipeline Security Plan and the fiscal year 2019 congressional budget justification, they are important for reporting the status of pipeline security efforts to TSA as a whole and to external stakeholders such as Congress. Taking steps to ensure that the pipeline security program performance measures exhibit key attributes of successful performance measures could allow TSA to better assess the program’s effectiveness at reducing pipeline physical and cybersecurity risks. This could include steps such as modifying its suite of measures so they are clear, have measurable targets, and add baseline and trend data. Further examples include the following: Adding measurable targets consisting of numerical goals could allow TSA to better determine if the pipeline security program is meeting expectations. For example, measurable targets could be added to TSA’s existing measures by developing annual goals for the percentage of recommendations implemented to the CFSR status database and then reporting annual results. To make measures clearer, TSA could verify that each measure has a clearly stated name, definition, and methodology for how the measure is calculated. 
For example, the NSTS objective for security training mentioned above could have more specific language explaining how the measure is calculated and whether it applies to pipeline operators’ implementation of the training-related portions of the TSA Pipeline Security Guidelines or overall implementation. Finally, adding baseline and trend data could allow TSA to identify, monitor, and report changes in performance and help ensure that performance is viewed in context. For example, the NSTS measures, CFSR status database measures, and CSR goals and priorities database measures could have annual results from prior years. This could help TSA and external stakeholders evaluate the effectiveness of the pipeline security program and whether it is making progress toward its goals. TSA Does Not Track the Implementation Status of Past CSR Recommendations, and Supporting Data Are Not Sufficiently Reliable According to TSA officials, the primary means for assessing the effectiveness of the agency’s efforts to reduce pipeline security risks is through conducting pipeline security reviews— Corporate Security Reviews (CSRs) and Critical Facility Security Reviews (CFSRs). However, TSA has not tracked the status of CSR recommendations for over 5 years and related security review data are not sufficiently reliable. When conducting CSRs and CFSRs, TSA staff makes recommendations to operators, if appropriate. For example, a CSR recommendation might include a suggestion to conduct annual security-related drills and exercises, and a CFSR recommendation might include a suggestion to install barbed wire on the main gate of a pipeline facility. In response to recommendations that we made in our 2010 report, TSA developed three databases to track CSR and CFSR recommendations and their implementation status by pipeline facility, system, operator, and product type. In addition, the agency recently developed a fourth database to collect and analyze information gathered from pipeline operators’ responses to CSR questions. TSA officials reported that they use this database to assess the extent that TSA’s pipeline security program has met NSTS goals and Pipeline Security Branch priorities. TSA officials stated that they use the CSR goals and priorities database for follow-up on recommendations, indications of improvement in pipeline security, and as an input into TSA performance goals and measures, including the performance measures for the 2018 NSTS Pipeline Security Plan. We found several problems with the databases that indicate that the pipeline security program data are not sufficiently reliable and do not provide quality information that is current, complete, and accurate. First, the CSR recommendations database only included information for reviews conducted from November 2010 through February 2013. TSA officials stated that the agency stopped capturing CSR recommendations and status information in 2014. A TSA official stated that one factor was that the pipeline staffing level was one FTE in fiscal year 2014. However, the Pipeline Security Branch did not resume entering CSR recommendation-related information when staffing levels rose to 6 FTEs in the following year and beyond. As a result, TSA is missing over 5 years of data for the recommendations it made to pipeline operators when conducting CSRs. The agency collected some information from CSRs conducted in fiscal years 2016 and 2017 in the separate CSR goals and priorities database. 
However, this database does not include all of the information that TSA collects when conducting CSRs. Specifically, the CSR goals and priorities database does not state which companies were reviewed, what specific recommendations were made, or the current status of those recommendations, and only records operators' responses to 79 of the 222 CSR questions. Second, our review identified instances of missing data, inconsistent data entry formats, and data entry errors in the four databases. For example:

The CSR recommendations database had missing data in all 13 of the included fields, and a data entry error shifted 50 observations into the wrong fields, affecting both the Status Date and Completion Code fields.

The CSR goals and priorities database had seven entries with inconsistent data formatting, and five of those entries were not taken into account when calculating summary figures.

The CFSR recommendations database had missing data in 3 of 9 fields. There were also inconsistent data entry formats in 3 fields.

The CFSR status database had missing data in 7 of 29 fields and inconsistent data entry formats in 4 fields.

Finally, TSA has not documented its data entry and verification procedures, such as in a data dictionary or user manual, and does not have electronic safeguards for out-of-range or inconsistent entries for any of the databases it uses to track the status of CSR or CFSR recommendations and analyze operator responses to the CSR. TSA Pipeline Security Branch officials told us that they had not documented data entry and verification procedures and did not have electronic safeguards for two reasons. First, the officials stated that the databases are small and maintained in a commercial spreadsheet program that does not allow for electronic safeguards. However, based on our review of the databases, the spreadsheet program does allow for a variety of electronic safeguards. For example, entries can be restricted to only allow selections from a drop-down list or to only allow dates to be entered. Second, only a small number of TSA employees enter information into these databases. TSA officials explained that typically one TSA employee is responsible for entering information from pipeline security reviews, and another individual, usually whoever conducted the review, is tasked to verify the accuracy of the data entered. As a result, according to the officials, any errors would be self-evident and caught during these TSA employees' reviews. Our work has emphasized the importance of quality information for management to make informed decisions and evaluate agencies' performance in achieving key objectives and addressing risks. The Standards for Internal Control in the Federal Government states that management should use quality information to achieve agency objectives, where "quality" means, among other characteristics, current, complete, and accurate. In addition, DHS's Information Quality Guidelines state that all DHS component agencies should treat information quality as integral to every step of the development of information, including creation, collection, maintenance, and dissemination. The guidelines also state that agencies should substantiate the quality of the information disseminated through documentation or other appropriate means. Without current, complete, and accurate information, it is difficult for TSA to evaluate the performance of the pipeline security program.
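The verification steps and electronic safeguards described above do not require specialized systems. As a rough illustration, the sketch below uses hypothetical records and field names (not TSA's actual databases or fields) to flag missing values, status entries outside a drop-down-style list of allowed values, and dates that do not match a single accepted format.

# Rough illustration with hypothetical records and field names; not TSA's
# databases. Flags missing values, status codes outside an allowed list
# (a drop-down-style restriction), and inconsistently formatted dates.
from datetime import datetime

ALLOWED_STATUS = {"Open", "In Progress", "Closed"}  # allowed entries (hypothetical)
DATE_FORMAT = "%m/%d/%Y"                            # single accepted date format

records = [  # hypothetical recommendation-tracking entries
    {"system": "Pipeline A", "status": "Closed", "status_date": "06/29/2017"},
    {"system": "Pipeline B", "status": "closed", "status_date": "2017-06-29"},
    {"system": "Pipeline C", "status": "", "status_date": "07/15/2017"},
]

for number, record in enumerate(records, start=1):
    problems = [f"missing {field}" for field, value in record.items() if not value]
    if record["status"] and record["status"] not in ALLOWED_STATUS:
        problems.append(f"status '{record['status']}' is not an allowed value")
    try:
        datetime.strptime(record["status_date"], DATE_FORMAT)
    except ValueError:
        problems.append(f"date '{record['status_date']}' is not in MM/DD/YYYY format")
    if problems:
        print(f"Record {number}: " + "; ".join(problems))

Checks of this kind, together with documented data entry and verification procedures, would address the types of missing and inconsistently formatted entries we identified.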
Until TSA monitors and records the status of these reviews’ recommendations, it will be hindered in its efforts to determine whether its recommendations are leading to significant reduction in risk. By entering information on CSR recommendations and monitoring and recording their status, developing written documentation of its data entry and verification procedures and electronic safeguards, and improving the quality of its pipeline security program data, TSA could better ensure it has the information necessary to effectively monitor pipeline operators’ progress in improving their security posture, and evaluate its pipeline security program’s effectiveness in reducing security risks to pipelines. Conclusions A successful pipeline attack could have dire consequences on public health and safety, as well as the U.S. economy. Recent coordinated campaigns by environmental activists to disrupt pipeline operations, and the successful attempts by nation-state actors to infiltrate and obtain sensitive information from pipeline operators’ business and operating systems, demonstrate the dynamic and continuous threat to the security of our nation’s pipeline network. To help ensure the safety of our pipelines throughout the nation, it is important for TSA to address weaknesses in the management of its pipeline security program. TSA’s Pipeline Security Branch revised its security guidelines in March 2018 to, among other things, reflect the dynamic threat environment and incorporate NIST’s Cybersecurity Framework cybersecurity principles and practices. However, without a documented process defining how frequently TSA is to review and, if deemed necessary, revise its guidelines, TSA cannot ensure that its guidelines reflect the latest known standards and best practices for physical and cybersecurity, or address the persistent and dynamic security threat environment currently facing the nation’s pipeline system. Further, without clearly defined criteria for determining pipeline facilities’ criticality, TSA cannot ensure that pipeline operators are applying guidance uniformly and that all of the critical facilities across the pipeline sector have been identified; or that their vulnerabilities have been identified and addressed. TSA could improve its ability to conduct pipeline security reviews and the means that it uses to prioritize which pipeline systems to review based on their relative risk ranking. Establishing a strategic workforce plan could help TSA ensure that it has identified the necessary skills, competencies, and staffing allocations that the Pipeline Security Branch needs to carry out its responsibilities, including conducting security reviews of critical pipeline companies and facilities, as well as their cybersecurity posture. Better considering threat, vulnerability, and consequence elements in its risk assessment and incorporating an independent, external peer review in its process would provide more assurance that the Pipeline Security Branch ranks relative risk among pipeline systems using comprehensive and accurate data and methods. Recommendations for Executive Action We are making 10 recommendations to TSA: The TSA Administrator should direct the Security Policy and Industry Engagement’s Surface Division to implement a documented process for reviewing, and if deemed necessary, for revising TSA’s Pipeline Security Guidelines at regular defined intervals. 
(Recommendation 1) The TSA Administrator should direct the Security Policy and Industry Engagement’s Surface Division to clarify TSA’s Pipeline Security Guidelines by defining key terms within its criteria for determining critical facilities. (Recommendation 2) The TSA Administrator should develop a strategic workforce plan for its Security Policy and Industry Engagement’s Surface Division, which could include determining the number of personnel necessary to meet the goals set for its Pipeline Security Branch, as well as the knowledge, skills, and abilities, including cybersecurity, that are needed to effectively conduct CSRs and CFSRs. (Recommendation 3) The TSA Administrator should direct the Security Policy and Industry Engagement’s Surface Division to update the Pipeline Relative Risk Ranking Tool to include up-to-date data to ensure it reflects industry conditions, including throughput and threat data. (Recommendation 4) The TSA Administrator should direct the Security Policy and Industry Engagement’s Surface Division to fully document the data sources, underlying assumptions and judgments that form the basis of the Pipeline Relative Risk Ranking Tool, including sources of uncertainty and any implications for interpreting the results from the assessment. (Recommendation 5) The TSA Administrator should direct the Security Policy and Industry Engagement’s Surface Division to identify or develop other data sources relevant to threat, vulnerability, and consequence consistent with the NIPP and DHS critical infrastructure risk mitigation priorities and incorporate that data into the Pipeline Relative Risk Ranking Tool to assess relative risk of critical pipeline systems, which could include data on prior attacks, natural hazards, feedback data on pipeline system performance, physical pipeline condition, and cross-sector interdependencies. (Recommendation 6) The TSA Administrator should direct the Security Policy and Industry Engagement’s Surface Division to take steps to coordinate an independent, external peer review of its Pipeline Relative Risk Ranking Tool, after the Pipeline Security Branch completes enhancements to its risk assessment approach. (Recommendation 7) The TSA Administrator should direct the Security Policy and Industry Engagement’s Surface Division to ensure that it has a suite of performance measures which exhibit key attributes of successful performance measures, including measurable targets, clarity, and baseline and trend data. (Recommendation 8) The TSA Administrator should direct the Security Policy and Industry Engagement’s Surface Division to take steps to enter information on CSR recommendations and monitor and record their status. (Recommendation 9) The TSA Administrator should direct the Security Policy and Industry Engagement’s Surface Division to improve the quality of its pipeline security program data by developing written documentation of its data entry and verification procedures, implementing standardized data entry formats, and correcting existing data entry errors. (Recommendation 10) Agency Comments and Our Evaluation We provided a draft of this report to DHS, DOE, DOT, and FERC. DHS provided written comments, which are reproduced in appendix III. In its comments, DHS concurred with our recommendations and described actions planned to address them. DHS, DOE, DOT, and FERC also provided technical comments, which we incorporated as appropriate.
We also provided draft excerpts of this product to the American Petroleum Institute (API), the Association of Oil Pipe Lines, the American Gas Association (AGA), the Interstate Natural Gas Association of America (INGAA), the American Public Gas Association, and the selected pipeline operators that we interviewed. For those who provided technical comments, we incorporated them as appropriate. With regard to our first recommendation, that TSA implement a documented process for reviewing, and if deemed necessary, for revising its Pipeline Security Guidelines at regular defined intervals, DHS stated that TSA will implement a documented process for reviewing and revising its Pipeline Security Guidelines at regular defined intervals, as appropriate. DHS estimated that this effort would be completed by March 31, 2019. This action, if fully implemented, should address the intent of the recommendation. With regard to our second recommendation, that TSA clarify its Pipeline Security Guidelines by defining key terms within its criteria for determining critical facilities, DHS stated that TSA will clarify its Pipeline Security Guidelines by defining key terms within its criteria for determining critical facilities. DHS estimated that this effort would be completed by May 31, 2019. This action, if fully implemented, should address the intent of the recommendation. With regard to our third recommendation, that TSA develop a strategic workforce plan for its Security Policy and Industry Engagement's Surface Division, DHS stated that TSA will develop a strategic workforce plan for the division, which includes determining the number of personnel necessary to meet the goals set for the Pipeline Security Branch, as well as the knowledge, skills, and abilities, including cybersecurity, that are needed to effectively conduct CSRs and CFSRs. DHS estimated that this effort would be completed by June 30, 2019. This action, if fully implemented, should address the intent of the recommendation. With regard to our fourth recommendation, that TSA update the Pipeline Relative Risk Ranking Tool to include up-to-date data in order to ensure it reflects industry conditions, including throughput and threat data, DHS stated that TSA will update the Pipeline Relative Risk Ranking Tool to include up-to-date data in order to ensure it reflects industry conditions, including throughput and threat data. DHS estimated that this effort would be completed by February 28, 2019. This action, if fully implemented, should address the intent of the recommendation. With regard to our fifth recommendation, that TSA fully document the data sources, underlying assumptions, and judgements that form the basis of the Pipeline Relative Risk Ranking Tool, including sources of uncertainty and any implications for interpreting the results from the assessment, DHS stated that TSA will fully document the data sources, underlying assumptions, and judgements that form the basis of the Pipeline Relative Risk Ranking Tool. According to DHS, this will include sources of uncertainty and any implications for interpreting the results from the assessment. DHS estimated that this effort would be completed by February 28, 2019. This action, if fully implemented, should address the intent of the recommendation. 
With regard to our sixth recommendation, that TSA identify or develop other data sources relevant to threat, vulnerability, and consequence consistent with the NIPP and DHS critical infrastructure risk mitigation priorities and incorporate that data into the Pipeline Relative Risk Ranking Tool to assess relative risk of critical pipeline systems, DHS stated that TSA will identify and/or develop other sources relevant to threat, vulnerability, and consequence consistent with the NIPP and DHS critical infrastructure risk mitigation priorities. DHS also stated that TSA will incorporate that data into the Pipeline Relative Risk Ranking Tool to assess relative risk of critical pipeline systems, which could include data on prior attacks, natural hazards, feedback data on pipeline system performance, physical pipeline condition, and cross-sector interdependencies. DHS estimated that this effort would be completed by June 30, 2019. This action, if fully implemented, should address the intent of the recommendation. With regard to our seventh recommendation, that TSA take steps to coordinate an independent, external peer review of its Pipeline Relative Risk Ranking Tool, after the Pipeline Security Branch completes enhancements to its risk assessment approach, DHS stated that, after completing enhancements to its risk assessment approach, TSA will take steps to coordinate an independent, external peer review of its Pipeline Relative Risk Ranking Tool. DHS estimated that this effort would be completed by November 30, 2019. This action, if fully implemented, should address the intent of the recommendation. With regard to our eighth recommendation, that TSA ensure that the Security Policy and Industry Engagement's Surface Division has a suite of performance measures which exhibit key attributes of successful performance measures, including measurable targets, clarity, and baseline and trend data, DHS stated that TSA’s Surface Division’s Pipeline Section will develop both physical and cyber security performance measures, in consultation with pipeline stakeholders, to ensure that it has a suite of performance measures which exhibit key attributes of successful performance measures, including measurable targets, clarity, and baseline and trend data. DHS estimated that this effort would be completed by November 30, 2019. This action, if fully implemented, should address the intent of the recommendation. With regard to our ninth recommendation, that TSA take steps to enter information on CSR recommendations and monitor and record their status, DHS stated that TSA will enter information on CSR recommendations and monitor and record their status. DHS estimated that this effort would be completed by October 31, 2019. This action, if fully implemented, should address the intent of the recommendation. With regard to our tenth recommendation, that TSA take steps to improve the quality of its pipeline security program data by developing written documentation of its data entry and verification procedures, implementing standardized data entry formats, and correcting existing data entry errors, DHS stated that TSA will develop written documentation of its data entry and verification procedures, implement standardized data entry formats, and correct existing data entry errors. DHS estimated that this effort would be completed by July 31, 2019. This action, if fully implemented, should address the intent of the recommendation.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until one day from the report date. At that time, we will send copies to the appropriate congressional committees; the Secretaries of Energy, Homeland Security, and Transportation; the Executive Director of the Federal Energy Regulatory Commission; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Chris Currie at (404) 679-1875 or curriec@gao.gov, and Nick Marinos at (202) 512-9342 or marinosn@gao.gov. Key contributors to this report are listed in appendix IV. Appendix I: Federal and Industry Security Guidelines and Standards for the Pipeline Sector This appendix lists security guidance and guidance-related tools that the pipeline operators and industry association officials we interviewed identified as adopted or available in order to secure their physical and cyber operations. This list should not be considered to include all physical and cybersecurity guidance that may be available to or used by all pipeline operators, nor do all operators use all guidance listed. Appendix II: Description of Areas for Improvement in the Pipeline Security Branch’s Pipeline Relative Risk Ranking Tool The Transportation Security Administration’s (TSA) Pipeline Security Branch developed the Pipeline Relative Risk Ranking Tool (risk assessment) in 2007. The risk assessment calculates threat, vulnerability, and consequence based on variables such as the amount of throughput in the pipeline system (a consequence input). Pipeline Security Branch officials told us that they use the pipeline risk assessment to rank relative risk of the top 100 critical pipeline systems, and the standard operating procedures for conducting Corporate Security Reviews (CSR) state the results of the risk ranking are the primary factor considered when prioritizing CSRs of pipeline companies. However, we identified several factors that likely limit the usefulness of the current assessment in calculating threat, vulnerability, and consequence to allow the Pipeline Security Branch to effectively prioritize reviews of pipeline systems. For example, because the risk assessment has not changed since 2014, information on threat may be outdated. Additionally, sources of data and underlying assumptions and judgments regarding certain threat and vulnerability inputs to the assessment are not fully documented. For example, threats to cybersecurity are not specifically accounted for in the description of the risk assessment methodology, making it unclear if cybersecurity is part of the assessment’s threat factor. Further, the risk assessment does not include information that is consistent with the National Infrastructure Protection Plan (NIPP) and other Department of Homeland Security (DHS) priorities for critical infrastructure risk mitigation, such as information on natural hazards and the ability to measure risk reduction (feedback data). According to Pipeline Security Branch officials, the risk ranking assessment is not intended to be a fully developed risk model detailing all pipeline factors influencing risk. Rather, officials said they are primarily interested in assessing risk data that impacts security.
However, because TSA’s Pipeline Security Program is designed to enhance the security preparedness of pipeline systems, incorporating additional security-related factors into its risk calculation of the most critical pipeline systems would better align TSA’s efforts with Presidential Policy Directive 21 (PPD-21). For example, PPD-21 calls for agencies to integrate and analyze information to prioritize assets and manage risks to critical infrastructure, as well as anticipate interdependencies and cascading impacts. Below we present the various shortfalls in the risk assessment—outdated data, limited description of sources and methodology, and opportunities to better align with the NIPP and other DHS priorities for critical infrastructure risk mitigation—in the context of the components that comprise a risk assessment: threat, vulnerability, and consequence. Whereas in 2010 we made recommendations to improve the consequence component in the pipeline relative risk ranking tool, we have now identified shortfalls that cut across all risk components: threat, vulnerability, and consequence. Threat We identified several shortfalls in the pipeline risk assessment’s calculation of threat. First, while the risk assessment assesses consequence and vulnerability by pipeline system through use of multiple variables, it currently ranks threat for pipeline systems equally. Second, the evolving nature of threats to pipelines may not be reflected, since the risk assessment was last updated in 2014. Third, the threat calculation does not take into account natural hazards. Pipeline Security Branch officials said they currently rank threat equally across pipeline systems because they do not have granular enough threat information to distinguish threat by pipeline. However, ranking threat equally means that threat effectively has no effect on the relative risk calculation for pipeline systems. Further, this judgment is not documented in the risk assessment’s methodology. According to the NIPP, a risk assessment’s methodology must clearly document what information is used and how it is synthesized to generate a risk estimate, including any assumptions and judgments. Additionally, our analysis of the pipeline risk assessment found that it includes at least one field that TSA could use to differentiate threat by pipeline. Specifically, the risk assessment includes a field that accounts for whether a pipeline experienced a previous security threat (including failed attacks), and information provided by the Pipeline Security Branch suggests some pipeline systems have experienced such threats. However, the Pipeline Security Branch did not capture these events in the risk assessment’s calculation. Pipeline Security Branch officials said such events should be part of the threat calculation but could not account for why they were not reflected for the systems in the risk assessment. These officials also clarified that incidents such as suspicious photography or vandalism do not constitute an attack to be accounted for in the threat calculation. Documenting such assumptions, judgments, or decisions to exclude information could provide increased transparency to those expected to interpret or use the results. Pipeline Security Branch officials also said that they ranked threat equally because TSA Intelligence and Analysis data show that threats to the oil and natural gas sector have been historically low, and Intelligence and Analysis does not conduct specific threat analysis against individual pipeline systems.
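The effect of ranking threat equally can be seen in a simplified example. The sketch below is our illustration with invented scores and a generic risk formulation (risk as the product of threat, vulnerability, and consequence); it is not the Pipeline Security Branch's actual tool, weights, or data. When every system receives the same threat score, the relative ordering of systems is determined entirely by their vulnerability and consequence scores.

# Simplified illustration with invented scores; not TSA's Pipeline Relative
# Risk Ranking Tool. Demonstrates that assigning every system the same threat
# score leaves the relative ordering to vulnerability and consequence alone.

systems = {  # hypothetical (vulnerability, consequence) scores
    "System 1": (4, 9),
    "System 2": (7, 5),
    "System 3": (6, 8),
}

def rank(threat_scores):
    # Risk modeled generically here as threat x vulnerability x consequence.
    risk = {name: threat_scores[name] * v * c for name, (v, c) in systems.items()}
    return sorted(risk, key=risk.get, reverse=True)

equal_threat = {name: 5 for name in systems}                    # same score for every system
varied_threat = {"System 1": 8, "System 2": 2, "System 3": 5}   # hypothetical differentiated scores

print("Ranking with equal threat scores: ", rank(equal_threat))
print("Ranking with varied threat scores:", rank(varied_threat))

In the equal-threat case the ordering would be identical no matter what common threat value is chosen, which is the sense in which threat has no effect on the relative risk calculation.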
However, the Pipeline Security Branch has not updated the risk assessment since June 2014; therefore, the threat information it used to determine threat calculations—and decide to rank threat equally—may be outdated and not reflect the threats to the industry that have emerged in recent years. In fact, pipeline operators we interviewed indicated that the types of threats that concern pipeline operators have evolved. For example, 5 of the 10 operators we interviewed indicated that environmental activists were an increased threat to the pipeline industry because they use sabotage techniques, such as valve turning and cutting in service pipelines with blow torches, against pipelines. Additionally, 6 of 10 pipeline operators we interviewed said cyber attacks from nation-state actors were a primary threat to their industry. Further, when TSA issued its revised Pipeline Security Guidelines in March 2018, it stated that its revisions to the guidelines were made to reflect the ever-changing threat environment in both the physical and cybersecurity realms. However, threats to cybersecurity are not specifically accounted for in the description of the risk assessment methodology. Recent Pipeline Modal and Cyber Modal Threat Assessments include cyber threats to the pipeline industry, but the description of the pipeline risk assessment’s methodology does not specify what types of threat assessments (sources) are used to calculate its threat score. To better align with the guidance in the NIPP for documenting sources of information when conducting risk assessments, the Pipeline Security Branch should document the information used. Keeping the risk assessment updated with current information, as well as documenting those data sources, could help the Pipeline Security Branch ensure it is using its limited resources to review the pipeline systems with greater risk. Natural Hazard Threats to Pipelines The Transportation Systems Sector, of which pipelines are a part, is critical to the Pacific Northwest, but also at risk from natural hazards, like earthquakes. For example, according to the Department of Homeland Security, an earthquake in the Puget Sound region—which relies on the transportation of crude oil from Alaska—could cripple the ports of Seattle and Tacoma, as well as the Olympic and Williams Pipelines greatly impacting the Pacific Northwest Economic Region. Hurricanes are the most frequent disruptive natural hazard for the oil and natural gas subsector and can cause the shutdown of facilities in an area, even when the facilities themselves are not directly affected by the storms. For example, according to the U.S. Energy Information Administration, the flow of petroleum into the New York area via pipeline from the Gulf Coast relies on the ability to move it through major terminals. In August 2017, Hurricane Harvey caused major disruptions to crude oil and petroleum product supply chains, including those to New York Harbor from Houston, Texas via the Colonial Pipeline. Due to the hurricane, decreased supplies of petroleum products available for the pipeline in Houston forced Colonial Pipeline to limit operations temporarily. Finally, another shortfall in the current pipeline risk assessment methodology is that it does not account for natural hazards in its threat calculation, even though DHS’s definition of threat includes natural hazards, and security and resilience of critical infrastructure are often presented in the context of natural hazards. 
According to the NIPP, threat is a natural or manmade occurrence, individual, entity, or action that has or indicates the potential to harm life, information, operations, the environment, and/or property. As such, along with terrorism, criminal activity, and cybersecurity, natural disasters are a key element of DHS’s critical infrastructure security and resilience mission. According to Pipeline Security Branch officials, there is not sufficient historical data available that would indicate a significant impact from natural disasters on specific pipeline systems. However, we identified possible sources of data for the Pipeline Security Branch to consider. For example, a 2016 RAND Corporation study examined national infrastructure systems’ exposure to natural hazards, including pipelines. Additionally, the Federal Emergency Management Agency (FEMA) has collaborated with stakeholders to develop the National Risk Index to, among other things, establish a baseline of natural hazards risk for the United States. While there may not be historical data on natural hazard impacts for every pipeline system, consulting other sources or experts could provide regional data or analysis to build a more comprehensive threat picture to help distinguish threats by pipeline system. According to the NIPP, hazard assessments should rely not only on historical information, but also on future predictions about natural hazards to assess the likelihood or frequency of various hazards. Vulnerability We also identified multiple shortfalls in the vulnerability factors used in the risk assessment methodology, such as the potential uncertainty in the number of critical facilities and the need to incorporate a feedback mechanism to calculate overall risk reduction. Other considerations for vulnerability calculations include the physical condition of the pipeline system, cybersecurity activities, and interdependencies among sectors. The number of critical facilities a pipeline system has identified is used as an input for its vulnerability calculation in the Pipeline Security Branch’s risk assessment methodology. As discussed earlier, we identified deficiencies in TSA’s criteria for identifying critical facilities, and found that well-defined criteria and consistent application of the criteria for identifying critical facilities could improve the results of the Pipeline Security Branch’s risk assessment. Nevertheless, communicating in the risk assessment the uncertainty that may be inherent in this self-reported information would better align the risk assessment with the NIPP. Measuring Effectiveness in a Voluntary Environment According to the National Infrastructure Protection Plan, the use of performance metrics is an important step in the critical infrastructure risk management process to enable assessment of improvements in critical infrastructure security and resilience. The metrics provide a basis for the critical infrastructure community to establish accountability, document actual performance, promote effective management, and provide a feedback mechanism to inform decision making. By using metrics to evaluate the effectiveness of voluntary partnership efforts to achieve national and sector priorities, critical infrastructure partners can adjust and adapt their security and resilience approaches to account for progress achieved, as well as changes in the threat and other relevant environments.
Metrics are used to focus attention on areas of security and resilience that warrant additional resources or other changes through an analysis of challenges and priorities at the national, sector, and owner/operator levels. Metrics also serve as a feedback mechanism for other aspects of the critical infrastructure risk management approach. Another shortfall in the risk assessment is its inability to reliably measure the progress a pipeline system made in addressing vulnerability gaps between security reviews. The current risk assessment includes a CSR score as part of its vulnerability calculation, which was developed in part in response to our 2010 recommendation to use more reliable data to measure a pipeline system’s vulnerability gap. However, during our review, Pipeline Security Branch officials said they plan to remove pipeline companies’ CSR scores from the risk assessment calculations, because they and industry partners do not have confidence that the score appropriately measures a pipeline system’s vulnerability. For example, Pipeline Security Branch officials explained that pipeline companies consider security factors differently, which can lead to variation in implementing risk reduction activities and by extension lead to different CSR scores. However, removing the CSR score eliminates the only feedback mechanism in the risk assessment from a pipeline company’s actual security review conducted by the Pipeline Security Branch. The NIPP and DHS’s Risk Management fundamentals emphasize the important role that such feedback mechanisms play in risk management. Officials from the Pipeline Security Branch agree on the importance of a feedback mechanism tying results of reviews to a revised vulnerability metric, but said they need a better measure than the current CSR score which is unreliable for comparative and analytic purposes. Developing a feedback mechanism based on implementation of TSA’s Pipeline Security Guidelines could be an important input to the risk assessment’s vulnerability calculation. This information would also inform the amount of risk pipeline companies are reducing by implementing the guidelines and could be used to inform overall risk reduction. The physical and cyber environments in which the pipeline sector operates also present vulnerabilities not accounted for in the pipeline risk assessment. In recent years, DHS has listed the potential for catastrophic losses to dramatically increase the overall risk associated with failing infrastructure and highlighted risks due to climate change and natural hazards to pipelines. For example, DHS reported extreme temperatures—such as higher and lower temperatures over prolonged periods of time—increase vulnerability to the critical infrastructure by causing elements to break and cease to function. Pipelines that freeze and then rupture can affect the energy and transportation systems sectors. As noted above, according to the NIPP, a natural or man-made occurrence or action with the potential to harm life is considered a threat, whereas vulnerability is defined as a physical feature or operational attribute that renders an entity open to exploitation or susceptible to a given threat or hazard. While pipeline physical condition is typically thought of in context of safety, pipeline condition or location (such as above or below ground) could touch upon pipeline security as it relates to system vulnerability. 
For example, a pipeline system or segment of a system with a compromised physical condition due to corrosion or age could affect the system’s vulnerability to threats and affect its ability to recover from such threats by potentially increasing the time a system is offline. According to the Transportation Systems Sector-Specific Plan, vulnerabilities to damage in aging transportation infrastructure—of which pipelines are a part—are projected to increase with the continued effects of climate change. Further, according to TSA’s Pipeline Security and Incident Recovery Protocol Plan, pipeline integrity efforts—including the design, construction, operation, and maintenance of pipelines—are important to pipeline security because well-maintained, safe pipelines are more likely to tolerate a physical attack. The Pipeline Security Branch already collects information from the Pipeline and Hazardous Materials Safety Administration (PHMSA) for its risk assessment, specifically information on High Consequence Area and High Threat Urban Area mileage. By considering additional information PHMSA collects on pipeline integrity, the Pipeline Security Branch could also use the information to help pipeline operators identify security measures to help reduce the consequences related to the comparatively higher vulnerability of an aging or compromised system. This would align with the Pipeline Security Branch’s efforts to improve security preparedness of pipeline systems and could better inform its vulnerability calculations for relative risk ranking of pipeline systems. Capturing cybersecurity in the risk assessment is also an area for improvement. Pipeline Security Branch officials told us they consulted with the National Cybersecurity and Communications Integration Center to revise TSA’s Pipeline Security Guidelines to align with the National Institute of Standards and Technology (NIST) Cybersecurity Framework and that absent data specific to pipelines on their cybersecurity vulnerabilities, they are unable to include a pipelines’ vulnerability to cyber attack in the risk assessment. However, the Pipeline Security Branch recently updated the security review questions asked of pipeline operators during corporate and critical facility reviews based on the recently updated Pipeline Security Guidelines. Using these updated questions related to companies’ cybersecurity posture, the Pipeline Security Branch could collect additional information on cybersecurity vulnerabilities which could inform the risk assessment. This could be an element of the feedback mechanism described above and emphasized in the NIPP. Additionally, NIST identified several supply chain vulnerabilities associated with cybersecurity, which are not currently accounted for in TSA’s Pipeline Security Guidelines. As pipeline operators implement increasing levels of network technologies to control their systems, the Pipeline Security Branch may not be fully accounting for pipeline systems’ cybersecurity posture by not including the cybersecurity-related vulnerabilities in its risk assessment inputs. Finally, we identified shortfalls in cross-sector interdependencies, which could affect vulnerability calculations. According to the NIPP, understanding and addressing risks from cross-sector dependencies and interdependencies is essential to enhancing critical infrastructure security and resilience. 
The Pipeline Security Branch’s pipeline risk assessment currently considers the effects of a pipeline system’s ability to service assets such as major airports, the electric grid, and military bases. However, consequence is calculated based on the loss or disruption of the pipeline system’s service to these other assets and does not capture the dependency of the pipeline system on other energy sources, such as electricity. Weather events such as Gulf of Mexico hurricanes and Superstorm Sandy highlighted the interdependencies between the pipeline and electrical sectors. Specifically, according to a 2015 DHS annual report on critical infrastructure, power failures during Superstorm Sandy in 2012 closed major pipelines for 4 days, reducing regional oil supplies by 35 to 40 percent. The report goes on to say that the interconnected nature of infrastructure systems can lead to cascading impacts, which are increasing in frequency. Pipeline Security Branch officials are considering cross-sector interdependencies and said they discuss these factors with operators as they relate to system resiliency. Considering interdependencies of sectors in both directions—such as calculating the likelihood that an input like electricity could fail and cause disruptions to critical pipelines—could improve the calculations in the pipeline risk assessment. Consequence As previously discussed, the Pipeline Security Branch last calculated relative risk among the top 100 pipeline systems in 2014. When doing so, it used pipeline systems’ throughput data from 2010 to assess relative risk. According to Pipeline Security Branch officials, the amount of throughput in pipeline systems does not change substantially year to year. However, Standards for Internal Control in the Federal Government calls for management to use quality information to achieve the entity’s objectives, including using relevant data from reliable sources obtained in a timely manner. The Pipeline Security Branch uses throughput data as a consequence factor in the risk assessment to determine a pipeline system’s relative risk score. Throughput changes could affect relative risk ranking and the Pipeline Security Branch’s ability to accurately prioritize reviews based on relative risk. Appendix III: Comments from the Department of Homeland Security Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Chris P. Currie at (404) 679-1875 or curriec@gao.gov Nick Marinos at (202) 512-9342 or marinosn@gao.gov. Staff Acknowledgments In addition to the contacts named above, Ben Atwater, Assistant Director; Michael W. Gilmore, Assistant Director; and Michael C. Lenington, Analyst-in-Charge, managed this assignment. Chuck Bausell, David Blanding, Dominick Dale, Eric Hauswirth, Kenneth A. Johnson, Steve Komadina, Susanna Kuebler, Thomas Lombardi, David Plocher, and Janay Sam made significant contributions to this report.
Why GAO Did This Study More than 2.7 million miles of pipeline transport and distribute oil, natural gas, and other hazardous products throughout the United States. Interstate pipelines run through remote areas and highly populated urban areas, and are vulnerable to accidents, operating errors, and malicious physical and cyber-based attack or intrusion. The energy sector accounted for 35 percent of the 796 critical infrastructure cyber incidents reported to DHS from 2013 to 2015. Several federal and private entities have roles in pipeline security. TSA is primarily responsible for the oversight of pipeline physical security and cybersecurity. GAO was asked to review TSA's efforts to assess and enhance pipeline security and cybersecurity. This report examines, among other objectives: (1) the guidance pipeline operators reported using to address security risks and the extent that TSA ensures its guidelines reflect the current threat environment; (2) the extent that TSA has assessed pipeline systems' security risks; and (3) the extent TSA has assessed its effectiveness in reducing pipeline security risks. GAO analyzed TSA documents, such as its Pipeline Security Guidelines ; evaluated TSA pipeline risk assessment efforts; and interviewed TSA officials, 10 U.S. pipeline operators—selected based on volume, geography, and material transported—and representatives from five industry associations. What GAO Found Pipeline operators reported using a range of guidelines and standards to address physical and cybersecurity risks, including the Department of Homeland Security's (DHS) Transportation Security Administration's (TSA) Pipeline Security Guidelines , initially issued in 2011. TSA issued revised guidelines in March 2018 to reflect changes in the threat environment and incorporate most of the principles and practices from the National Institute of Standards and Technology's Framework for Improving Critical Infrastructure Cybersecurity . However, TSA's revisions do not include all elements of the current framework and TSA does not have a documented process for reviewing and revising its guidelines on a regular basis. Without such a documented process, TSA cannot ensure that its guidelines reflect the latest known standards and best practices for physical security and cybersecurity, or address the dynamic security threat environment that pipelines face. Further, GAO found that the guidelines lack clear definitions to ensure that pipeline operators identify their critical facilities. GAO's analysis showed that operators of at least 34 of the nation's top 100 critical pipeline systems (determined by volume of product transported) deemed highest risk had identified no critical facilities. This may be due, in part, to the guidelines not clearly defining the criteria to determine facilities' criticality. To assess pipeline security risks, TSA conducts pipeline security reviews—Corporate Security Reviews and Critical Facility Security Reviews—to assess pipeline systems' vulnerabilities. However, GAO found that the number of TSA security reviews has varied considerably over the last several years, as shown in the table on the following page. TSA officials stated that staffing limitations have prevented TSA from conducting more reviews. Staffing levels for TSA's Pipeline Security Branch have varied significantly since fiscal year 2010 with the number of staff ranging from 14 full-time equivalents in fiscal years 2012 and 2013 to 1 in 2014. 
Further, TSA does not have a strategic workforce plan to help ensure it identifies the skills and competencies—such as the required level of cybersecurity expertise—necessary to carry out its pipeline security responsibilities. By establishing a strategic workforce plan, TSA can help ensure that it has identified the necessary skills, competencies, and staffing. GAO also identified factors that likely limit the usefulness of TSA's risk assessment methodology for prioritizing pipeline system reviews. Specifically, TSA has not updated its risk assessment methodology since 2014 to reflect current threats to the pipeline industry. Further, its sources of data and underlying assumptions and judgments regarding certain threat and vulnerability inputs are not fully documented. In addition, the risk assessment has not been peer reviewed since its inception in 2007. Taking steps to strengthen its risk assessment, and initiating an independent, external peer review would provide greater assurance that TSA ranks relative risk among pipeline systems using comprehensive and accurate data and methods. TSA has established performance measures to monitor pipeline security review recommendations, analyze their results, and assess effectiveness in reducing risks. However, these measures do not possess key attributes—such as clarity, and having measurable targets—that GAO has found are key to successful performance measures. By taking steps to ensure that its pipeline security program performance measures exhibit these key attributes, TSA could better assess its effectiveness at reducing pipeline systems' security risks. Pipeline Security Branch officials also reported conducting security reviews as the primary means for assessing the effectiveness of TSA's efforts to reduce pipeline security risks. However, TSA has not tracked the status of Corporate Security Review recommendations for the past 5 years. Until TSA monitors and records the status of these reviews' recommendations, it will be hindered in its efforts to determine whether its recommendations are leading to significant reduction in risk. What GAO Recommends GAO makes 10 recommendations to TSA to improve its pipeline security program management (many are listed on the next page), and DHS concurred. GAO recommends, among other things, that the TSA Administrator take the following actions: implement a documented process for reviewing, and if deemed necessary, for revising TSA's Pipeline Security Guidelines at defined intervals; clarify TSA's Pipeline Security Guidelines by defining key terms within its criteria for determining critical facilities; develop a strategic workforce plan for TSA's Security Policy and Industry Engagement‘s Surface Division; update TSA's pipeline risk assessment methodology to include current data to ensure it reflects industry conditions and threats; fully document the data sources, underlying assumptions and judgments that form the basis of TSA's pipeline risk assessment methodology; take steps to coordinate an independent, external peer review of TSA's pipeline risk assessment methodology; ensure the Security Policy and Industry Engagement‘s Surface Division has a suite of performance measures which exhibit key attributes of successful performance measures; and enter information on Corporate Security Review recommendations and monitor and record their status.
Background Federal acquisition regulations require certain contractors who do business with the government to maintain acceptable business systems that reduce risk to the government and taxpayer. Contractors may have up to six major business systems that require review. DOD’s acquisition regulation establishes criteria for each of the six types of contractor business systems, which are implemented by the inclusion of certain contract clauses. Where a contract includes these clauses, the contractor’s business systems generally must meet the criteria. Factors such as the type of contract and the dollar value determine whether the clauses are included in a contract (see table 1). In certain cases, the absence of an adequate system may preclude the government from using a particular contract type or may require additional oversight or analysis. For example, the FAR states that: A cost-reimbursement contract may be used only when, among other things, contractors’ accounting systems are adequate for determining costs applicable to the contracts or orders; an adequate accounting system is also required for the use of progress payments. Without an approved purchasing system, contractors may require additional oversight of their subcontracting decisions. Significant deficiencies with contractors’ estimating systems shall be considered during negotiation. Alternatively, an adequate estimating system may reduce the scope of reviews to be performed on individual proposals, expedite the negotiation process, and increase the reliability of proposals. DCMA and DCAA are responsible for providing contracting and audit support to the military departments and are responsible for conducting business system reviews, along with a host of other responsibilities (see table 2). Under DCMA’s November 2013 instruction, the final determination of adequacy for all of the contractor business systems resides with the DCMA administrative contracting officers (ACO). An ACO may have responsibility for all or a portion of a single large business or may be responsible for a number of smaller contractors within a particular region. To help inform their system determinations, an ACO can request that either DCMA or DCAA conduct business system reviews or audits when needed. Among other responsibilities, ACOs are responsible for taking actions to impose consequences when contractors do not comply with business system standards. Prior Reports by GAO, Other Accountability Organizations, and Legislative Actions Throughout the last 10 years, GAO and other accountability organizations have reported on challenges DOD faces when conducting CBS reviews or other critical contracting audits, such as incurred cost audits. Over this time Congress has also taken actions through various NDAAs to initiate changes to the CBS review process. In 2009, the Commission on Wartime Contracting and GAO highlighted significant concerns about how DOD was conducting CBS reviews at that time. For example: The Commission reported that billions of dollars in contingency- contract costs in Iraq and Afghanistan could not be verified by government auditors and that inadequate internal controls over contractor business systems hampered the government’s insight into cost errors and material misstatements. The report highlighted instances where DCMA and DCAA came to different conclusions when reviewing the same contracts and had inadequate resources to complete business system reviews. 
It also stated that DCMA was not aggressive in motivating contractors to improve their business systems because it accepted corrective action plans as sufficient progress to address deficiencies. The commission made recommendations to address each of these issues. We found issues with independence of auditors, sufficiency of evidence, and incomplete reporting of DCAA’s findings. As a result, we made 17 recommendations to DOD to help improve the quality of DCAA’s audits, most of which the agency has implemented. Since then, subsequent GAO and DOD Inspector General (IG) reports have pointed to other issues with the CBS review process and DCAA’s incurred cost audit process. Namely:

In November 2011, we found that DCAA could not complete the number of CBS reviews needed to be consistent with its guidelines because it was focused on higher priority areas—such as incurred cost audits—and, as a result, DCMA contracting officers maintained systems’ determinations as adequate even though the systems had not been audited by DCAA in a number of years. Among our recommendations, we proposed that DCMA and DCAA identify options, such as hiring external auditors, to assist in the conduct of CBS reviews until DCAA could adequately fulfill those responsibilities with its own workforce. In July 2014, DOD published a proposal to change the DFARS to allow public accounting firms to perform reviews of accounting, estimating, and material management and accounting systems. According to DPC officials, however, the department’s IG raised concerns about consistency between the proposed change and statutory and regulatory requirements for IG oversight of outside audit services. Further, the private sector expressed concerns that CBS audit criteria did not align with generally accepted accounting principles used in the private sector. As a result of these challenges, DOD did not implement the proposed regulation change.

In December 2012, we found that DCAA’s backlog of incomplete incurred cost audits was a contributing factor in DOD’s inability to close out contracts in a timely manner. To address this backlog, DCAA began implementing a new, risk-based approach that was expected to shift DCAA’s resources to focus on incurred cost audits involving high-dollar value and high risk proposals.

In October 2015, the DOD IG found that DCMA contracting officers did not always comply with requirements to report business system deficiencies and found instances where CBS determinations based on DCAA-led reviews were not reported within required timeframes. The IG concluded that this likely caused delays in correcting significant business system deficiencies and lengthened the time the government was unable to rely on data generated by those business systems.

In September 2017, we found that despite efforts by DCAA to reduce the backlog of incurred cost proposals awaiting audit, the agency was not able to meet its goals to eliminate the backlog by fiscal year 2016 and that it was unlikely to meet a revised goal of fiscal year 2018. We recommended that DCAA assess and implement options for reducing the length of time to begin incurred cost audits and establish related performance measures. DCAA concurred with these recommendations and took actions to reduce the time it takes to begin audits.
Most recently, in a January 2018 report, the Advisory Panel on Streamlining and Codifying Acquisition Regulations—commonly referred to as the Section 809 panel after the legislative provision that created it—reiterated the importance of business system internal controls. Noting that DOD's CBS reviews are untimely and inconsistent, the panel made several recommendations aimed at completing reviews, especially of accounting systems, in a more timely manner. Among these recommendations are the use of public accounting firms to supplement the DOD audit workforce, a change to accounting system review standards and criteria, and the development of new guidance for the conduct of business system reviews. During the past 10 years, Congress also enacted three provisions related to improving how DOD conducts business system reviews and incurred cost audits. Specifically, Section 893 of the NDAA for Fiscal Year 2011 directed the Secretary of Defense to initiate a program to improve contractor business systems so that the systems provide timely and reliable information. The NDAA required that this program, among other things, establish requirements for each system and a process for identifying significant deficiencies within systems. It also required that DOD identify those officials responsible for approval and disapproval of a system, and that approval or disapproval of a system would be based on whether the system has a significant deficiency. Further, the law authorized DOD to withhold up to 10 percent of contract progress payments, interim payments, and performance-based payments from certain contracts when systems are disapproved based on a significant deficiency. Contractors that require review—or "covered contractors"—were defined as those subject to the cost accounting standards. Section 893 of the NDAA for Fiscal Year 2017 amended the fiscal year 2011 NDAA provisions by (1) revising the definition of a "covered contractor" to generally mean those with government contracts subject to the cost accounting standards accounting for more than 1 percent of the contractor's total gross revenue and (2) allowing public accounting firms to conduct contractor business system assessments. Section 803 of the NDAA for Fiscal Year 2018 required DOD to be compliant with certain standards of risk and materiality in the performance of incurred cost audits for its contracts. It also required that DOD use public accounting firms to, among other things, perform a sufficient number of incurred cost audits to eliminate the incurred cost audit backlog by October 1, 2020, and to allow DCAA to allocate resources to higher-risk and more complicated audits. Figure 1 below summarizes these reports and congressional actions related to contractor business system activities over the last decade. DOD Revised Its Policies and Procedures Related to the Contractor Business System Review Process Since 2011, DOD has taken actions to (1) clarify the roles and responsibilities of DCMA and DCAA in conducting CBS reviews and consolidate the number of reviews to be performed; (2) clarify how often DOD should conduct CBS reviews; (3) establish what criteria are used to evaluate a contractor's business system; (4) establish timeframes by which ACOs are to make a determination on the adequacy of the contractors' business systems; and (5) implement the use of payment withholds for contractors that are found to have significant deficiencies in their contractor business systems. 
DCMA and DCAA officials noted that these changes were implemented primarily to address the 2011 statutory provisions. Our review of six selected contractors' business system reviews found that the whole process, from the review or audit through follow-up and resolution, can be lengthy. In three of the six selected cases we reviewed, it took 4 or more years for a contractor's system to be approved. DOD Clarified DCMA and DCAA's Roles and Responsibilities and Consolidated the Number of Business System Reviews Prior to 2011, DCAA conducted a series of 10 internal control audits on a cyclical basis, while DCMA performed more targeted testing on three systems. During that time, both DCMA and DCAA could review a contractor's purchasing or earned value management (EVM) system but would evaluate different aspects of each system. As a result, DCMA and DCAA reviewers could issue deficiency reports based on their separate reviews of the same contractor business systems for the consideration of ACOs. As reported in August 2009 by the Commission on Wartime Contracting, these overlapping reviews led to instances where DCMA and DCAA came to different conclusions about the adequacy of the same business system. To address this issue and clarify roles and responsibilities, in November 2013 DCMA established policies that guide oversight and implementation of the CBS review process, including approval responsibilities and procedures for the conduct and reporting of reviews. DCMA has separate instructions for each type of contractor business system with the exception of accounting. These separate instructions provide more details about appropriate stakeholders for specific reviews, noting particular functional experts such as offices within DCMA or DCAA that are to lead the conduct of the reviews. DCAA issued a separate memorandum in April 2012 that details changes made to accounting system reviews as a result of the NDAA for Fiscal Year 2011. Under these revised processes, DCMA now has responsibility for reviewing three contractor business systems and DCAA is responsible for the other three. In all cases, the DCMA ACO makes the final determination on whether a system is approved or disapproved. Further, the revised process consolidated the number of audits that DCAA conducts on the adequacy of the contractor's accounting system from five separate audits to one comprehensive system audit. According to DCAA, this consolidation was based on a comprehensive reassessment of the processes for assessing accounting systems and combined elements from previous internal control reviews. Figure 2 shows DCMA and DCAA responsibilities before and after the changes implemented from the NDAA for Fiscal Year 2011. Revised Process Clarified Specific Timeframes for How Often DOD Should Conduct Business System Reviews The revised DCMA instructions and related DCAA memorandums for the CBS review process also clarified timeframes for how often a contractor's business system must be reviewed. Generally, each system should be reviewed every 3 years unless the ACO makes a determination that a review is not necessary based on a risk assessment or other factors (see table 3). DFARS Revisions Established Specific Criteria for Business Systems DOD also revised the DFARS in 2012 to provide definitions for acceptable contractor business systems and established individual DFARS clauses that define the criteria for each of the six business systems. 
As appropriate, these clauses are included in contracts and generally require the contractor to maintain adequate business systems, allow the government to withhold payments when systems are found to have significant deficiencies, and list the criteria that the systems must meet. The number of criteria varies by system. For example, the DFARS clause for accounting systems includes 18 criteria used to evaluate system features such as proper segregation of direct and indirect costs, timekeeping, and exclusion of unallowable costs. For EVM systems, a contractor's system must comply with private, institutional standards and include procedures that generate timely, reliable, and verifiable reports. To test how DCAA-led audits were being implemented under these new criteria, DCAA began a pilot program in 2014, consisting of a dedicated team of auditors who conducted CBS reviews and, in turn, were to recommend changes to audit plans and other practices. DCAA initially focused on material management and accounting systems audits, then moved to estimating systems, and finally accounting systems. As a result of this pilot, DCAA issued new audit guidance for all three systems in 2018, with the latest guidance for accounting system audits issued in October 2018. DCAA officials told us that they are implementing lessons learned from the pilot program and developing training on how to conduct the revised audit plans. DCMA Established Timeframes for ACOs to Make Adequacy Determinations The revised DCMA instructions provide timeframes for ACOs to communicate their initial and final determinations to contractors (see textbox) and define the responsibilities of DCMA management and ACOs for confirming significant deficiencies and resolving disagreements between functional specialists and the ACO. Revised Contractor Business System Review Process Timeframes According to the revised contractor business system review process, when significant deficiencies are found: Administrative contracting officers (ACOs) have 10 days to communicate an initial determination of business system compliance to the contractor under review. The contractor is then requested to respond to the letter within 30 days, communicating whether or not it concurs with the determination. The ACO issues a final determination 30 days after receipt of the contractor's response. According to Defense Contract Management Agency (DCMA) officials, data for fiscal year 2017 indicated that 80 percent of final determination letters were issued within the required timeframes. In instances where deficiencies are found, these findings are reviewed by a panel within DCMA to help ensure standards are consistently applied. When there is disagreement between the ACO and functional specialist concerning the nature or severity of deficiencies found, the ACO may request a DCMA board of review to resolve differences and produce a final determination. According to DCMA officials responsible for maintaining business system review policies, differences between functional specialists and contracting officers are generally resolved without the need for a board discussion. These officials said that only a few board discussions have been convened since implementation of the new review structure. 
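The determination timeline described above reduces to simple date arithmetic. The sketch below is illustrative only: it assumes the 10-day clock starts when a review or audit reports significant deficiencies and that all day counts are calendar days, and the function name and dates are hypothetical rather than drawn from DCMA's instructions or systems.

```python
from datetime import date, timedelta

# Illustrative date arithmetic for the determination timeline described above:
# 10 days from the report of significant deficiencies to the initial
# determination, 30 days for the contractor's response, and 30 days from
# receipt of that response to the final determination. Treating these as
# calendar days is an assumption made for illustration only.

def determination_deadlines(deficiency_report_date: date, response_received: date) -> dict:
    return {
        "initial_determination_due": deficiency_report_date + timedelta(days=10),
        "contractor_response_due": deficiency_report_date + timedelta(days=10 + 30),
        "final_determination_due": response_received + timedelta(days=30),
    }

# Hypothetical example: a review reports significant deficiencies on July 1,
# and the contractor's response is received on August 5.
deadlines = determination_deadlines(date(2017, 7, 1), date(2017, 8, 5))
for milestone, due in deadlines.items():
    print(milestone, due.isoformat())
```

Comparing the dates on actual determination letters against deadlines computed in this way is one way a timeliness rate, such as the 80 percent figure DCMA officials cited for fiscal year 2017, could be derived.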
Mandatory Payment Withholds Drive Timely Contractor Response to Significant Deficiencies Section 893 of the NDAA for Fiscal Year 2011 generally authorized DOD to withhold payments under certain contracts when DOD disapproves one or more of a covered contractor's business systems. DCMA officials previously had the latitude to withhold a portion of the payments owed to contractors as a result of deficiencies identified in their reviews, but were not required to do so. From 2011 through 2013, DOD revised the DFARS and related agency instructions to generally require that ACOs apply a 2 to 5 percent contract payment withholding for a single deficient system and a maximum of a 10 percent withhold when multiple systems are found to have significant deficiencies. ACOs are authorized to reduce the amount being withheld after the ACO determines that the contractor has submitted an adequate corrective action plan and has begun implementing it. Our review of DCMA and DCAA information indicates that, in the CBS reviews conducted between fiscal years 2015 and 2017, DCMA and DCAA often identified significant deficiencies in three business systems: cost estimating, material management and accounting, and purchasing. For example, DCAA identified a significant deficiency in nine of the 12 material management and accounting systems reviewed, while DCMA identified significant deficiencies in 260 of the 330 purchasing systems reviewed (see table 4). Because DCMA and DCAA officials do not maintain historical data on payment withholdings, it is not possible to determine the number of payment withholdings that were implemented over these years as a result of these significant deficiencies. The system used to track the status of systems and payment withholdings, CBAR, is updated by ACOs as corrective actions are completed and payment withholdings are removed, and thus shows only a snapshot in time. Our review of CBAR data from July 2018 found that DOD was withholding payments from 11 contractors with a total collective value of approximately $238 million at that time. One-third of these payment withholdings were associated with significant deficiencies found in contractors' estimating systems. DCMA and DCAA officials we spoke with noted that the withhold provision has increased contractors' responsiveness to deficiencies, but they did not have data to determine the extent of that increase. Some contractors we spoke with stated that because deficiencies will affect the company's cash flow, senior management and board members have become more engaged in matters of business system compliance. CBS Review and Corrective Action Process Can Be Lengthy Our review of six selected contractors' business system reviews illustrates the challenges in identifying and resolving deficiencies in a timely manner. Overall, in these six cases it took from 15 months to 5 years or more to resolve deficiencies initially identified by DCAA or DCMA. Factors contributing to the time it took to resolve these issues included contractors submitting inadequate corrective action plans, DCMA or DCAA identifying additional deficiencies in subsequent reviews or audits, and the use of different auditors to conduct the reviews. While the selected cases are not generalizable to all CBS reviews, they do highlight issues that can arise during the process. 
For example: In one case, it took almost 4 years to resolve deficiencies identified in a contractor's accounting system. In this case, DCAA issued an audit report in July 2014 that found seven significant deficiencies including inadequate monitoring and adjusting of rates the contractor was billing the government. DCMA subsequently issued an initial determination 7 days later disapproving the system, citing three of the seven deficiencies identified by DCAA. In August 2014, the contractor responded by providing a corrective action plan for the three deficiencies DCMA cited. DCMA sent a second determination letter the next month citing two additional deficiencies identified by DCAA. In October, the ACO assigned to the contractor left, and new staff was assigned to the review. Ten days later, the contractor submitted a second corrective action plan to address the two deficiencies identified. Disagreement between the ACO and DCAA over whether to include the two remaining deficiencies DCAA had identified for the accounting system resulted in DCMA convening a board of review. The board decided that the two deficiencies would be included in the final determination. This, in turn, delayed issuance of a final determination until mid-December 2014. According to contractor representatives, over the next 3 years, they submitted various corrective action plans that DCMA determined were inadequate to address the deficiencies. Each time, the ACO requested additional information and follow-up DCAA audits to help assess the adequacy of the contractor's corrective action plans. Eventually the contractor's accounting system was approved in June 2018. In another case, a contractor's estimating system has been disapproved for over 5 years. In June 2013, DCAA identified four significant deficiencies in the contractor's system, including inadequate support for commerciality determinations. As a result, following a final determination of inadequacy, DCMA implemented a payment withhold of 5 percent. In response, the contractor submitted a corrective action plan in September 2013 addressing the deficiencies; DCMA accepted the plan and reduced the withhold to 2 percent. In a follow-up review in July 2014, DCAA identified two new deficiencies, which the contractor corrected. In March 2015 DCAA reviewed the contractor's forward pricing rate proposal and identified 11 new deficiencies in the estimating system. By August 2015, the contractor had corrected the new deficiencies but the system remained disapproved because the previous four deficiencies remained uncorrected. Finally, in September 2016, DCAA canceled its audit of the estimating system because these four deficiencies remained. According to officials, the contractor was not ready for re-evaluation. As of our review, the system remained disapproved. In another case, a contractor's property management system was disapproved for more than 4 years. In November 2013, DCMA reviewed the contractor's property management system and, according to officials, identified nine significant deficiencies, including those related to missing records and supporting documentation for all contracts. DCMA issued an initial determination of disapproval. DCMA officials stated that they did not receive an adequate response from the contractor for nearly 7 months, and in June 2014, DCMA issued a final determination of system disapproval. The contractor subsequently submitted a corrective action plan in August to address the deficiencies. 
A DCMA official stated that they re-analyzed the system in November 2014 and found one outstanding issue. According to the official, the DCMA property administrator in charge of the review elevated the issue to the assigned ACO, but received no response. According to contractor representatives, they requested a follow-up review from the DCMA ACO several times from August 2014 to June 2015 but did not receive a response until after June 2015. According to a DCMA official, this was due to resource issues: the review went dormant because the new ACO assigned to the contractor went overseas. The system was reviewed again in November 2017 and approved in January 2018. In another case, an audit of a contractor's estimating system took DCAA 2 years to complete. The DCAA audit began in November 2014. According to contractor representatives, they were initially told that the review would take 9 to 12 months, but a number of different DCAA auditors were assigned to the review over time, and each identified different findings, which prolonged the process. DCMA approved the contractor's estimating system in December 2016. In another case, a contractor's estimating system was disapproved for 15 months. In June 2016, DCMA disapproved a contractor's estimating system due to three significant deficiencies, including one related to performing adequate price and cost analysis on subcontractor proposals. According to contractor representatives, they submitted a corrective action plan, but after submitting the plan DCAA performed an audit of the contractor's forward pricing rates and identified additional deficiencies. In December 2016, DCMA officials determined that the corrective action plan the contractor provided was not sufficient. DCMA subsequently approved the contractor's estimating system in September 2017. DCMA and DCAA officials believe the cases we analyzed were not representative of the length of time needed to complete the CBS review process, but could not provide data to support their views because DCMA and DCAA do not track data on the length of time it takes to complete the entire CBS review process (i.e., from the start of an audit or review to the resolution of system deficiencies and final determination). Our review of selected cases was not intended to be projectable to all reviews and audits conducted by DCMA and DCAA, but rather to be illustrative of the challenges that may be encountered during the review process. From the perspective of program and contracting officers, the status of a contractor's business system may have an impact on both contract award decisions and contract monitoring, but officials stated that they can mitigate the risks associated with a disapproved system. For example, Army and Air Force program officials noted that a contractor leading certain weapon system development and logistics efforts had a deficient cost estimating system. According to the contracting officials, because the government could not rely on the contractor's proposed costs to use a fixed-price contract, they awarded a fixed-price incentive contract for the program, which allows them to better monitor the contractor's cost reporting than a fixed-price contract would. DOD Does Not Have a Mechanism to Monitor and Ensure That Contractor Business System Reviews and Audits Are Conducted in a Timely Manner DCMA and DCAA do not have a mechanism to monitor and ensure that CBS reviews and audits are conducted in a timely manner. 
DCAA’s data show that it conducted few business system audits in the past 6 years, due, in part, to the need for it to reduce its backlog on completing incurred cost audits. Looking to the future, DCAA has developed plans for the number of CBS audits it intends to perform over the next 3 years and expects that it will be caught up in conducting the audits for which it is responsible by fiscal year 2022. Successfully executing its plan is dependent on several factors, including the ability to shift resources from conducting incurred cost audits to business systems audits, the use of public accounting firms to perform a portion of the incurred cost audits, and the ability of DCAA auditors to use new audit plans and complete the required audits in a timely manner. For its part, DCMA relies on the offices that perform the reviews of the three systems to maintain the information on the reviews completed and to plan for future reviews, but DCMA headquarters does not centrally track its reviews or whether audits conducted by DCAA are being completed within the timeframes described in policy. DCAA Plans to Address Previous Shortfalls in Conducting CBS Audits Are Dependent on Several Factors DCAA officials acknowledged they have not been able to conduct audits of contractor business systems within the timeframes outlined in DCMA instructions. DCAA officials attributed their inability to do so to the need to conduct higher priority audits—such as incurred cost audits—and staffing constraints. For example, in fiscal year 2017, DCAA initially proposed to perform a total of 76 CBS audits for the three business systems in its purview. However, DCAA completed only nine audits after assessing available resources. Further, DCAA estimates that in fiscal year 2017 it spent approximately 44 percent of its resources addressing incurred cost audits, and 17 percent on other audits such as forward pricing rate agreements. In contrast, only 6 percent of its resources were devoted to business system audits and related activities. Recognizing that it cannot perform all of the required CBS audits in a timely fashion to meet current DCMA policy requirements, DCAA officials told us they focus their audits on business systems they identify as high- risk. To do so, DCAA officials consider factors such as the contractor’s current system status, the contractor size in terms of dollars on contract, the amount of cost-type contracts, organizational changes, audit requests by a DOD contracting officer or an ACO, and the types of deficiencies identified and its impact on cost and schedule. DCAA headquarters officials assess the candidates at an annual DCAA planning meeting to determine which audits can be performed given the level of resources available. DCAA officials told us, however, that the current policy requirement—which generally requires review of the systems every three years—would require DCAA to dedicate substantial resources to CBS audits to maintain currency. As of November 2018, DCAA identified 285 systems that require an audit. DCAA officials stated that a risk based approach to reviewing these systems would provide more value than a routine 3 year cycle. DCAA officials stated they are willing to work with others within DOD to develop risk factors that can be used to determine when a business system needs a review. 
To better assess and plan future workload, DCAA issued a memorandum in January 2017 to introduce a strategic workload resource initiative that will project workload and resource availability in the out-years. Under this process, DCAA field management teams provide information on workload projections in March, and DCAA executive-level officials make workload planning recommendations in June that result in an agency-wide plan. DCAA officials noted, however, that the projection for the second year is less accurate and, as a result, the projections for the further-out years are reviewed every 6 months, with adjustments made as needed. DCAA officials also told us that the planning process is currently being expanded to allow the agency to plan three years out. DCAA officials stated that the fiscal year 2021 plans will be tentatively approved by the end of January 2019 and fiscal year 2022 plans will be approved by June 2019. Based on these planning efforts, DCAA plans to conduct a total of 285 CBS audits from fiscal years 2019 through 2022, including 50 audits in fiscal year 2019 and 104 in fiscal year 2020. It also plans to shift some of the hours previously devoted to incurred cost audits to CBS audits (see figure 3). Our analysis indicates that successfully executing this plan is dependent on several factors, including the ability to shift resources from conducting incurred cost audits to business systems audits, the use of public accounting firms to perform a portion of the incurred cost audits, and the ability of DCAA auditors to use new audit plans and complete the required audits in a timely manner. First, the plan is contingent upon DCAA being able to successfully shift resources from incurred cost audits to CBS audits. According to DCAA data, DCAA plans to shift more than 378,000 hours from incurred cost audits to CBS audits between fiscal years 2018 and 2020. DCAA officials noted, however, that although they have made significant progress in addressing incurred cost audits, the fiscal year 2018 NDAA requires DCAA to have all incurred cost audits performed within 12 months. This means, they noted, that DCAA will have to continue to spend significant resources on incurred cost audits in fiscal year 2019 to meet this legislative requirement. Second, DCAA officials stated that these estimates include the resources that are expected to become available to perform CBS audits as DCAA starts using public accounting firms to perform incurred cost audits. In its October 2018 report to Congress on the progress made to implement Section 803 of the Fiscal Year 2018 NDAA, DCAA estimated that public accounting firms would be able to perform 100 incurred cost audits per year for 2019 and 2020, which would then increase to 200 each year for 2021 through 2025. DCAA further projected, for example, that about 147,500 hours would become available in 2020 based on the proposed plan to use public accounting firms. DCAA officials told us they are in the process of developing a solicitation to contract for these services, which they anticipate releasing in the spring of 2019. Lastly, these plans assume that each audit conducted by DCAA can be completed within an average number of hours based on the experiences of the team that developed the revised audit plans released in 2018. 
DCAA officials noted that these hours assume that DCAA audit teams will experience some challenges conducting the initial set of audits, but will be able to conduct them in fewer hours as they gain more experience in implementing the new audit plans. DCAA officials told us that, if successful, this plan will enable it to be caught up on CBS reviews by 2022. DCMA Headquarters Makes Limited Use of Data Collected by Functional Offices to Assess the CBS Review Process and Does Not Monitor DCAA's Progress in Completing Its Audits For the DCMA-led reviews, DCMA relies on the functional offices that perform reviews of their respective systems to monitor the status of CBS reviews, but it does not use the information to ensure that all three types of reviews are conducted within the timeframes established under DCMA's instructions. The three DCMA functional offices use spreadsheets to manually track the reviews they have completed and when the next review should be scheduled. Each functional office plans and tracks this data individually. For example, the property management functional office identifies the number of contractor property systems requiring review on a monthly basis, and tracks its progress in completing these reviews. In fiscal year 2018, this functional office completed over 95 percent of the 850 property system reviews required. The EVM system functional office identifies the number of reviews that should be conducted annually. In fiscal year 2018, the office reported completion of 92 percent of the 125 required EVM system reviews. The purchasing functional office uses a rolling process to determine which systems require a review. To do this, the ACO performs a required risk assessment every 3 years to identify whether a full business system review is required and then the purchasing functional office develops a prioritization plan for the systems flagged for review. The exact number of reviews conducted in a single year is dependent upon the risk assessments; however, an official from the purchasing system functional office estimated that the office is staffed to complete approximately 125 reviews per year. The official also noted that the office tracks the systems to ensure all are reviewed within the required timeframes. Officials from the functional offices described to us what information they provide to senior leadership, but DCMA headquarters does not collect or use this information to oversee the CBS review process. For example, a supervisor from the property management functional office told us that the office reports monthly to its supervisors on the status of their reviews and whether they are on schedule, which also serves as a method for requesting additional resources if necessary. EVM system functional officials told us they report the number of planned and completed reviews to a DCMA internal website for senior leadership to review, but did not know what senior leadership does with this information. Purchasing officials said their office provides monthly reports on the status of reviews for specific large contractors, and weekly reports of the number of reviews completed to the agency director and component heads. DCMA headquarters officials stated that they informally share information with ACOs in a variety of ways, including quarterly meetings, but could not provide documentation on how this information is used to monitor and assess whether CBS reviews were being conducted in accordance with the policy timeframes. 
Further, DCMA officials indicated that they do not formally monitor DCAA's efforts to complete the audits for which DCAA is responsible. Although DCMA is the agency responsible for issuing the instructions and its ACOs make the final determinations of business system compliance, DCMA officials indicated that it is not their responsibility to monitor or assess DCAA's efforts to complete the reviews in DCAA's area of responsibility. DCMA and DCAA officials stated, however, that they recently began to hold quarterly meetings, during which time they can discuss CBS issues, including potential revisions to the criteria and timeframes for conducting CBS reviews. But it is uncertain what will come of these meetings or the extent to which they will contribute to improved management of CBS reviews. According to federal internal control standards, an agency should use quality information to help ensure that it achieves its objectives. These standards also state that monitoring activities should be conducted to ensure that agency objectives are being met. Developing a mechanism to track and monitor the number of CBS reviews that are outstanding, the risk level assigned to those systems, and the resources available to conduct such reviews would help DCMA and DCAA better manage the CBS review process and ensure that contractor systems are reviewed and approved in a timely fashion. DOD Has Not Yet Implemented Recent Legislative Provisions to Change the Definition of a Covered Contractor or to Enable the Use of Public Accounting Firms Section 893 of the Fiscal Year 2017 NDAA amended the CBS provisions of the Fiscal Year 2011 NDAA by revising the statutory definition of a covered contractor and by allowing contractors to use registered public accounting firms to review their business systems in place of DOD's review. As of November 2018, DOD had not yet proposed regulations to implement these legislative changes, and therefore we were unable to fully evaluate the potential effects of these provisions. The Fiscal Year 2017 NDAA did not provide a specific timeframe for DOD to revise its regulations, but the Director of the Defense Acquisition Regulation Council—who is responsible for promulgating proposed and final rule changes to the DFARS—tasked her staff to draft a proposed rule by March 2017. This deadline was subsequently extended to January 23, 2019. In November 2018, Defense Pricing and Contracting (DPC) officials told us that they now expect to issue the proposed rule for public comment in the third or fourth quarter of fiscal year 2019. DPC officials attributed this delay, in part, to a recent executive order that calls for the reduction and control of regulatory costs, as well as the complexity of having public accounting firms perform CBS reviews. Section 893 of the Fiscal Year 2017 NDAA changed the definition of covered contractors—those contractors that may require CBS reviews—from contractors subject to cost accounting standards to generally only those with contracts subject to cost accounting standards that account for more than 1 percent of their gross revenue. DPC officials stated that DOD may require contractors to self-report on their revenue levels to determine whether the contractor's systems require review. DPC officials told us, however, that they had not yet considered certain aspects of how contractors may calculate revenues. 
For example, DPC officials had not yet decided whether revenue should be determined based on specific business segments, or whether it should include international sales revenue. These officials also had not yet decided how many years of revenue should be included in the analysis. Further, DPC officials could not yet estimate the potential effect of implementing this provision on contractors. Based on our analysis of publicly available contractor financial data for the 20 contractors that we reviewed, the lowest percentage of total revenue derived from government contracts was 10 percent. Section 893 of the 2017 NDAA also authorized the use of registered public accounting firms to assess compliance with DOD's CBS requirements. Under this provision, if a registered public accounting firm certifies that a contractor's business system meets DOD's requirements, it would eliminate the need for further review by DOD. Some government acquisition officials we spoke with expressed concerns that would need to be addressed to effectively implement the legislation, including: Ensuring that public accounting firms have sufficient understanding of the processes or regulations to conduct the audits and provide conclusions that DOD could rely upon. Encouraging DCMA and DCAA functional experts and auditors to accept public accounting firms' findings rather than conduct additional reviews and audits on their own, which would undermine the ability to save both government and contractor resources. Determining the potential for the cost of public accounting firm reviews to be passed on to the government through the contracts of the businesses under review. The DPC official responsible for implementing this provision stated that he is aware of these concerns. He also stated that, as a first step in implementation, his office has requested that DCMA and DCAA review the criteria and audit plans used by their staff and identify areas where these criteria and plans could be adjusted to make them more consistent with criteria that public accounting firms use in the private sector. Conclusions By clarifying DCMA and DCAA's roles and responsibilities as well as the timeframes for conducting the audits, DOD has improved the CBS review process. But there are still issues that need to be addressed. DCAA acknowledges it is well behind in its efforts to complete the three types of CBS audits for which it is responsible but believes that it can be caught up by the end of fiscal year 2022 if significantly more resources are available. In addition, DCMA does not monitor progress of either its functional offices or of DCAA against the policies that the six systems each be reviewed generally every 3 years. This is because DOD currently lacks a mechanism based on relevant and reliable information, such as the number of CBS reviews that are outstanding, the risk level assigned to those systems, and the resources available to conduct such reviews, to ensure CBS reviews are being completed in a timely fashion. Such information could help inform more strategic oversight to determine whether the current CBS review process is achieving intended results, or whether additional changes to the timing of or criteria for conducting CBS reviews are needed. As the agency that is responsible for issuing the overarching policies that govern CBS reviews and is ultimately responsible for approving contractor business systems, DCMA is in the best position to lead the effort to develop this mechanism. 
As each agency is responsible for executing its mission and managing its resources, however, this effort should be conducted in collaboration with DCAA. Recommendation for Executive Action We recommend that the Director, DCMA, in collaboration with the Director, DCAA, develop a mechanism to monitor and assess whether contractor business system reviews are being completed in a timely manner. (Recommendation 1) Agency Comments DOD agreed with the recommendation. In an email, a DPC official stated that DCMA and DCAA are collaborating to determine the best way to implement the recommendation. DOD's comments are reprinted in Appendix I. We are sending copies of this report to the appropriate congressional committees; the Acting Secretary of Defense; the Under Secretary of Defense for Acquisition and Sustainment; the Under Secretary of Defense – Comptroller; the Director, DCMA; the Director, DCAA; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at dinapolit@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report were Tatiana Winger (Assistant Director), Emily Bond, Matthew T. Crosby, Suellen Foth, Sameena Ismailjee, Jean McSween, Ramzi Nemo, Miranda Riemer, Christy Smith, Roxanna Sun, Tom Twambly, and Jacqueline Wade. Appendix I: Comments from the Department of Defense
Why GAO Did This Study Contractor business systems produce critical data that contracting officers use to help negotiate and manage defense contracts. These systems and their related internal controls act as important safeguards against fraud, waste, and abuse of federal funding. Federal and defense acquisition regulations and DOD policies require that DOD take steps to review the adequacy of certain business systems, but GAO and other oversight entities have raised questions about the sufficiency and consistency of DOD's review process. The National Defense Authorization Act for Fiscal Year 2018 contained a provision for GAO to evaluate how DOD implemented legislation intended to improve its business system review process. Among other things, this report examines (1) the changes DOD made to its review process and (2) the extent to which DOD is ensuring timely business system reviews. GAO analyzed DOD acquisition regulations, policies, and procedures for conducting contractor business system reviews and analyzed data on reviews conducted between fiscal years 2013 and 2018. What GAO Found Since 2011, the Department of Defense (DOD) has implemented several changes to its processes for reviewing contractor business systems—which include systems such as accounting, estimating, and purchasing. Among other changes, DOD clarified the roles and responsibilities of the Defense Contract Management Agency (DCMA) and the Defense Contract Audit Agency (DCAA)—the two agencies that are responsible for conducting the reviews; clarified timeframes for business system reviews and established criteria for business systems; and withheld payments from contractors that were found to have significant deficiencies in their business systems. DOD does not have a mechanism to monitor and ensure that these reviews are being conducted in a timely manner. For its part, DCAA has conducted few business system audits since 2013, as it focused its efforts on other types of audits. DCAA plans to significantly increase the number of business system audits over the next 4 years, but its success in doing so depends on its ability to shift resources from other audits; to use public accounting firms to conduct other, non-business system audits; and DCAA staff's ability to execute new audit plans in a timely manner. DCMA relies on the three offices responsible for conducting DCMA-led reviews to manage the reviews, but DCMA does not formally monitor whether these reviews are being conducted consistent with policy nor does it monitor DCAA's efforts to complete the audits for which it is responsible. DCMA is ultimately responsible for approving a contractor's business systems. DCMA currently lacks a mechanism based on relevant and reliable information, such as the number of reviews that are outstanding and the resources available to conduct such reviews, to ensure reviews are being completed in a timely fashion. Such information could help inform more strategic oversight on whether the current review process is achieving its intended results, or whether additional changes to the timing of or criteria for conducting reviews are needed. What GAO Recommends GAO recommends that DCMA, in collaboration with DCAA, develop a mechanism to monitor and ensure contractor business system reviews are conducted in a timely fashion. DOD concurred with the recommendation.
Background State of the Airline Industry In the U.S. commercial airline industry, passengers travel by air on network, low-cost, and regional airlines. With thousands of employees and hundreds of aircraft, network airlines support large, complex hub-and-spoke operations, which provide service at various fare levels to many destinations. Low-cost airlines generally operate less costly point-to-point service using fewer types of aircraft. Regional airlines typically operate small aircraft—turboprops or regional jets with up to 100 seats—and generally provide service to smaller communities on behalf of network airlines. The U.S. airline industry's financial health has improved greatly in recent years due in part to increased demand for air travel as a result of the improved economy, industry reorganization, and changes in business practices. Starting in 2007, airlines faced a number of major challenges, including volatile fuel prices, the financial crisis, and the ensuing recession of 2007–2009. These events led to a wave of domestic airline bankruptcies, five airline mergers, and changes in airlines' business practices. In all, these circumstances—such as the improved economy and new airline business practices—contributed to record-level profits for airlines. For example, in 2017, U.S. airlines reported an after-tax net profit of $13.4 billion for domestic operations, according to DOT data. As the industry recovered from the recession and passenger traffic began to rebound, airlines began to exercise "capacity restraint" by carefully controlling the number of seats on flights to achieve higher load factors in order to control costs and improve profitability. Because capacity restraint may result in fewer empty seats on many flights, this practice also limits airlines' ability to rebook passengers if a flight is delayed or cancelled. Airlines have also made changes in their ticket pricing. For example, airlines now generally "unbundle" optional services from the base ticket price and charge ancillary fees for those services. Unbundling may result in passengers paying for services that were previously included in the price of the ticket. Additionally, certain aspects of customer service quality are tied to the class of ticket passengers purchase. For example, purchasing a "basic economy" ticket may include restrictions, such as not allowing passengers to select seats or charging for carry-on bags, that would not apply to a higher priced ticket class. Similarly, the quality of seating varies based on the ticket class purchased—even within the main cabin of the aircraft. Moreover, while the recent airline mergers have resulted in some new service options for passengers in certain markets, they have also reduced consumers' choice of airlines on some routes and can result in higher ticket prices. At the same time, low-cost airlines provide greater competition in the markets they serve, which may help to keep prices in check. Factors That Affect Passengers' Satisfaction with Service Many factors—from booking a flight through collecting checked baggage—may contribute to passengers' level of satisfaction with an airline's service, according to an airline industry association and market research organizations (see fig. 1). For example, one industry survey found that passengers most valued affordable airfare, convenient flight schedules, and reliable on-time departures and arrivals. 
DOT’s Regulatory, Compliance, and Education Efforts DOT’s regulatory activities include issuing consumer protection regulations. Specifically, DOT may issue or amend consumer protection regulations under its statutory authority to prohibit unfair or deceptive practices, or unfair methods of competition by airlines, among others. As mentioned previously, under this authority DOT has promulgated various regulations to enhance airline consumer protections since 2009 (see table 1). When regulations are promulgated, agency officials must determine how to promote compliance and deter noncompliance. Agencies charged with promoting regulatory compliance, including DOT, usually adopt a program that consists of two types of activities: those that encourage compliance and those that enforce the regulations. Compliance assistance helps regulated entities, such as U.S. airlines, understand and meet regulatory requirements, whereas activities such as monitoring, enforcement, and data reporting deter noncompliance and ensure that entities follow requirements. Agencies choose a mix of compliance activities that will achieve their desired regulatory outcome. DOT promotes airlines’ compliance with consumer protection requirements through a number of activities, and it educates passengers on their rights. For example, DOT has the authority to investigate whether an airline has been, or is engaged, in an unfair or deceptive practice or an unfair method of competition in air transportation or the sale of air transportation. If DOT finds that an airline has violated consumer protection requirements, DOT may take enforcement action against the airline by, for example, assessing civil penalties. In addition to promoting airlines’ compliance with consumer protection requirements, DOT also conducts activities aimed at educating passengers about their rights and the services provided by airlines. For example, DOT has an aviation consumer protection website where it highlights passengers’ rights and describes how to file complaints with DOT, in addition to other consumer resources. Within DOT’s Office of the Secretary (OST), the Office of the Assistant General Counsel for Aviation Enforcement and Proceedings and its Aviation Consumer Protection Division are responsible for these efforts. According to DOT officials, the annual appropriation to OST’s Office of the General Counsel provides funding for DOT’s consumer protection activities, among other things. At the end of fiscal year 2017, DOT employed 38 staff—including 18 attorneys and 15 analysts—to conduct these activities, according to DOT officials. DOT’s Data Provide Mixed Information on Improvement in the Quality of Airline Service; Selected Airlines Indicate They Are Taking Steps Intended to Enhance Service DOT’s Data Provide Mixed Information on the Quality of Airline Services DOT’s data, which include both operational measures of airline service, as well as passenger complaints received by DOT, provide mixed information on whether service improved from 2008 through 2017. DOT requires reporting airlines to provide operational data, including information on late, cancelled, or diverted flights; mishandled baggage; and denied boardings. These data showed some general improvement in the quality of airline service from 2008 through 2017. However, during the same time period, the total number of passenger complaints filed with DOT increased for “selected” airlines. 
Moreover, while these data may be imperfect measures of service quality, they do provide some indication of the passenger experience. DOT publishes data on both operational performance and passengers' complaints in its monthly Air Travel Consumer Report to inform the public about the quality of services provided by airlines. Selected Airlines Indicate They Have Taken a Variety of Actions to Enhance Passenger Service Representatives from all 11 selected airlines highlighted actions they took to enhance passenger service since 2013, including in some of the areas discussed above. While customer service is important for airlines, these actions can also be motivated in part by other factors—including compliance with certain consumer protection requirements or DOT consent orders, or competition with other airlines. For example, one airline developed a wheelchair tracking system in response to DOT enforcement, which also contributed to the airline's goal to improve its services to passengers with disabilities. Additional examples of service improvements are listed below. On-time performance. Representatives we interviewed from almost all selected airlines (10 of 11) reported taking actions intended to improve on-time performance or mitigate challenges associated with flight delays and cancellations. These actions varied across airlines from those intended to improve operational performance to those intended to improve the comfort of passengers. For example, one airline began tracking flights that were "at-risk" of meeting DOT's definition of a chronically delayed flight, so it could, among other things, swap crews or substitute aircraft and avoid these types of delays. According to DOT regulations, airlines with a chronically delayed flight for more than four consecutive one-month periods are engaging in a form of unrealistic scheduling, which is an unfair or deceptive practice and an unfair method of competition. Airlines have also used technology, such as text-messaging updates, to communicate with passengers during delays and cancellations (8 of 9); increased the number of situations where passengers are compensated during delays and cancellations (5 of 9); and empowered customer service agents to provide food, beverages, and entertainment to passengers during flight delays (1 of 9). For example, one airline e-mails all passengers that experience long delays with an apology and voucher for future travel, regardless of whether the delay was within the airline's control. While DOT has some requirements for airlines on delays and cancellations, such as on tarmac delays and chronically delayed flights, it generally does not require airlines to compensate passengers for delays. Baggage handling. Representatives we interviewed from almost all network and low-cost airlines (8 of 9) reported investing resources in order to improve baggage-handling efforts and minimize the effects on passengers whose bags are lost or delayed. Among other things, airlines upgraded baggage technology (5 of 9); modernized the claims process, so passengers could complete forms online (3 of 9); and instituted replacement baggage programs, where passengers get a replacement bag at the airport (2 of 9). For example, one airline invested several million dollars to use radio frequency identification technology (RFID) to track bags, as well as allowing passengers to track their baggage via an application on their smartphone. 
Another airline introduced a policy to use FedEx to deliver delayed bags if the airline cannot return them within 24 hours. Since 2011, DOT has required certain airlines to make every reasonable effort to return mishandled baggage within 24 hours. Quality of interaction with airline staff. Representatives we interviewed from almost all selected airlines (10 of 11) reported improving training programs in an attempt to enhance interactions between airline staff and passengers. For example, one airline worked with the Disney Institute to provide training to staff on relating to guests during travel disruptions and de-escalating conflict. While airlines have increased customer service training, representatives from one industry association said that the training would be more beneficial if it were provided on a more regular basis. Two airlines also expanded their customer service departments' hours to better match when passengers travel. According to DOT officials, airlines are not required to provide customer service training to staff. Passengers with disabilities. Representatives we interviewed from almost all network and low-cost airlines (8 of 9) reported taking actions intended to improve services for passengers with disabilities. These actions included establishing programs to replace damaged or misplaced wheelchairs or other assistive devices (3 of 9); improving seating and access to lavatories in the aircraft (1 of 9); and using RFID technology to track wheelchairs (1 of 9). For example, representatives from one airline told us they have retrofitted their larger single-aisle aircraft lavatories to be wheelchair accessible. Two airlines also reported changing policies pertaining to emotional support animals. For example, one airline has an online registration for emotional support animals where passengers must submit all documentation at least 48 hours in advance of the flight; according to representatives, the process allows the airline to validate the required paperwork, while providing relevant information to passengers with emotional support animals and ensuring the safety of everyone onboard the aircraft. Involuntary denied boardings. Representatives we interviewed from network and low-cost airlines (9) reported taking steps to reduce or eliminate involuntary denied boardings. Representatives from three airlines said they have reduced or stopped overbooking flights, and other representatives (5 of 9) said their airlines have begun soliciting volunteers to be "bumped" off a flight (i.e., give up their seat) earlier in the process. Two airlines conduct reverse auctions in which they ask passengers what compensation they would accept to take an alternative flight. Airlines are also offering additional incentives to encourage passengers to voluntarily switch to flights with available seats (5 of 9)—including travel vouchers with fewer restrictions or that cover ancillary fees, gift cards for Amazon and other retailers, or large travel credits of up to $10,000. DOT Conducts Multiple Activities to Monitor Airline Compliance, but Opportunities Exist to Improve These Efforts' Effectiveness DOT promotes and monitors airlines' compliance with consumer protection requirements and deters noncompliance in five key ways, such as by reviewing passenger complaint data and taking enforcement action where it identifies violations. 
However, we found that DOT could improve its procedures to provide additional assurances that analysts consistently code passengers’ complaints and properly identify potential consumer protection violations, in addition to more fully utilizing data from DOT’s information systems to inform its compliance program. Further, while DOT has objectives for each of its five key compliance activities, it lacks performance measures for three of these objectives. As a result, DOT is limited in its ability to assess progress toward achieving its goal of promoting airlines’ compliance with consumer protection requirements or to identify and make any needed improvements. DOT Conducts Five Key Compliance Activities DOT conducts five key activities to help airlines understand and comply with consumer protection requirements: (1) providing compliance assistance to airlines, (2) processing complaints from passengers, (3) conducting compliance inspections of airlines at headquarters and airports, (4) conducting airline investigations, and (5) enforcing airlines’ compliance with consumer protection requirements. Collectively, these key compliance activities are intended to help airlines understand and meet consumer protection requirements and deter noncompliance. Providing compliance information to airlines. DOT attorneys assist airlines in meeting consumer protection requirements by developing guidance materials and responding to questions. DOT publishes these materials—such as topic-specific webpages and frequently asked questions—on its website. Attorneys and analysts also informally respond to questions or requests for information from airline representatives. Processing complaints from passengers. As previously stated, passengers may file complaints with DOT via its website, by mail, or through DOT’s telephone hotline. DOT analysts use a web application—the Consumer Complaints Application system—to receive, code, and track passenger complaints. In 2017, DOT’s 15 analysts processed about 18,000 air travel-related complaints. Initial processing involves reviewing the information in the complaint, notifying complainants that their complaint was received, and transmitting the complaint to the relevant airline for action. Analysts assign one of 15 high-level complaint category codes (e.g., “advertising” or “discrimination”) to each complaint as well as more specific lower-level complaint codes and codes indicating a potential violation of consumer protection requirements as necessary. Analysts initially code a complaint based on the passenger’s perception of events and not on an assessment of whether the complaint is a potential violation of consumer protections. According to DOT officials, when initially coding passenger complaints, analysts generally use their judgment to code each passenger’s complaint based on the primary issue. While analysts handle a variety of complaints, DOT may designate specific analysts to handle more complex complaint codes, such as disability complaints. On a monthly basis, DOT provides airlines the opportunity to review the complaints received and the agency’s categorization of each complaint. At that time, airlines have an opportunity to challenge DOT’s categorizations. According to DOT officials, a limited number of complaints are recoded as a result of this process. Conducting compliance inspections of airlines at headquarters and airports. 
DOT analysts and attorneys inspect airlines at airline headquarters and airports to assess their compliance with consumer protection requirements. From 2008 through 2016, analysts and attorneys conducted compliance inspections of airlines at the airlines’ headquarters, but DOT has not conducted any such inspections since September 2016. Beginning in 2015, DOT initiated compliance inspections of airlines at airports, and DOT continued to conduct these inspections through 2018. According to DOT officials, they have exclusively conducted on-site inspections of airlines at airports in recent years due, in part, to limited resources and budget unpredictability. However, officials stated that they would consider conducting more inspections of airlines at airline headquarters in the future. Inspections of airlines at airlines’ headquarters examine customer service policies and passenger complaints received directly by airlines, among other things. According to DOT officials, these inspections represent a “deep dive” into an airline’s relevant policies and involve collecting and analyzing data prior to and after their weeklong visit, as well as interviewing corporate personnel. DOT analysts and attorneys use the agency’s inspection checklist to assess compliance with a variety of regulated areas such as the inclusion of certain information on the airline’s website and the proper reporting of data to DOT (e.g., mishandled baggage and on-time performance data). According to DOT data, between 2008 and 2016 DOT completed inspections at 33 U.S. airlines’ headquarters. These 33 inspections identified 23 systemic violations, resulting in consent orders. Two inspections resulted in warning letters, and eight did not identify any systemic violations. The assessed penalty amounts for these inspections ranged from $40,000 to $1,200,000. Inspections of airlines at airports examine staff’s knowledge of certain consumer protection requirements and the availability and accuracy of signage and documentation. Such inspections provide DOT the opportunity to examine multiple airlines in one visit. According to DOT officials, during these unannounced inspections, attorneys and analysts focus on assessing compliance through observation and interviews with randomly selected airline employees. For example, analysts and attorneys may confirm the availability of information on compensation for denied boarding from an airline gate agent or review an airline’s required signage on compensation for mishandled baggage to determine whether the information is accurate. According to DOT data, DOT inspected 12 to 14 U.S. airlines annually—most multiple times—at 51 domestic airports from 2015 through 2017. In 2017, DOT conducted inspections at 18 domestic airports that included inspecting 12 U.S. airlines multiple times. In total, from 2015 through 2017, DOT found violations of various consumer protection requirements for 13 airlines that DOT addressed through warning letters. In addition, DOT found violations related to incorrect (e.g., out-of-date) or missing notices regarding baggage liability limits or oversales compensation for 8 airlines that were settled by consent orders with penalties between $35,000 and $50,000. Conducting airline investigations. According to DOT officials, attorneys determine whether to open an investigation by weighing numerous factors, including whether they believe an airline is systematically violating consumer protection requirements. 
Attorneys may initiate an investigation based on findings from trends in passenger complaints, compliance inspections, monitoring of airline websites and news media, or information supplied by other entities, including other DOT offices or governmental agencies. According to DOT officials, after gathering preliminary information, an attorney may notify the airline of his or her investigation, request information for further analysis, and then determine whether a violation has occurred and which enforcement action, if any, is appropriate. Attorneys document these investigations using DOT's case management system. From 2008 to 2017, DOT initiated almost 2,500 investigations as shown in table 2 below. Enforcing airlines' compliance with consumer protection requirements. When investigations result in a determination that a violation occurred, DOT may pursue enforcement action against the airline by, for example, (1) seeking corrective actions through warning letters, (2) entering into consent orders (which may include fines), or (3) commencing a legal action (see table 2). According to DOT officials, attorneys consider a number of factors in determining the appropriate enforcement action, including whether there is evidence of recidivism or systemic misconduct, and the number of passengers affected. According to DOT data, most investigations result in administrative closures and findings of no violation. According to DOT officials, when attorneys decide to issue a consent order, they work with their managers to arrive at an initial civil penalty level and then negotiate with the airline to arrive at a final settlement agreement and civil penalty amount if applicable. DOT has criteria for setting civil penalties, but officials describe the process as "more art than science" because facts and circumstances always vary. Civil penalties assessed in consent orders often include three parts: mandatory penalties, credits, and potential future penalties (see table 3). A mandatory penalty is the portion of the assessed penalty that must be paid immediately or in installments over a specified period of time. A credit is the portion of the assessed penalty that DOT allows an airline to not pay in order to give credit to the airline for spending funds on passenger compensation or toward specific service improvements, both of which must be above and beyond existing requirements. A potential future penalty is the portion of the assessed penalty that the airline will pay if DOT determines that the airline violated certain requirements during a specified period of time. Our review of 76 consent orders for our 12 selected airlines where a penalty was assessed found that DOT issued penalties totaling $17,967,000 from 2008 through 2017. Of this, 47 percent ($8,437,700) comprised mandatory penalties paid by the airlines. The remaining amounts were either credits or potential future penalties. According to DOT officials, credits are a better way to effect positive change than merely assessing a mandatory penalty. For example, one recent consent order included violations of regulations regarding assistance for passengers with disabilities, among other things.
The airline and DOT agreed to an assessed civil penalty amount of $400,000, $75,000 of which was credited to the airline for compensation to customers filing disability-related complaints in certain years and for implementation of an application to provide real-time information and response capabilities to a wheelchair dispatch and tracking system, among other things. However, our review found that consent orders do not always ensure future compliance. Specifically, we found 14 instances where an airline received multiple consent orders for the same regulatory violation. Three of these instances—each for different airlines—related to violations of the "full fare rule," and two—also for different airlines—related to airlines' failure to adhere to customer service plans. Improvements to DOT's Procedures Could Provide Greater Assurance That Passengers' Complaints Are Consistently Coded and That Consumer Protection Violations Are Properly Identified We found that while DOT has some procedures (i.e., guidance documents and on-the-job training) in place for coding passenger complaints, it lacks others that could help ensure that analysts consistently code complaints and that potential consumer protection violations are properly identified. Federal internal control standards state that agencies should design control activities to achieve objectives and establish and operate monitoring activities to evaluate results. By designing and assessing control activities, such as procedures and training, agencies are able to provide management with assurance that the program achieves its objectives, which in this case involve identifying instances of airline noncompliance. DOT has taken some steps to help analysts code passenger complaints and properly identify potential violations of consumer protection requirements: Guidance documents. DOT developed two documents to guide complaint processing and evaluation—a coding sheet that helps analysts determine how to code complaints and identify potential consumer protection violations, and a user guide that describes how analysts should enter complaint information into the web application. However, we found that these documents may not be clear or specific enough to ensure that analysts consistently coded complaints or properly identified potential consumer protection violations. For example, while the coding sheet includes explanatory notes in 9 of the 15 complaint categories, it does not include definitions and examples for each of DOT's 15 complaint categories that would illustrate appropriate use of a complaint code, a gap that could result in inconsistent coding. On-the-job training. DOT supplements its guidance documents with on-the-job training, which officials told us helps analysts consistently code complaints and identify potential consumer protection violations; however, DOT has not established formal training materials to ensure all new analysts get the same information. DOT pairs each newly hired analyst with a senior analyst to be their coach and instruct them on how to code complaints. According to DOT officials, senior and supervisory analysts determine when new analysts are able to code and work independently but continue to monitor their work as needed and determined by the senior analyst.
DOT officials stated that while the agency does not regularly check the extent to which complaints are consistently coded, supervisory analysts check analysts' complaint coding on an as-needed basis throughout the year, as well as during semi-annual performance reviews. However, DOT does not provide formal training materials or other guidance to ensure that senior analysts are conveying the same information during these informal, on-the-job training sessions. DOT officials stated that the combination of the existing guidance, procedures, and hands-on training provides adequate assurance that analysts share a common understanding of the complaint categories, resulting in complaints being consistently coded. As a result, DOT officials have not developed additional guidance documents or established formal training materials. While DOT officials said they believe their procedures and on-the-job training are sufficient to ensure that complaints are consistently coded and that potential consumer protection violations are properly identified, a recent DOT Office of Inspector General (OIG) report found that DOT analysts did not identify when to code complaints as potential consumer protection violations for a sample of frequent flyer complaints the agency reviewed. As a result, in 2016, the DOT OIG recommended that DOT provide additional training on what constitutes an unfair or deceptive practice to strengthen oversight of airlines' frequent flyer programs. In response, DOT created a special team to process frequent flyer complaints and developed and provided review team analysts and other members with training on how to review complaints and identify potential violations related to airlines' frequent flyer programs. Improving DOT's procedures that analysts use to code complaints and identify potential consumer protection violations could provide DOT with additional assurances that analysts share a common understanding of the definitions of all the complaint codes, code complaints in each category consistently, and identify potential consumer protection violations. Consistent coding among analysts is important for a number of reasons. First, according to DOT officials, passengers use complaint data—which are publicly reported in DOT's Air Travel Consumer Report—to make decisions about air travel, including which airlines to fly. Second, DOT analysts and attorneys use complaint data to guide their compliance activities (e.g., selecting airlines for inspections and investigations, and determining proper enforcement actions). DOT Is Missing Opportunities to Use Its Case Management System to Help Inform Its Compliance Program We found that while DOT's case management system allows attorneys to track investigations, it lacks functionality that would allow DOT officials to more efficiently use data from the system to inform other key activities, such as making compliance and enforcement decisions. Federal internal control principles state that agencies should design an entity's information system and related control activities to achieve objectives and respond to risks, which in this case involve using data from DOT's case management system to inform its compliance activities. Our review of DOT's case management system identified the following limitations that affect DOT's ability to use data from its case management system to target resources and accurately monitor trends in violations, compliance activities, and the results of its enforcement actions: Key data are optional.
Attorneys are not required to complete certain key data fields in the case management system. For example, attorneys are not required to document the outcome of an investigation in the "enforcement action" field. According to officials, while attorneys do not always complete this field, they often choose to document the outcome of investigations in the case notes. Even if that information is captured in the case notes section, attorneys can only access that information by individually reviewing each case file. Data entries are limited. Attorneys cannot record multiple consumer protection violations for a single investigation in the case management system. As a result, when multiple violations occur, attorneys must use their professional judgment to select the primary violation to record. Our review of the 76 consent orders against selected airlines resulting from airline investigations identified 24 instances—or more than 30 percent—where an airline violated multiple consumer protection regulations. While this is a small subset of all investigations (2,464) DOT completed across our timeframe, it suggests investigations could include violations of multiple consumer protection regulations. Data entries do not reflect DOT's compliance activities. While the case management system includes a field for attorneys to document the source of investigations, the field's response options do not fully correspond to DOT's key compliance activities or align with DOT's documentation listing the sources of investigations. For example, the field that tracks the source of an investigation includes an option to identify passenger complaints as the source but not an inspection of an airline. Officials told us that, like the outcomes of investigations, attorneys often document the source of an investigation in the case notes. However, as mentioned previously, information captured in the case notes section can only be accessed by individually reviewing each case file. Limited reporting capabilities exist. Attorneys are limited in their ability to run reports to understand trends across multiple investigations, according to DOT officials. For example, the case management system lacks a function to run reports by certain data fields. Specifically, according to DOT officials, attorneys cannot run reports by the airline name data field and must instead type in the airline name to create a report, a process that could produce unreliable results if an airline's name is inconsistently entered into the database. According to DOT officials, the case management system's capabilities are limited largely because the database was designed as a mechanism for attorneys to manage ongoing investigations. DOT officials told us that, while the database has successfully fulfilled that role, officials have increasingly used data from the case management system to make enforcement decisions. For example, DOT attorneys use information from the case management system to inform civil penalty amounts. In addition, DOT uses data from the case management system to analyze the results of investigations and inspections, as well as the details of consent orders in order to target future compliance activities. However, because of limited reporting capabilities, attorneys and managers must manually create summary documents from the case management system's data, work that could be time-consuming and subject to manual errors, and that does not address the issue that some data are not entered into various data fields in the first place.
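To illustrate the kind of limitation described above, the following minimal sketch (written in Python, using hypothetical field names that are not drawn from DOT's actual case management system) shows how an optional outcome field and a single-violation field can constrain trend reporting:

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified investigation record; the field names are
# illustrative only and are not taken from DOT's case management system.
@dataclass
class Investigation:
    airline: str
    source: Optional[str]              # optional: may be left blank
    primary_violation: Optional[str]   # only one violation can be recorded
    enforcement_action: Optional[str]  # optional: outcome may exist only in notes
    case_notes: str = ""

cases = [
    Investigation("Airline A", "complaint", "full fare rule", "consent order"),
    Investigation("Airline A", None, "oversales", None,
                  case_notes="Warning letter issued; see memo in file."),
    Investigation("Airline B", "inspection", "disability assistance", "warning letter"),
]

# A trend report on outcomes misses any case whose outcome was recorded only
# in free-text notes; finding those cases requires reading each file.
outcomes = Counter(c.enforcement_action for c in cases if c.enforcement_action)
print(outcomes)  # Counter({'consent order': 1, 'warning letter': 1}) -- one case unseen

# Likewise, a single primary_violation field understates investigations that
# found multiple violations, since only one violation per case can be tallied.
```

The sketch is not a description of DOT's system; it simply shows why optional fields and single-value fields force manual review of case files when summarizing trends.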
Recognizing limitations with the case management system, DOT has taken steps to improve the system. Specifically, starting in June 2018, DOT began working with a contractor to update the case management system's functionality. Among other things, the updates are intended to improve the system's ability to run reports, which could enhance DOT's ability to examine trends in enforcement actions and penalty amounts, and allow the system to track investigation milestones. While DOT's planned updates may help DOT officials better examine trends in enforcement actions, the planned updates do not fully address the issues we identified above, particularly related to collecting complete data. Collecting complete and comprehensive data in the case management system could allow DOT to better track trends in its investigations, inspections, and enforcement actions and to use that information to make data-driven decisions about future compliance activities and enforcement actions. DOT Lacks Performance Measures for Three of Five of Its Compliance Program Objectives While DOT has five objectives for its key compliance program activities, it has not established performance measures for three of these objectives. Objectives communicate what results the agency seeks to achieve, and performance measures show the progress the agency is making toward achieving those objectives. Federal internal control standards state that agencies should define objectives clearly to enable the identification of risks and define risk tolerances. They further state that management defines objectives in measurable terms, so that performance toward those objectives can be assessed. Additionally, the Government Performance and Results Act of 1993 (GPRA), as enhanced by the GPRA Modernization Act of 2010, requires agencies to develop objective, measurable, and quantifiable performance goals and related measures and to report progress in performance reports in order to promote public and congressional oversight, as well as to improve agency program performance. In fiscal years 2017 and 2018, DOT developed objectives for each of its five key compliance activities; however, as illustrated in table 4 below, DOT does not have performance measures for three of its objectives. For the three objectives for which DOT has not established performance measures, it has documented qualitative measures in internal agency documents. For example, while DOT has not developed a performance measure related to enforcing airlines' compliance with consumer protection requirements, it summarized enforcement cases in fiscal year 2017 that illustrated actions the agency had taken to achieve this objective. For instance, one enforcement action included a consent order against an airline with an assessed penalty of $1.6 million for violating DOT's tarmac delay rule. DOT highlighted similar accomplishments for educating airlines and conducting inspections. For example, DOT issued guidance to help airlines understand their legal obligations to not discriminate against passengers in air travel on the basis of race, color, national origin, religion, sex, or ancestry, and the agency highlighted identifying unlawful practices by multiple airlines during an inspection of airlines at an airport. While the actions described may provide DOT with some information on whether it is achieving its objectives, they fall short of internal control standards that call for federal agencies to define objectives in measurable terms to assess performance.
DOT officials stated that they have not developed performance measures to monitor progress toward achievement of some objectives because it is difficult to develop quantifiable performance measures. We have previously reported that officials from other enforcement agencies with similar objectives found it challenging to develop performance measures in part due to the reactive nature of enforcement as well as the difficulty of quantifying deterrence, but were ultimately able to do so. Developing performance measures for all objectives would allow DOT to more fully assess the effectiveness of its efforts at promoting airlines' compliance with consumer protection requirements. Specifically: Providing compliance information to airlines. DOT has not developed quantifiable performance measures to assess how well DOT educates airlines about consumer protection requirements. For example, DOT does not have a performance measure for developing and disseminating guidance for specific rules or for issuing information on new rules within a certain time frame. Rather, officials told us that they proactively e-mail stakeholders new consumer protection rules—rather than relying on stakeholders having to find them on DOT's website or Regulations.gov—and if officials receive the same question repeatedly about the same requirement, they might issue guidance on the topic. According to DOT officials, these activities help ensure that stakeholders are complying with relevant consumer protection requirements. DOT officials did not provide a specific reason for why they do not have a performance measure related to this objective. However, without such a measure, DOT cannot be sure that it is providing timely educational materials to clarify new consumer protection requirements and assist airlines in complying with these requirements. Conducting compliance inspections of airlines at headquarters and airports. DOT lacks quantifiable performance measures related to conducting inspections of airlines at airlines' headquarters and at airports. Having such a measure could help ensure that DOT conducts these activities. Specifically, we found that while DOT continues to conduct inspections of airlines at airports, it has not conducted inspections at airlines' headquarters since 2016, despite having identified this compliance activity as a key priority in planning documents. According to DOT officials, they have not conducted inspections at airlines' headquarters for two primary reasons. First, DOT officials said inspections at airlines' headquarters require significant staff resources, which DOT has allocated to other compliance activities in recent years. Second, officials said that no airline was an obvious choice for an inspection at its headquarters because DOT had not received a disproportionate number of complaints against a specific airline to suggest an inspection was warranted. However, the DOT OIG previously directed the agency to make these inspections a priority and to allocate resources accordingly, and DOT officials themselves have said that these inspections provide incentives for airlines' continued compliance regardless of whether one airline has an obvious problem. Establishing performance measures for conducting both types of inspections would provide greater assurance that DOT conducts these activities on a regular basis.
Moreover, officials told us that inspections at airlines' headquarters examine specific consumer protection requirements that are not examined during inspections at airports, and that inspections at headquarters help promote compliance. Among other things, inspections at airlines' headquarters allow DOT officials to: (1) review training manuals and training records; (2) examine a sample of passengers' complaint data received directly by the airlines, including disability and discrimination complaints; and (3) verify that airlines are current in reporting data, such as mishandled baggage and denied boardings, to DOT. Performance measures related to how often and under what circumstances compliance inspections should take place could provide assurance that DOT conducts these activities, and is not missing opportunities to monitor airlines' compliance with consumer protection requirements. Enforcing airlines' compliance with consumer protections. DOT officials told us that they have not developed performance measures for enforcement actions because they would not want to have performance measures that were punitive or reactive by, for example, requiring the agency to collect a certain penalty amount from airlines. While we acknowledge the complexity and risks involved in setting these types of performance measures, as mentioned previously, other agencies have done so. For example, one of the Federal Trade Commission's performance measures is to focus 80 percent of enforcement actions on consumer complaints. Without a performance measure for enforcement activities, DOT is missing opportunities to assess the effectiveness of these activities and make any needed changes. We have previously reported that performance measurement gives managers crucial information to identify gaps in program performance and plan any needed improvements. DOT Has Made Recent Improvements, but Its Passenger Education Efforts Do Not Fully Align with Key Consumer Outreach Practices DOT Updated Its Website to More Effectively Educate Passengers on Their Rights DOT's primary vehicle for educating passengers is its aviation consumer protection website, which it relaunched in November 2017 (see fig. 3). According to DOT officials, as part of the relaunch, DOT improved the navigability and accessibility of the website by, among other things, arranging material by topic, adding icons for various subjects, and including a link for the website on DOT's aviation homepage. The website now includes summaries of passengers' rights on a number of issues including tarmac delays, overbookings, mishandled baggage, and disability issues, as well as DOT's rules, guidance issued to airlines and others, and enforcement orders on key consumer protection issues. Moreover, the website is now accessible to people with disabilities. Moving forward, DOT has a number of additional updates planned through fiscal year 2019. For example, DOT plans to update its website with information on frequent flyer issues, optional services and fees, and codeshare agreements by the end of calendar year 2018. According to DOT officials, while not statutorily required to conduct these education activities, passenger education is a key effort in ensuring airlines' compliance. DOT also has numerous other efforts to educate passengers on their rights. For example: Establishing resources for passengers.
DOT developed Fly Rights—an online brochure that details how passengers can avoid common travel problems—in addition to material on unaccompanied minors, family seating, and a glossary of common air travel terms. DOT also developed training tools (e.g., brochures, digital content, and videos) on the rights of passengers with disabilities under the Air Carrier Access Act of 1986 and its implementing regulations, including wheelchair assistance at airports and onboard aircraft, traveling with a service animal, and traveling with assistive devices. While some of these materials were developed primarily for airline employees and contractor staff, others were developed to directly assist passengers with disabilities by providing helpful tips on airlines’ responsibilities, according to DOT officials. Building consumer education information into existing regulations. Passenger education is built into certain consumer protection requirements, according to DOT officials. For example, when an airline involuntarily denies a passenger boarding, immediately after the denied boarding occurs the airline must provide a written statement explaining the terms, conditions, and limitations of denied boarding compensation, and describing the airline’s boarding priority rules and criteria. Responding to complaints. DOT officials said they include information on an airline’s responsibilities when responding to passenger complaints. For example, if a passenger submits a complaint to DOT about not receiving compensation for a delayed or cancelled flight, the DOT analyst may inform the passenger that airlines are generally not required to compensate passengers in these instances. DOT’s Educational Efforts Fully Align with Five of Nine Key Practices We compared DOT’s efforts to educate airline passengers about their rights against key practices for consumer outreach GAO identified in prior work and found that DOT’s efforts fully align with five of the nine key practices (see fig. 4). For example, we found that DOT has successfully identified the goals and objectives of its passenger education program and identified the appropriate media mix for disseminating its materials. Similarly, we found that DOT had identified and engaged stakeholders, a step that, according to DOT officials, allowed them to better tailor materials. However, as summarized in the figure below, we found that DOT only partially met or did not meet the remaining four key practices. For example, DOT’s actions do not align with the key practice to “identify resources” and only partially align with the key practice to “develop consistent, clear messages” based on the established budget. According to a senior DOT official, DOT has not identified budgetary resources because, while important, DOT’s educational efforts are secondary to the office’s other efforts. Further, officials said that it has been difficult for the agency to develop a budget when it has been operating under a continuing resolution for some part of the fiscal year for the last decade. However, without identifying short- and long-term budgetary resources and planning activities accordingly, DOT is missing an opportunity to plan educational efforts or prioritize needs based on available resources. In addition, we found DOT’s efforts only partially align with the key practice that calls for an agency to research its target audience. 
While DOT has solicited some input from stakeholder groups such as those representing passengers with disabilities, DOT has not solicited feedback directly from passengers to understand what they know about their rights. DOT officials said they have not sought such feedback because they have not identified a method for doing so that would be statistically generalizable and not cost-prohibitive. While costs are always an issue when considering budget priorities, we have previously reported on other agencies' direct consumer outreach efforts that, while not statistically generalizable, were nonetheless useful for understanding the effect of the agencies' efforts. For example, the Bureau of Consumer Financial Protection has used focus groups to understand its outreach efforts. Bureau of Consumer Financial Protection officials previously told GAO that while obtaining information through such efforts was resource intensive, it allowed them to assess the performance of their outreach activities. In another case, an agency surveyed users who access its website to help it understand whether its outreach efforts were effective. Obtaining input from passengers directly on what information they want or what they know about their rights would provide DOT with greater assurance that educational materials are appropriately tailored to meet a wide range of passengers' needs. Finally, DOT has not established performance measures to understand the quality of its passenger education materials (i.e., process measures) or the effectiveness of its efforts (i.e., outcome measures). DOT officials said that they receive informal input from stakeholders on the quality of the materials and track website traffic to understand whether materials are reaching passengers. Officials said they believe that these mechanisms provide them with some assurance that the materials are meeting passengers' needs and that passengers are accessing and using the materials. While these mechanisms may provide DOT with some information on how often materials are accessed online, they do not help it understand the quality of the materials and measure the success of its passenger education efforts. For example, while DOT officials track website traffic, they have not established a related performance measure. A number of different measures could be used to track processes and outcomes related to the use of its website, including the time consumers spend on the website, number of website pages viewed, bounce rate (i.e., percentage of visitors who looked at only one page and immediately left the site), or users' perceptions of the experience of their visit. Establishing such measures would provide DOT with greater assurances that its educational efforts are appropriately tailored to passengers and leading to improved understanding of passengers' rights, including whether any adjustments are needed. Conclusions To enforce consumer protection requirements, such as those preventing unfair or deceptive practices or unfair methods of competition by airlines, DOT has conducted almost 2,500 investigations and issued about 400 consent orders over the last decade. However, DOT lacks reasonable assurance that its approach is achieving the highest level of airlines' compliance, given its available resources. For example, DOT has not assessed whether its procedures and training materials help analysts consistently code passengers' complaints and identify potential consumer protection violations.
Additionally, DOT has not fully used data from its case management system to inform its compliance program. Moreover, in the absence of comprehensive performance measures, DOT lacks a full understanding of the extent to which it is achieving its goal of airlines' compliance with consumer protection requirements and whether any programmatic changes may be warranted. Improvements in these areas would provide DOT with additional information to target its resources and improve compliance. DOT has taken positive steps to educate passengers about their rights—through its revamped website and other educational resources. Nevertheless, DOT could improve its efforts by more fully following key practices GAO previously identified for conducting consumer education, such as by: seeking feedback directly from consumers; identifying short- and long-term budget resources; and establishing performance measures. Taking such actions would provide DOT with greater assurance that its efforts are meeting passengers' needs. Recommendations for Executive Action We are making the following six recommendations to DOT: The Office of the Secretary should assess its procedures and training materials for coding airline passengers' complaints, as appropriate, to help ensure that passengers' complaints are consistently coded and that potential consumer protection violations are properly identified. (Recommendation 1) The Office of the Secretary should assess the feasibility and cost of updating its airline case management system to address data and reporting limitations, and undertake those updates that are cost-effective and feasible. (Recommendation 2) The Office of the Secretary should establish performance measures for each of its objectives for its five key airline-compliance activities. (Recommendation 3) The Office of the Secretary should capture feedback directly from airline passengers or identify other mechanisms to capture passengers' perspectives to inform DOT's education efforts. (Recommendation 4) The Office of the Secretary should identify available short- and long-term budgetary resources for DOT's airline-passenger education efforts. (Recommendation 5) The Office of the Secretary should develop performance measures for DOT's efforts to educate airline passengers. (Recommendation 6) Agency Comments We provided a draft of this report to DOT for review and comment. DOT provided written comments, which are reprinted in appendix IV, and technical comments, which we incorporated as appropriate. DOT concurred with our recommendations and officials said that they had begun taking steps to address the recommendations. We are sending copies of this report to the appropriate congressional committees, DOT, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at 202-512-2834 or vonaha@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Appendix I: Studies on the Effect of Market Structure on Elements of Airlines' Customer Service Since the airline industry's deregulation in 1978, numerous studies have examined the effects of competition in the industry. Most have examined the link between competition and pricing on specific airline routes—i.e., airline service between two airports or cities.
These routes are viewed as the relevant markets for competitive analysis because they reflect the products that consumers purchase and for which airlines set prices. These studies have examined the pricing effect: (1) of route competition, (2) of the extent of an airline’s presence at airports, and (3) of mergers in the evolving airline industry. Studies have generally shown (1) that prices tend to be higher when fewer airlines serve a city-pair market and (2) that airline dominance at airports can be associated with higher market prices. Other studies have also shown that the presence of a low-cost airline on a route—or even the threat of entry by a low-cost airline—is associated with lower fares. In addition, some studies have examined whether there is a link between the level of competition in city-pair markets and certain elements of customer service quality, such as the incidence and length of delays, cancellations, lost baggage, flight frequency, and denied boarding. While competition generally lowers prices, the effect of competition on the quality of service is more ambiguous. On the one hand, firms may compete on quality of service; in this instance, competition leads to higher service, but it is also possible that a firm facing less competition may invest in quality of service to more fully differentiate among passengers. A variety of factors could influence the association between competition and customer service. These factors include, for example: the cost of providing higher levels of quality, the extent to which consumers have full knowledge of quality, the extent to which consumers change future purchasing decisions based on quality, and the value consumers place on product quality relative to product price. In the context of the airline industry, airline investments that underlie the provision of consumer services are not necessarily route-specific as they more likely relate to investments airlines make at airports, or at the overall airline level. For example, airlines make decisions about the extent to which resources—such as the number of aircraft and customer service personnel—are available at a given airport. Moreover, policies regarding training of gate and customer service personnel likely take place at the corporate level as do decisions about the configuration of aircraft, which may have related quality of service factors. Also, because airlines provide a service that involves a large network, some elements of quality may relate to the broad decisions regarding the management of that network. For example, if a flight is delayed on one route, it may affect the timeliness of several downstream flights due to the late arrival of the aircraft, pilots, and flight attendants, and airlines may take these networked effects into consideration in ways that could affect customer service. Still, some decisions that airlines make do have route-specific consequences that could influence customer service, such as decisions on flight scheduling, and which flights to cancel or delay in the face of operational disruptions. Some empirical airline literature on the impact of competition on certain quality factors predates several airline mergers, and some was conducted more recently. In the earlier literature, several studies found a linkage between the competitiveness of airline markets and customer service outcomes such as on-time performance, cancellations, mishandled baggage and flight frequency. 
These studies generally found that more competitive markets are associated with an improvement in one or more of these aspects of customer service. For example, one study found a small increase in the number of cancelled flights when a route was served by only one airline, and another found that such routes had, on average, slightly longer delays. However, the extent of these improvements has typically been small, such as an association with a small reduction in cancellations or a reduced average delay of just a few minutes. On the other hand, some studies found that delays and cancellations are less common when they involve airlines' hub airports—especially when a flight is destined for an airline's hub airport. In order to look more closely at the relationship between market competition and airline customer service in recent years, we reviewed several more current studies. Specifically, because the nature of the airline industry—particularly its competitive landscape—transformed after the 2007–2009 recession, we selected studies that included at least some of the study period post-recession. We identified six studies that met our criteria for inclusion, each of which examined some aspect of the link between airline market competition and one or more elements of customer service. As with the earlier studies, these more recent studies generally found greater competition was associated with some improved customer service. Specifically, some studies found that flight delays were, on average, a little longer, and flight cancellations more likely when markets were more highly concentrated or in the aftermath of an airline merger. For example, one study found that a particular level of increased route concentration was associated with about a 4-minute average increase in flight delay. Another study found a similar effect on delay and also found a slightly higher incidence of cancellations on more concentrated routes. These increases in delays and cancellations were generally small. In the case of mergers, the findings are somewhat mixed. One study we reviewed found increased cancellations and more delays after mergers, but the effects tended to diminish over time, while another study did not find an effect of mergers on these measures of customer service. Another study found that the effect of mergers on consumer welfare—as measured by both price and flight frequency—may be idiosyncratic to the specific airlines involved in the merger and the state of competition in the broader market at the time of the merger. Finally, a GAO study that examined the effect of the tarmac delay rule on flight cancellations found that flights on routes where either the originating or destination airport was a hub airport for the airline had a lower likelihood of cancellation, possibly indicating a focus by airlines on maintaining smooth operations as much as possible. Generally, the differing findings on the extent or existence of quality impacts could be the result of varied methodologies in these analyses, including differing model specifications, variable measurements, and analysis time frames. Finally, while these studies provide insight into the link between competition and certain aspects of service quality, some elements of airlines' service quality are harder to explore in this way.
For example, there are no data that would be readily usable in empirical analyses on the effect of competition on certain quality measures such as the extent to which airline websites are user-friendly, the ability to be rebooked on a different flight when a flight is missed or was cancelled, the helpfulness of airline staff, and consumer satisfaction with airline cabin amenities, such as seat comfort and availability and quality of food for sale. Moreover, while studies examine effects of competition at the route level, the national airline industry has become more concentrated in the past decade due to a series of bankruptcies and mergers. The reduced competition at this broad level may also have implications for customer service, such as the level of service provided at airports and policies on flight cancellations and rebooking. Appendix II: Objectives, Scope, and Methodology Our objectives for this report were to: (1) describe trends in DOT data on airline service from 2008 through 2017 and airlines' actions to improve service; (2) assess how effectively DOT ensures airlines' compliance with consumer protection requirements; and (3) assess the extent to which DOT's airline passenger education efforts align with key practices for consumer outreach. We also examined the relationship between airline competition and customer service (app. I). The scope of this report focused on issues regarding consumer protections for airline passengers (i.e., "consumer protections") overseen by DOT. We focused our analysis on the time period 2008 through 2017 unless otherwise noted because it encompassed key additions or amendments to consumer protection regulations, including Enhancing Airline Passenger Protections I, II, and III. For each of our objectives, we reviewed documents and data from DOT and airlines, to the extent possible. We also conducted multiple interviews with officials from DOT's Office of the Assistant General Counsel for Aviation Enforcement and Proceedings and its Aviation Consumer Protection Division, in addition to a non-generalizable sample of 25 stakeholders—including representatives from 11 airlines, 3 market research organizations, 3 aviation academics, and 8 industry associations representing airlines, airline staff, and airline passengers. To describe trends in airline service, we analyzed DOT operational data and passenger complaints submitted to DOT from 2008 through 2017. Specifically, we analyzed DOT's data on late flights; cancellations; diverted flights (i.e., flights operated from the scheduled origin point to a point other than the scheduled destination point in the airline's published schedule); voluntary and involuntary denied boardings; and mishandled baggage to describe airlines' operational performance. From 2008 through 2017, DOT required airlines with at least one percent of domestic scheduled-passenger revenues in the most recently reported 12-month period to report this data for reportable flights—we refer to these airlines as "reporting airlines" throughout our report. We also obtained data for passenger complaints submitted to DOT and analyzed the data to identify the frequency, types, and changes in complaints over time. We limited our analysis of passenger complaint data to "selected" airlines that were required to report operational data to DOT in 2017—the most recent year of available data when we started our review—because they were the 12 largest U.S. domestic passenger airlines in 2016.
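As a rough illustration of this kind of trend analysis, the short sketch below (in Python) normalizes complaint counts per 100,000 enplaned passengers and computes the change over time. The complaint and passenger counts are invented for the example rather than taken from GAO's or DOT's data, though they are chosen to land near the complaint rates cited elsewhere in this report.

```python
# Hypothetical complaint and enplanement counts, invented for illustration;
# the method (complaints per 100,000 passengers, then percent change) mirrors
# the kind of normalization described above.
complaints = {2008: 7_500, 2017: 10_300}             # complaints received by DOT
passengers = {2008: 680_000_000, 2017: 850_000_000}  # enplaned passengers

rates = {year: complaints[year] / passengers[year] * 100_000 for year in complaints}
pct_change = (rates[2017] - rates[2008]) / rates[2008] * 100

for year, rate in sorted(rates.items()):
    print(f"{year}: {rate:.2f} complaints per 100,000 passengers")
print(f"Change from 2008 to 2017: {pct_change:.1f} percent")
```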
To assess the reliability of the operational data and complaints, we conducted electronic testing of the data to identify any outliers, compared our results to DOT's published data, and interviewed DOT officials about how the data were collected and used. Because our interviews with DOT officials indicated that no changes had been made to the processes used to collect and maintain both data sources, we also relied on our past data reliability assessments from recently issued GAO reports, assessments that found that both data sources are sufficiently reliable for providing information on trends over time. Therefore, we determined that the data were sufficiently reliable for our purposes, including to present high-level trends in service over time. We also reviewed analyses from three market research organizations that we identified during the course of our work—J.D. Power and Associates, the American Customer Satisfaction Index, and the Airline Quality Rankings—to provide additional information on airline service quality. We interviewed the authors to understand how they conducted the analyses; however, we did not evaluate the underlying methodologies. We determined that the results were reliable enough to report their high-level trends on passenger satisfaction. To understand airlines' actions to enhance service, we interviewed or received written responses from 11 of 12 selected airlines. We conducted interviews with airline representatives using a semi-structured interview instrument, which included questions pertaining to business practices aimed at improving service from 2013 through 2017, among other things. We conducted three pretests with one airline and two industry groups. Representatives from each group provided technical comments, which we incorporated, as appropriate. We limited our timeframe to the most recent 5 years because business practices in the industry evolve quickly and we wanted to highlight the most relevant and recent practices. During interviews, we asked selected airline representatives whether these practices were documented in contracts of carriage or other customer commitment documents and reviewed those documents as appropriate. During these interviews, we also asked selected airline representatives whether they considered certain aspects of the passenger complaint data they receive directly from passengers to be proprietary, and all airline representatives said the data were proprietary. To inform interviews with selected airline representatives and to understand recent airline business practices aimed at improving service for passengers, we also conducted a literature search of trade publications and industry reports from 2013 through 2017. Where relevant, we used information from this literature search as additional context and as a basis for our questions to airline representatives regarding specific business practices. To describe how DOT ensures airlines' compliance with consumer protection requirements, we reviewed DOT's documentation of the policies, procedures, and guidance that describe its five key compliance activities. In addition, we conducted multiple interviews with staff from DOT's Office of the Assistant General Counsel for Aviation Enforcement and Proceedings and its Aviation Consumer Protection Division.
To identify trends in DOT’s key compliance activities from 2008 through 2017, we analyzed reports and data DOT provided on the number and results of its airline inspections, investigations, enforcement actions, and civil penalties—including data from DOT’s case management system. To assess the reliability of the data, we interviewed DOT officials to understand how the data are collected and used and the steps DOT takes to ensure the data are accurate, complete, and reliable. We determined that the data were reliable enough to summarize trends in DOT’s investigation and enforcement actions from 2008 through 2017. To determine how effectively DOT implements its compliance program, we assessed selected key compliance activities—i.e., coding passenger complaints, using the case management system to inform compliance activities, and developing objectives and related performance measures—against selected principles of Standards of Internal Control in the Federal Government related to control activities. We also summarized other leading practices for developing performance measures, in addition to our past work, which has identified other agencies with successful performance measures. To understand the extent to which passenger education materials developed by DOT align with key practices for consumer outreach, we reviewed DOT’s educational materials and assessed them against nine key practices we previously developed for consumer education planning. In that prior work, GAO convened an expert panel of 14 senior management-level experts in strategic communications to identify the key practices of a consumer education campaign. We believe the key practices the expert panel identified in 2007 remain relevant today since the practices are not time-sensitive. In addition to reviewing relevant materials, we also conducted interviews with DOT officials to understand their outreach efforts. During these interviews, DOT officials agreed that these criteria were relevant to conducting consumer outreach. For a complete list of the criteria and corresponding definitions, see appendix III. To understand the impact of airline competition on customer service provided to passengers we conducted a literature search of pertinent studies in scholarly, peer-reviewed journals, conference papers, and government publications. We restricted our review to results published between January 1, 2012, and December 31, 2017, and our search yielded 57 academic results and 10 government studies. Of these results, we reviewed each abstract to determine whether it was relevant to our objective based on criteria we established. For example, we limited results to those looking at the U.S. airline system and eliminated results that focused solely on airfares. In total, we found that 5 academic studies and 1 government study were ultimately relevant and sufficiently reliable for our report. Moreover, we also summarized 6 additional studies that we identified by reviewing the bibliographies of our selected studies or that were identified as key pieces of research in the field to summarize prior work in this area. We conducted this performance audit from September 2017 to November 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix III: Key Practices for Conducting Consumer Outreach GAO previously identified nine key practices that are important to conducting a consumer education campaign (see table 5). Appendix IV: Comments from the Department of Transportation Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Andrew Von Ah, (202) 512-2834 or vonaha@gao.gov. Staff Acknowledgments In addition to the individual named above, other key contributors to this report were Jonathan Carver, Assistant Director; Melissa Swearingen, Analyst-in-Charge; Amy Abramowitz; Lacey Coppage; Caitlin Cusati; Delwen Jones; Kelsey Kreider; Ethan Levy; Gail Marnik; SaraAnn Moessbauer; Malika Rice; Minette Richardson; Pamela Snedden; and Laurel Voloder.
Why GAO Did This Study Airlines recently came under scrutiny for their treatment of passengers—including a high-profile incident in which a passenger was forcibly removed from an overbooked flight. However, airlines maintain that service has improved, citing better on-time performance and lower airfares. DOT has the authority to issue and enforce certain consumer protection requirements. DOT also educates passengers about their rights. GAO was asked to examine airline consumer protection issues. This report examines, among other issues, (1) trends in DOT's data on airline service; (2) the effectiveness of DOT's compliance efforts; and (3) the extent to which DOT's passenger education efforts align with key practices for consumer outreach. GAO reviewed DOT data on airline service and analyzed passenger complaint data for the 12 largest domestic airlines from 2008 through 2017; reviewed relevant documents and data on DOT's compliance program; assessed DOT's educational efforts against key practices for successful consumer outreach; and interviewed DOT officials. GAO interviewed or obtained written information from 11 of the 12 airlines. What GAO Found The Department of Transportation's (DOT) data offered mixed information on whether airlines' service improved from 2008 through 2017. While DOT's operational data on rates of late flights, denied boardings, and mishandled baggage generally suggested improvement, the rate of passenger complaints received by DOT increased about 10 percent—from about 1.1 complaints per 100,000 passengers to 1.2 complaints per 100,000 passengers. DOT conducts five key activities to ensure airlines' compliance with consumer protection requirements (see table). However, GAO found that DOT lacked performance measures to help it evaluate some of these activities and that it could improve its procedures (i.e., guidance documents and training materials) that analysts use to code passenger complaints. Performance measures: DOT has established objectives for each of its five key compliance activities that state what it seeks to achieve; however, DOT lacks performance measures for three objectives. For example, DOT lacks a performance measure for conducting inspections of airlines' compliance with consumer protection requirements at airlines' headquarters and at airports. As a result, DOT is missing opportunities to capture critical information about airlines' compliance with consumer protection requirements. Procedures: DOT has procedures to help analysts code passenger complaints and identify potential consumer protection violations. GAO found that DOT's guidance for coding passenger complaints did not consistently include definitions or examples that illustrate appropriate use or help analysts select among the various complaint categories. Additional procedures would help DOT ensure that complaints are consistently coded and that potential violations are properly identified. GAO found that while DOT has taken steps to educate passengers on their rights, its efforts did not fully align with four of nine key practices GAO previously identified for conducting consumer education. For example, while DOT has defined the goals and objectives of its outreach efforts, it has not used budget information to prioritize efforts or established performance measures to assess the results. DOT has also not solicited input directly from passengers to understand what they know about their rights.
Taking such actions would provide DOT with greater assurance that its efforts are meeting passengers' needs. What GAO Recommends GAO is making six recommendations, including that DOT develop performance measures for compliance activities, improve its procedures for coding airline passengers' complaints, and improve how passenger education aligns with GAO's key practices. DOT concurred with the recommendations and provided technical comments, which GAO incorporated as appropriate.
Background Roles and Responsibilities U.S. agencies perform a wide variety of activities that contribute to export promotion, and responsibility for these activities is widely dispersed. Some of the services these agencies provide are intended, at least in part, to assist U.S. companies in entering foreign markets or expanding their presence abroad. For example, the U.S. government distributes trade-related information to exporters, conducts foreign country market research, and provides counseling to U.S. companies throughout the export process. U.S. agencies may also use diplomatic tools to advocate on behalf of U.S. companies to help ensure they can compete on a level playing field in export markets. Three of these agencies—State, Commerce, and USDA—receive appropriations that are restricted from being used to promote the sale or export of U.S. tobacco or tobacco products. These agencies promote the growth of other U.S. exports through various activities, as discussed in table 1. Funding Restrictions on Promoting Tobacco Congress has restricted funds generally appropriated for State, Commerce, and USDA from being used to promote the sale or export of U.S. tobacco and tobacco products since the 1990s. In 1990, we reported that U.S. policy and programs for assisting the export of tobacco and tobacco products worked at cross purposes to U.S. health policy and initiatives, both domestically and internationally. Congress later restricted funds generally appropriated to State, Commerce, and USDA from being used to promote the sale or export of U.S. tobacco and tobacco products. During fiscal years 1994 through 2003, Congress prohibited funds generally appropriated for USDA through annual appropriations acts from being used to promote the sale or export of tobacco or tobacco products. In fiscal year 2004, Congress permanently prohibited funds appropriated for USDA from being used to promote the sale or export of tobacco or tobacco products. According to USDA officials, USDA stopped its efforts to gather and disseminate tobacco-related production and consumption information overseas in the early 2000s. Congress restricted certain appropriated funds, including appropriations for Commerce and State, from being used to promote the sale or export of U.S. tobacco and tobacco products during fiscal years 1998 through 2017. Congress passed the Departments of Commerce, Justice, State, the Judiciary and Related Agencies Appropriations Act, 1998, which prohibited the funds provided by the act from being used to promote the sale or export of tobacco or tobacco products. This act also prohibited the funds provided by the act from being used to seek the reduction or removal of foreign country restrictions on the marketing of tobacco or tobacco products. The act provided an exception for the funds to be used to address foreign-country restrictions on tobacco marketing that are not applied equally to all tobacco or tobacco products of the same type. These restrictions have been enacted through annual appropriations acts through fiscal year 2018. In fiscal year 2018, Congress altered the restriction language on tobacco promotion in the act making appropriations for State, which, according to State, makes promotion activities permissive with respect to the use of State appropriations. U.S. Agencies Have Issued Guidance to Implement Funding Restrictions on Promoting Tobacco State collaborates with Commerce, USDA, and other agencies to develop and periodically issue an interagency guidance cable to implement funding restrictions on promoting tobacco. According to officials, this cable serves as the primary source of guidance for implementing the restrictions on promoting tobacco for their officials at all posts overseas. State Periodically Issues an Interagency Guidance Cable to Implement Funding Restrictions State collaborates with Commerce, USDA, and other agencies to develop and periodically issue an interagency guidance cable to all posts overseas to facilitate their implementation of funding restrictions on promoting tobacco. State officials draft the updated cable and Commerce, USDA, and other agency officials have the opportunity to review and comment on it before State issues it through its cable system. This cable serves as the primary source of guidance for implementing the restrictions, according to officials at these agencies (see table 2). State has updated and issued the interagency guidance cable four times since 2013 to address changes in tobacco technology and other emerging issues, according to State officials. We identified two significant changes to the cable over the past 5 years. Addition of information concerning attendance at corporate social responsibility events: In May 2013, State added a provision that post officials should consult with headquarters before attending corporate social responsibility events involving U.S. tobacco companies. State officials in headquarters acknowledged that post officials may not link some activities, such as participating in corporate social responsibility events, to the promotion or sale of products. They noted that this is why it is important to make post officials aware of the actions they should or should not take through the interagency guidance cable. 
Congress used the term “should” in the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2018 (2018 State Appropriations Act) instead of the term “shall” as in prior acts making appropriations for State. Specifically, the 2018 State Appropriations Act states that “None of the funds made available by this Act should be available to promote the sale or export of tobacco or tobacco products. . . .” In contrast, prior acts making appropriations for State stated “None of the funds made available by this Act shall be available to promote the sale or export of tobacco or tobacco products. . . .” According to State officials, they interpreted the term “shall” in prior appropriations acts as a mandatory action, whereas the use of the term “should” gives the agency more discretion in how it addresses the restrictions. However, State has not changed how it addresses the restrictions and does not plan to promote the sale or export of U.S. tobacco, according to State officials. The legislation restricting fiscal year 2018 appropriations for Commerce and USDA from being used to promote tobacco retains the mandatory “shall” language. According to Commerce and USDA officials, the change to State’s restriction language does not affect the agencies’ activities because Commerce and USDA are still subject to the mandatory restrictions outlined in their agencies’ appropriations language. 
Changes to the scope of tobacco products: In recent updates to the cable, State expanded the description of “tobacco and tobacco products” to address the emergence of new delivery systems for tobacco. Specifically, in 2014 State added the language “tobacco delivering products, such as electronic cigarettes” to provide an example of a tobacco product. In 2016, State changed the description to “electronic nicotine delivery systems such as e-cigarettes.” Then in 2018, State added “non-combustible products such as smokeless tobacco” to the description of tobacco products. In response to the revised funding restriction language in the 2018 State Appropriations Act, State modified the 2018 cable, stating that the changes make promotion activities permissive with respect to the use of State appropriations. However, State decided not to change the portion of the cable describing specific actions officials should or should not take in the version it issued in April 2018, because, according to State officials, they do not plan to promote tobacco. In addition, Commerce and USDA officials said that the change to State’s restriction language has not changed how they interpret the guidance. Commerce’s Policy on Client Eligibility Implements Funding Restrictions Commerce relies on both the interagency guidance cable and its client eligibility policy to implement restrictions on promoting tobacco. Commerce’s client eligibility policy applies to all export promotion services that Commerce provides and educates officials on how to effectively manage U.S. company requests for commercial assistance. The policy’s section on exceptions and other bases for declining services to companies states that Commerce is prohibited by law from promoting the export of tobacco or tobacco-related products. Commerce issued its updated client eligibility policy in October 2018. USDA Relies on the Interagency Guidance Cable to Implement Funding Restrictions USDA relies on the interagency guidance cable to provide direction to its officials overseas, and does not have agency-specific guidance for implementing its permanent funding restrictions on promoting tobacco. USDA officials said that the cable sufficiently addresses the funding restrictions on the agency’s promotion activities and helps to ensure that all officials serving at posts overseas conduct activities in a consistent manner. Most Post Officials Interviewed Were Aware of the Restrictions and Received Guidance but Many Did Not Receive Training Most State, Commerce, and USDA officials overseas we interviewed were aware of the restrictions on promoting tobacco. Most officials we interviewed had received some guidance concerning the restrictions, but several officials did not recall receiving the interagency guidance cable. Moreover, two of the agencies’ current training courses do not address the restrictions. Most Post Officials Were Aware of the Restrictions Officials in 21 of the 24 offices overseas we interviewed were aware of the restrictions. The three offices that were not aware of the restrictions were from State. Although these officials were not aware of the restrictions, they said they had never provided services to U.S. tobacco companies. Commerce and USDA headquarters officials said that it is widely known within their agencies that staff should not promote tobacco. 
Commerce and USDA officials said that the guidance concerning these restrictions has been consistent for many years and that staff in the field and in headquarters are very aware of the restrictions. Most Post Officials Received Some Guidance Concerning the Restrictions Most officials overseas had received some guidance concerning the restrictions on promoting tobacco. Officials in 21 of the 24 offices overseas we interviewed had received written or verbal guidance concerning the restrictions on promoting tobacco at some point in their careers. For example, officials in 15 offices mentioned receiving the State-issued interagency guidance cable when we asked them what type of tobacco-related guidance they had received. In addition, officials in four of the eight Commerce offices recalled receiving agency-specific guidance. Some officials said that their supervisors had informed them they are not allowed to promote tobacco exports. Some officials did not recall receiving the interagency guidance cable, which agency officials said serves as the primary source of guidance for implementing the restrictions, and some were not aware that State periodically issues the cable. For example, one USDA official stated that he could not recall the last time he received guidance and noted that cables can easily be overlooked. He recommended that USDA improve its efforts to distribute the cable and have supervisors maintain an annual checklist to ensure staff have read and understand it or incorporate it into annual training. A State official told us that he was in Washington, D.C., when State issued the prior cable and that he did not learn about it until he had been stationed at his next overseas post for several months. A Commerce official noted that some officials new to post may not receive the interagency guidance cable for several months. All officials working overseas can access the interagency guidance cable through the State cable database or access other resources if a tobacco-related issue arises. For example, the Commerce client eligibility policy and the interagency guidance cable are available on an internal Commerce website. USDA officials in headquarters stated that they do not remind officials overseas about the restrictions or available guidance, but that, in response to our audit work, they plan to send an annual reminder. Finally, many post officials we interviewed said that they are aware of the activities their colleagues are undertaking and would have the opportunity to educate their colleagues before they provided any services to a tobacco company. Many Post Officials Did Not Receive Training Concerning the Restrictions Officials in 15 of the 24 offices overseas we interviewed said they did not receive any training concerning restrictions on promoting tobacco. In the past, State, Commerce, and USDA did not include information about the funding restrictions or related guidance in training materials. State and USDA officials in headquarters confirmed that training materials for officials conducting export promotion activities overseas do not address funding restrictions on promoting tobacco. According to an official at State’s Foreign Service Institute, tobacco products may be discussed in a trade-related course when describing those products officials should not advocate for, or in the 6-month economic studies course when examining the nexus between trade issues and public policy. However, State could not provide documentation of where this is specifically addressed in its curriculum. 
A USDA official stated that none of the Foreign Agricultural Service training courses explicitly discuss restrictions on promoting tobacco. According to Commerce officials, the training for new trade specialists did not include information about the restrictions on promoting tobacco when Commerce last provided the training in 2014. However, in response to our audit work, Commerce added this information to its training materials for new trade specialists in September 2018. Officials who do not receive training on the restrictions early in their careers may not be aware that they are prohibited from promoting tobacco. For example, one Commerce official told us he did not know about the restrictions while serving at his first post when he attended a meeting that involved representatives from the tobacco industry. He noted that he now questions whether he would have attended the meeting if he had known about the restrictions. Federal internal control standards state that appropriate training, aimed at developing employee knowledge, skills, and abilities, is essential to an organization’s operational success. If agencies do not explicitly include information about the restrictions and related guidance in training materials for officials conducting export promotion activities overseas, officials may work at a post for several months, or longer, before learning about the restrictions. Post Officials Have Implemented Restrictions on Promoting Tobacco but Guidance Lacks Clarity The State, Commerce, and USDA officials we interviewed said they have implemented the funding restrictions on tobacco as outlined in the interagency guidance cable issued by State. For example, post officials said they have not promoted the sale or export of tobacco or tobacco products or attended events solely sponsored by tobacco companies, though many officials said they attended events at which officials from tobacco companies were present. Post officials identified three areas of the guidance that may benefit from additional clarification, according to interviews with agency officials and our review of agency emails. Post Officials Have Implemented Funding Restrictions on Promoting Tobacco Our interviews with State, Commerce, and USDA officials in 24 offices in nine countries and our review of agency documents showed that posts have implemented the interagency guidance outlining actions they should not take (see table 3). Some Sections of the Interagency Guidance Cable Lack Clarity Post officials identified three areas of the guidance that may benefit from additional clarification, according to our interviews with agency officials and our review of agency emails: attendance at events, the types of permitted services, and the description of tobacco products. Officials Questioned When It Is Permissible to Attend Certain Events Officials from all three agencies raised questions about whether and when it is permissible to attend events at which tobacco company representatives are present. The guidance does not specifically address attendance at events also attended by representatives of tobacco companies. State headquarters officials said the vast majority of questions received from posts concern whether personnel at a post may participate in an event when representatives from a company engaged in the tobacco industry are also expected to participate in that event. We also reviewed emails in which Commerce officials asked for additional guidance about attending events or meetings with tobacco companies. 
For example, one post official asked whether the embassy could invite a tobacco company to participate in an embassy-organized trade mission that would include meetings with the local governor and mayor. In this case, Commerce headquarters officials advised that the tobacco company’s participation could be construed as U.S. government support for the company’s commercial activities and recommended against including the tobacco company. A USDA official in headquarters also noted that attending events could, in some cases, be construed as supporting tobacco companies, and said that this is an area where staff could use more guidance. Representatives from several tobacco control organizations expressed concern that interactions between U.S. government officials and representatives from tobacco companies at events organized by business associations created a perception that the U.S. government supported tobacco company sales in the country. For example, in 2017 a business association hosted a trade mission to one Southeast Asian country that included representatives from 30 U.S. companies, including a U.S. tobacco company. In response, two tobacco control organizations wrote to the U.S. ambassador in that country voicing their concern that U.S. government officials’ attendance at meetings that included the tobacco company representatives violated the spirit of the interagency guidance cable and gave the appearance that the U.S. government supports the tobacco company. Subsequently, the Deputy Chief of Mission distributed guidance specific to that post stating that officials were not allowed to attend a trade mission’s events or meetings if representatives from a tobacco company were scheduled to give a presentation. Several post officials said that attending events organized by business associations is a key function of their jobs. They attend these events to, among other things, exchange information about the local business climate and learn about the concerns of American companies. Officials Questioned the Types of Services They Can Provide Commerce and USDA officials identified ambiguities in the guidance concerning the types of services they are allowed to provide to tobacco companies or the tobacco industry. In 14 of the 21 Commerce emails we reviewed, officials at posts asked for additional guidance about the types of services they are permitted to provide to tobacco companies or the types of companies or products they can support. For example, some post officials asked whether they could engage with the host country government to obtain information about pending tobacco-related legislation at the request of a tobacco company. In one case, Commerce headquarters advised post officials that the restrictions did not prohibit them from raising concerns about a legislative proposal that would discriminate against foreign tobacco companies. They further noted that because of the sensitive nature of tobacco-related issues, any policy decision to engage should be weighed carefully. Commerce’s client eligibility policy does not provide a description of the types of actions Commerce officials should and should not take with regard to tobacco companies and products. The interagency guidance cable also does not provide information about some types of services, such as whether officials should engage with host country government officials to learn about pending tobacco-related legislation. 
According to a USDA official, some officials overseas interpret “promotional” activities differently and do not agree on whether both marketing and trade-related activities, such as enforcing trade agreements, are promotional activities. Officials Questioned the Description of Tobacco Products In 3 of the 21 emails we reviewed, Commerce officials at post asked for additional guidance about whether they could provide export promotion services to companies exporting certain tobacco-related products. For example, some Commerce officials asked whether they could provide services to companies selling component parts for electronic nicotine delivery systems, such as e-liquids. Commerce’s prior client eligibility policy, issued in May 2017, did not include a list of tobacco products covered by the policy, whereas the interagency guidance cable issued in 2014 states that tobacco products include tobacco delivery systems, such as electronic cigarettes, and the updated version issued in 2018 added non-combustible products, such as smokeless tobacco, to this description. However, neither the interagency guidance cable nor Commerce’s updated client eligibility policy specifically states whether the description includes component parts for electronic cigarettes and other tobacco products. GAO previously reported that electronic cigarettes include a wide range of products that share the same basic design and generally consist of three main parts: a power source, a heating element, and a cartridge or tank containing liquid solution, which is often sold separately. According to State officials in headquarters, the guidance on promoting tobacco was written for a broad audience and to make post officials mindful of the restrictions. They said they trust that officials overseas will use their professional judgment and in-country expertise to determine if a post’s support for an event or a company will be construed as promotion of a tobacco product. Moreover, State and Commerce officials said that they expect officials overseas to ask headquarters questions to clarify the interagency guidance cable. While federal standards for internal control state that management should clearly document internal controls in policies and guidance to prevent officials from failing to achieve an objective or address a risk, we found that the interagency guidance does not provide examples of the factors post officials should consider when attending business association events. The guidance also lacks sufficient examples of the types of services officials are allowed to provide to tobacco companies and a clear description of tobacco products. More specific guidance would help ensure that State, Commerce, and USDA officials consistently implement their agency-specific funding restrictions on promoting tobacco exports. Conclusions The United States exported over $2 billion in tobacco and tobacco-related products in 2017. Congress has enacted restrictions on the use of certain appropriated funds to promote the sale or export of U.S. tobacco or tobacco products since the 1990s, and State, Commerce, and USDA have developed and updated guidance to implement these restrictions. However, not all officials were aware of the restrictions, and more than half had not received training about the restrictions. Including information about the restrictions in training materials would help make officials aware of the restrictions early in their careers and prompt them to seek guidance if a tobacco-related issue arises. 
If officials conducting export promotion activities are unaware of the funding restrictions on promoting tobacco sales and exports, they may also be unaware of the activities they should and should not undertake. Moreover, some officials said that the guidance is unclear in some areas. Although officials said they need to attend business association events to support all U.S. companies conducting business in a country, they were unsure whether they can attend events where representatives from U.S. tobacco companies may be present. In addition, some officials indicated that the current guidance lacks clarity on the types of services officials are allowed to provide to tobacco interests and what constitutes a tobacco product. Although we did not identify any instances in which a State, Commerce, or USDA official directly promoted U.S. tobacco products, clearer guidance would help to ensure that officials will consistently implement their agency-specific funding restrictions. Recommendations for Executive Action We are making three recommendations: two to State and one to USDA. Specifically: The Secretary of State should work with the Foreign Service Institute to include information about the funding restrictions and relevant guidance on promoting the sale or export of tobacco or tobacco products in its training materials for employees conducting export promotion activities overseas. (Recommendation 1) The Secretary of Agriculture should include information about the funding restrictions and relevant guidance on promoting the sale or export of tobacco or tobacco products in training materials for employees conducting export promotion activities overseas. (Recommendation 2) The Secretary of State, in consultation with the Secretary of Commerce and the Secretary of Agriculture, should assess the interagency guidance cable on promoting tobacco in light of questions raised by officials at posts overseas and update it to address ambiguities, as needed. (Recommendation 3) Agency Comments and Our Evaluation We provided a draft of this report to State, Commerce, USDA, and USTR for review and comment. In its comments, reproduced in appendix III, State concurred with our recommendations and described planned actions to address them. USDA concurred with the recommendation and told us that it had no comments on the draft report. Commerce and USTR told us that they had no comments on the draft report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of the Department of State, the Secretary of the Department of Commerce, the Secretary of the U.S. Department of Agriculture, the U.S. Trade Representative, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology This report examines (1) the guidance select U.S. agencies have issued to implement funding restrictions on promoting tobacco exports overseas, (2) to what extent overseas officials from select U.S. 
agencies were aware of the restrictions and guidance, and (3) to what extent select U.S. agencies have implemented this guidance overseas. To address our first objective, we reviewed U.S. appropriations laws that prohibited the funds appropriated therein from being used to promote the sale or export of tobacco or tobacco products. We also reviewed guidance issued by the Departments of State (State) and Commerce (Commerce) concerning the promotion of tobacco exports overseas. We also interviewed officials in headquarters from State, Commerce, the U.S. Department of Agriculture (USDA), and the Office of the U.S. Trade Representative (USTR) about the funding restrictions on promoting tobacco exports overseas and the development and revision of guidance on tobacco promotion. To address our second objective, we interviewed officials in headquarters from State, Commerce, and USDA about any training officials posted overseas receive concerning the funding restrictions on promoting tobacco exports. In addition, we held structured interviews with 35 State, Commerce, and USDA officials overseas conducting export promotion activities and reached out to an additional 10 officials to ask about activities associated with the solicitation of gifts and attendance at corporate social responsibility events. These officials were located across 11 posts in 9 countries. We interviewed officials in Cambodia, Croatia, Dominican Republic, Honduras, Indonesia, Philippines, South Africa, Thailand, and Vietnam. Because multiple officials from one agency attended a meeting in some cases, we are reporting their combined responses as one “office” response. Thus, we are reporting the results from the 24 offices we interviewed—9 State, 8 Commerce, and 7 USDA offices. We selected this non-generalizable sample of countries based on criteria that included the countries’ large or increasing amounts of U.S. tobacco imports, relatively high tobacco smoking prevalence rates, and geographic dispersion. The information obtained from these interviews is not generalizable and does not reflect the experiences of all State, Commerce, and USDA officials serving at posts overseas, but it does provide insights into officials’ experiences at post and illustrative examples across our sample on the topics discussed. To address our third objective, we interviewed officials in headquarters from State, Commerce, and USDA about post officials’ implementation of guidance regarding the promotion of tobacco exports, the types of questions they receive from post officials about the funding restrictions and guidance, and the additional advice they provide to post officials overseas. We asked post officials about the clarity of guidance, whether they attended events sponsored or attended by representatives of U.S. tobacco companies, and whether they discussed tobacco-related issues with host country government officials during our structured interviews with the 24 State, Commerce, and USDA offices overseas. We also analyzed a Commerce database, agency emails, and State cables and conducted a literature search. Commerce documents all the fee-based services it provides to companies in a database. We obtained a list of approximately 30,000 fee-based services Commerce provided in fiscal years 2013 through 2017, which included the names of the companies to which Commerce provided these services. We then downloaded a list of 763 U.S. 
tobacco companies from Nexus using criteria such as industry classification codes related to tobacco and tobacco products and the location of company headquarters. We limited the list of U.S. tobacco companies to those with revenues greater than $5 million. We then compared the two lists to determine if Commerce provided any fee-based services to U.S. tobacco companies. To assess the reliability of the Commerce fee-based services data, we reviewed relevant documentation and interviewed knowledgeable officials about system controls. We determined that Commerce’s fee-based services data were sufficiently reliable for the purposes of our reporting objectives. In addition, we requested State, Commerce, and USDA email communications concerning tobacco-related issues sent between January 2015 and February 2018 from post officials to headquarters. State was able to provide only one such email. USDA provided several emails, but the emails were not from USDA post officials to USDA officials in headquarters. Commerce provided us 21 emails that matched our request and an additional 20 emails from officials working throughout the United States. We analyzed the Commerce email communications to identify commonly asked questions or concerns about the existing guidance and actions the agencies should take to support U.S. tobacco companies or the tobacco industry. We also requested State cables from the eight countries in our sample sent between January 2013 and December 2017 that referenced at least 1 of the 10 U.S. tobacco companies with the highest revenues. We received and reviewed cables from six of these countries. We also conducted a literature search to identify instances in which U.S. government officials may have conducted activities addressed by the interagency tobacco guidance cable. To identify relevant articles, such as trade or industry articles, we searched various databases, including ProQuest and Nexus. From these sources, we identified one article relevant to our research objective. We performed these searches in December 2017 and searched for articles published from January 2013 to December 2017. We also interviewed representatives of the tobacco control community and business associations to obtain their perspectives concerning U.S. government support for tobacco exports and U.S. government interactions with U.S. tobacco companies. Specifically, we interviewed representatives of the World Health Organization (WHO), four global or regional tobacco control nongovernmental organizations, and several local nongovernmental organizations in two countries in our scope. In addition, we interviewed officials from the local American Chamber of Commerce and the U.S.-Association of Southeast Asian Nations Business Council in two countries. The information obtained from these interviews is not generalizable and does not reflect the experiences of all tobacco control organizations or business associations, but it does provide insights into these officials’ experiences. We conducted this performance audit from November 2017 to December 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Trends in U.S. 
Tobacco Exports The United States exported over $2.1 billion in tobacco and tobacco products in 2017. Figure 1 shows how tobacco exports fluctuated from 2007 to 2017. Specifically, total tobacco exports have ranged from a high of approximately $2.4 billion in 2007 to a low of about $1.7 billion in 2012. U.S. tobacco exports to Asia have decreased by 68 percent over the past 11 years, whereas exports to North America have increased 10-fold (see fig. 2). Most of the decrease in exports to Asia is attributable to reduced exports to Japan, which fell 95 percent from 2007 to 2017. Most of the increase in exports to North America is attributable to Canada, which accounted for approximately 40 percent of total U.S. tobacco exports in 2017. Appendix III: Comments from the Department of State Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact David B. Gootnick, (202) 512-3149 or gootnickd@gao.gov. Staff Acknowledgments In addition to the contact named above, Cheryl Goodman (Assistant Director), Celia Thomas (Assistant Director), Amanda Bartine, Leah DeWolf, Jewel Conrad, Aldo Salerno, and Neil Doherty made key contributions to this report. Grace Lui, Justin Fisher, and Ming Chen provided technical assistance.
Why GAO Did This Study The World Health Organization estimates that tobacco use kills over 7 million people each year, more than tuberculosis, HIV/AIDS, and malaria combined. Since the 1990s, Congress has enacted restrictions on the use of certain appropriated funds to promote U.S. tobacco exports. GAO was asked to review the implementation of these restrictions. This report examines (1) guidance select U.S. agencies have issued to implement these restrictions, (2) whether overseas officials from select U.S. agencies were aware of the restrictions and guidance, and (3) select U.S. agencies' implementation of the guidance overseas. GAO reviewed U.S. laws, agency guidance, and internal communications; analyzed Commerce data; and interviewed agency officials in Washington, D.C., and in 24 offices across 11 overseas posts in 9 countries. GAO selected these countries based on criteria that included U.S. tobacco export totals, smoking rates, and geographic dispersion. What GAO Found Congress has restricted certain appropriated funds from being used to promote tobacco exports, and the Departments of State (State), Commerce (Commerce), and Agriculture (USDA) have issued interagency guidance through the cable system that they rely on to implement these restrictions. State collaborates with these and other agencies to periodically update this cable. The cable informs officials about the types of actions they should take—such as providing routine business facilitation services to all U.S. companies—and the types of actions they should not take—such as attending events sponsored by tobacco companies. Most, but not all, officials overseas that GAO interviewed were aware of the restrictions and received some guidance concerning the restrictions. However, GAO found that some officials did not recall receiving the interagency guidance cable. In addition, State and USDA's current training materials do not address the restrictions. Federal internal control standards state that appropriate training is essential to an organization's operational success. Thus, providing officials overseas with training about the funding restrictions and related guidance would help to ensure that officials are aware of the restrictions. U.S. officials overseas have implemented restrictions on promoting tobacco, but some officials said that the interagency guidance lacks clarity. Officials said that they have not promoted tobacco by, for example, attending events sponsored solely by tobacco companies. However, officials identified three areas of the guidance that are unclear: (1) attendance at events not sponsored by U.S. tobacco companies but attended by representatives of these companies; (2) the types of services officials can provide to tobacco companies; and (3) the description of tobacco products, such as whether component parts for electronic cigarettes are included. Federal standards for internal control state that management should clearly document internal controls in policies and guidance to prevent officials from failing to achieve an objective or address a risk. By providing more specific guidance, the agencies would help ensure that officials consistently implement the funding restrictions on promoting tobacco. 
What GAO Recommends GAO recommends that (1) State and USDA include information about the funding restrictions and guidance in training materials for relevant employees, and (2) State, in consultation with Commerce and USDA, assess the interagency guidance cable on promoting tobacco in light of questions raised by officials at posts overseas and update it as needed. State and USDA concurred with the recommendations.
Background Electronic health records that are interoperable and contain all relevant patient information are crucial for optimizing the health care provided to patients. Historically, patient health information has been scattered across paper records kept by different caregivers in many different locations, making it difficult for a clinician to access all of a patient’s health information at the time of care. Lacking access to these critical data, a clinician may be challenged in making the most informed decisions on treatment options, potentially putting the patient’s health at risk. Thus, the move toward collecting, storing, retrieving, and transferring these records electronically can significantly improve the quality and efficiency of care. This is especially true in the case of military personnel and veterans, such as those in the Coast Guard, because they tend to be highly mobile and may have health records at multiple facilities both within and outside the United States. Therefore, EHRs that are interoperable among health care systems of providers such as the Coast Guard, the Department of Defense (DOD), and the Department of Veterans Affairs (VA) are key to improving the care these patients receive. In April 2004, the President called for widespread adoption of interoperable EHRs by 2014. Similarly, in August 2006, the President instructed agencies, as they implemented, acquired, or upgraded health information technology (IT) systems, to utilize systems and products that met recognized interoperability standards. For nearly two decades, both DOD and VA have been working to implement interoperable health care systems, although with little success. The Coast Guard Has Historically Relied on EHRs and Related Systems to Support Health Care Efforts The Coast Guard’s HSWL Directorate is responsible for ensuring the readiness and health of nearly 50,000 members throughout the United States. In this regard, the Office of Health Services within HSWL is charged with providing health care to Coast Guard members, other military active duty and reserve members, retired personnel, and eligible family members. The Coast Guard’s health care services are supported by 41 U.S.-based health clinics and 125 sick bays. In an effort to meet the need for interoperable EHRs, in 2002, the Coast Guard implemented DOD’s Composite Health Care System (CHCS) at its clinics and sick bays. According to the Coast Guard’s medical manual, the clinics and sick bays used CHCS for various health care-related activities, including scheduling patient appointments; documenting patient consults and referrals; storing prescriptions; tracking and controlling prescribed medications; and tracking laboratory orders. CHCS interfaced with the DOD Defense Eligibility Enrollment Reporting System, which provided verification of the identity and benefit eligibility of Coast Guard members; other military active duty, reserve, and retired personnel; and their eligible family members. CHCS also interfaced with other health care-related systems, such as a DOD prescription repository, a patient lab delivery system used by health care providers, a system that provided eyewear-related services, and the military’s health insurance provider’s system. To provide a more user-friendly way of accessing CHCS, the Coast Guard implemented DOD’s Provider Graphical User Interface (PGUI) in 2004. This interface also provided clinics and sick bays with additional system functionality, such as the ability to create and store medical notes electronically. 
According to HSWL staff, although CHCS and PGUI provided the Coast Guard with a way to manage health records electronically, these systems were outdated and lacked key functionality such as billing, scheduling, and case management. Therefore, the Coast Guard intended to transition from CHCS and PGUI to DOD’s more modernized Armed Forces Health Longitudinal Technology Application (AHLTA) in 2009 to achieve interoperability with DOD and VA and comply with executive orders and statutes that called for efficient health care initiatives. However, HSWL staff stated that the cost of adopting and maintaining AHLTA, as well as the need for the Coast Guard to meet its unique mission requirements, led the agency to move forward with implementing a new system of its own in 2010. The new system was intended to be interoperable with both DOD’s and VA’s health information systems. Toward this end, on September 30, 2010, the Coast Guard awarded a 5-year, $14 million contract to acquire a commercial off-the-shelf (COTS) EHR system. According to the Coast Guard’s EHR business case, the system was to provide ambulatory services, including online management of patient health records; patient scheduling and billing services; dental and radiology modules; management of prescribed medications and tracking laboratory orders, among other capabilities. However, while working to implement the COTS EHR system, HSWL staff determined that many other Coast Guard health care-related IT systems were outdated and also needed modernization. As a result, the HSWL Directorate began an effort to expand the original EHR modernization effort to integrate these other necessary and outdated services. This expanded project was called IHiS. According to the HSWL Directorate, IHiS was to provide additional services such as work-life and safety data management, work-life case management, wireless access, and an integrated patient portal that was intended to allow patients to access their medical records at any time. The project consisted of various contracts with 25 different vendors and was estimated to cost approximately $56 million to implement, which included the original $14 million COTS EHR contract. HSWL staff stated that, at the time that the IHiS project was being planned and designed, the Department of State was also planning to develop an EHR system. In order to reduce the overall cost to both parties, in 2012, the Department of State signed an interagency agreement with the Coast Guard to utilize IHiS for that department’s personnel. The system was to be implemented in phases with beta testing at two to three selected Coast Guard clinics in October 2015, and then subsequent implementation at the other clinics, sick bays, and Department of State locations. However, on October 19, 2015, the Coast Guard decided to terminate the IHiS project and decommissioned PGUI in 2015 and CHCS in 2016. The Coast Guard Attributed IHiS Termination to Financial and Other Risks, after Spending Approximately $60 Million on the Project According to the Director of HSWL, who was appointed to the position in August 2015, financial, technical, schedule, and personnel risks led the Coast Guard’s EOC to decide to terminate the IHiS project. Specifically, the Director of HSWL provided us a written summary of information on the IHiS project risks that she said she had verbally communicated to the EOC during meetings on September 24, 2015, and October 6, 2015. 
The financial risks that the Director presented were based on internal investigations initiated in January 2015 and May 2015 to determine whether the HSWL Directorate had violated the Antideficiency Act by using incorrect funding sources and incorrect fiscal year funds for the IHiS project. In this regard, the Coast Guard ordered project management and contractor staff to cease work on IHiS until a determination was made regarding the potential Antideficiency Act violation. In addition, the Director stated that she relayed technical risks to the EOC. These risks were identified in an e-mail in late August 2015 by Coast Guard project management staff who participated in the design and development efforts for IHiS. The Director and the related e-mail identified the following technical risks: Lack of testing. IHiS lacked an independent security assessment to verify that the system’s security infrastructure was adequate. In addition, full interface testing with systems such as the Defense Eligibility Enrollment Reporting System had yet to be completed to ensure security and data integrity. Limited system functionality. The system that was to provide user verification and IHiS role management services was not yet complete. In addition, Coast Guard workstations could not yet access IHiS from the network and the patient portal lacked two-factor authentication. Further, the service that was to register new IHiS users in the system had yet to be completed. The Director also presented schedule and personnel risks to the EOC: Delays in the implementation timeline. The Director stated that between August 2015 and September 2015, she requested that DOD’s Defense Health Agency Solution Delivery IT team independently validate the IHiS timelines and the status of the project. The Director said she requested this review because of the technical risks identified in the August 2015 e-mail and concerns as to whether IHiS would be ready to be piloted at the first clinic in the fall of 2015. According to the Director, the Defense Health Agency team projected the timeline for the first clinic implementation to be approximately 1 year later than originally estimated. The Director added that Defense Health Agency team members stated that the timeline was delayed, in part, because critical IHiS interfaces and workflows were not complete or operational. The Director told us that these estimates were provided verbally by the Defense Health Agency team and that the team did not provide the Coast Guard any written documentation outlining its findings. Changes in project management staff. Although HSWL staff had been managing the IHiS project since it was initiated in 2010, C4&IT was directed to assume the oversight responsibilities for IHiS implementation in May 2015 due to concerns about the project’s adherence to established governance processes raised by the internal investigators looking into the potential Antideficiency Act violations. By August 2015, the key project management personnel who had overseen the project since 2010 had been removed. According to C4&IT staff, IHiS was canceled during the transition of project managers. As a result of the changes in staff, one vendor noted that it was unclear who the stakeholders, responsible parties, and decision makers were. According to the Director, these risk factors had demonstrated that the project was far from ready for deployment and that continuing IHiS could cause significant stewardship and reputational harm to the Coast Guard. 
As a result of the risks presented by the Director, the EOC members made the decision to cancel IHiS and did not consider any alternatives to its cancelation. Subsequent to the project’s cancelation, the Deputy Commandant for Mission Support conducted an analysis of the amount of money that had been obligated for and spent on the project. According to the analysis, which included obligations and expenditures from September 2010 to August 2017, the Coast Guard had obligated approximately $67 million and, of that amount, had spent approximately $59.9 million on the IHiS project at the time of its cancelation. Further, according to Office of Budget and Programs staff members, no equipment or software from the IHiS project could be reused for future efforts. In addition, according to senior staff within the Acquisition Directorate, the Coast Guard continued to pay millions of dollars to vendors over 2 years after the project’s cancelation to satisfy existing contractual obligations. For example, according to staff within the Acquisition Directorate: $102,993 was paid in November 2017 to one vendor for leased equipment that was damaged or missing, as part of closing out the contract. $460,352 was paid in November 2017 to an equipment vendor because the Coast Guard was obligated to do so after it had exercised the contract option period just prior to canceling IHiS. Approximately $872,000 was paid to various vendors by November 2017 as part of closing out other contractual obligations for items such as software licensing and support and a data storage center. Approximately $2.4 million is to be paid to one vendor by February 2018 for software and licensing products. Approximately $2.8 million is to be paid by February 2018 for removal and shipment of equipment. However, the amount spent on the project is likely underestimated because the Coast Guard’s analysis of spending did not include labor costs for the agency’s personnel (civilian or military) who spent approximately 5 years managing, overseeing, and providing subject matter expertise on the project. It also did not include any travel costs incurred by these personnel. The Coast Guard Could Not Demonstrate Effective Project Management, Lacked Governance Mechanisms, and Did Not Document Lessons Learned for the IHiS Project The Coast Guard could not demonstrate that it effectively managed and oversaw the IHiS project prior to its discontinuance. Specifically, although the Coast Guard was to follow the SDLC Practice Manual to guide its management and oversight of the project, the agency could not provide complete evidence that it had addressed 15 of the 30 SDLC practices we selected for evaluation. In addition, project team members provided inconsistent explanations regarding whether documentation existed to demonstrate the actions taken to manage and oversee the IHiS project. Further, although the Coast Guard developed charters for various governance boards to provide project oversight and direction, the boards were not active and the Chief Information Officer (CIO) was not included as a member of the boards, further contributing to a lack of key governance mechanisms for IHiS. Finally, the Coast Guard did not document and share lessons learned from the failed project to help prevent similar outcomes for future IT projects. 
The Coast Guard Could Not Demonstrate That Selected Project Management Practices Were Addressed In an effort to institute disciplined, repeatable practices for IT development and acquisition, the Coast Guard developed the SDLC Practice Manual, which establishes the seven-phase methodology for developing systems overseen by the Coast Guard’s Assistant Commandant for C4&IT, such as IHiS. The practice manual is intended to guide project management teams through a progression of activities for managing and overseeing IT projects from conceptual planning to disposition. (Appendix II provides a discussion of each SDLC phase included in the practice manual and the 30 selected practices that we evaluated.) Although IHiS was to adhere to the SDLC practices established in the manual, the Coast Guard could not demonstrate that the staff providing day-to-day management of the project had always done so. Specifically, of the 30 selected project management practices that we evaluated for the initial four SDLC phases of IHiS—Conceptual Planning, Planning and Requirements, Design, and Development and Testing—Coast Guard officials provided documentation that the project management team fully addressed 15 practices and partially addressed 5 practices. The agency could not provide documentation that the project team had addressed 10 other practices. Table 1 provides a complete listing of the SDLC project management practices that we selected for evaluation and the extent to which the Coast Guard could demonstrate that it completed each practice. Conceptual Planning Phase For this phase, the Coast Guard demonstrated that steps had been taken to address five of the seven selected project management practices for IHiS. Specifically, it assigned project management roles, such as the project manager, asset manager, and the system’s sponsor. The agency also documented the initial IHiS business case and acquisition strategy, as well as the designation memorandum that identified IHiS as a C4&IT system. However, the Coast Guard could not demonstrate that the project management team had validated the project’s alignment with the agency’s enterprise architecture and that the project had received the required phase exit approval. As a result, the Coast Guard could not provide evidence that the necessary steps were taken to ensure that the project would align with the agency’s business objectives and that project management staff had received approval to proceed to the next SDLC phase. Planning and Requirements Phase For this phase, the Coast Guard demonstrated that 8 of the 11 selected project management practices were performed for the IHiS project. Specifically, the agency provided evidence that it had completed the tailoring plan that detailed the SDLC processes that would be required throughout the IHiS system’s lifecycle, developed an initial risk management plan that included a list of vulnerabilities and the measures to overcome or lessen them, and conducted a cost-benefit analysis. The Coast Guard also documented functional requirements; reviewed external mandates, such as those mentioned earlier; created an initial training plan; and designated the system development and system support agents. Finally, the Acting CIO approved the project to move to the next phase and stated in a memorandum that the project had met all the requirements of the planning and requirements phase. However, the Coast Guard could not demonstrate that it had fully completed all of the requirements of this phase. 
For example, the Coast Guard provided documentation that partially met the requirement to develop a project management plan. Specifically, the agency created a project management plan that included certain required elements, such as a project description, work breakdown structure, and a life cycle cost estimate. However, it did not complete other required elements. Although the Coast Guard developed a project schedule for IHiS, the schedule was not well constructed and therefore was not reliable; for example, it allowed many activities to slip a significant number of days before affecting the dates of key events. Further, the Coast Guard could not demonstrate that it had created a communication plan—another element of the project management plan—that is essential to identifying how system development progress is to be communicated across the project management team. The Coast Guard also could not demonstrate that two other selected practices were addressed. Specifically, the agency could not provide an integrated logistics support plan that is intended to document processes for ensuring IHiS data management and records management, among other things. In addition, the Coast Guard could not demonstrate that it had developed an information assurance plan that is intended to articulate the information security controls required to ensure the availability, integrity, authentication, and confidentiality of the patient health information that was to be stored in IHiS. As a result, the Coast Guard could not demonstrate that it had performed key steps to construct a reliable schedule for IHiS, plan for how the project's progress was to be communicated to key stakeholders, ensure appropriate data and records management for information stored in IHiS, and plan for the controls necessary to secure patient health information. Design Phase The Coast Guard demonstrated that actions had been taken to partially address three of the eight selected project management practices for the design phase. In this regard, the agency partially addressed the requirement to develop a detailed system design. Specifically, the system design documentation included a description of the operating system, external and internal system interfaces, inputs and outputs of each subsystem, administrative components that are intended to connect systems, and system security requirements. However, the system design documentation did not include information on the system architecture components, system timing and sizing, and system auditing requirements. The documentation also did not address all IHiS functional requirements as required by the SDLC. The Coast Guard also partially addressed the requirement to develop an operational analysis plan. For example, the plan included performance and operating measures related to availability, maintainability, and training. It also included support measures related to system utilization, incident management, and problem management. However, the plan did not include mission-related performance measures; operating measures related to reliability, user satisfaction, and effectiveness of technology; or other system support measures related to change management. In addition, the agency partially addressed the requirement to create the test and evaluation master plan.
Specifically, the test and evaluation master plan included required elements, such as the scope, content, methodology, and sequence of testing, as well as the management of and responsibilities related to testing activities. However, the plan did not define activities for integration and security testing, both of which are intended to validate that the integrated system components function properly. The Coast Guard could not demonstrate that five other selected practices were addressed for the IHiS project. In this regard, it could not demonstrate that the project team had: held review sessions with the user community to ensure that the requirements and the design were consistent with the new or enhanced business requirements; developed contingency and disaster recovery plans to document the steps necessary to continue IHiS operations in the event of a disruption; completed the privacy impact analysis to describe what information was to be collected by IHiS, why the information was being collected, the intended use of the information, and how the information was to be secured, among other things; tested the system design to ensure that it would meet requirements and support business processes; and obtained exit approval for the design phase to demonstrate that all requirements of the phase were met. As a result, no evidence was provided that the Coast Guard performed all of the required steps to translate detailed system requirements into the system design and develop plans for life cycle support, such as those that address contingencies, disaster recovery, and testing for IHiS. Development and Testing Phase The Coast Guard demonstrated that actions had been taken to address two of the four selected practices and partially addressed one practice for the development and testing phase. For example, the agency developed the IHiS implementation plan that specified key activities, such as system training and monitoring, and included a schedule of activities that were to be accomplished during implementation. In addition, the Coast Guard created a diagram of the IHiS system layout as part of its effort to address one practice—to develop system documentation. However, it could not demonstrate that other required system documentation, such as system and user manuals that specify how to use and operate the system, had been created. Further, the Coast Guard could not demonstrate that it had conducted IHiS system testing, although the agency granted an authority to operate (ATO) and indicated in the ATO memorandum that the system had undergone some form of testing. The Coast Guard's SDLC specifies that system testing is to take place prior to the issuance of an ATO. However, according to a memorandum signed by the IHiS authorizing official, a short-term ATO was granted for the system on March 30, 2015, in an attempt to ensure there would be a functioning replacement system in place prior to the decommissioning of CHCS. Nevertheless, the Coast Guard could not provide complete evidence that it took the necessary steps intended to ensure that the system would function as expected, such as conducting system testing. Relevant Documentation Was Often Not Available Over the course of our review, Coast Guard project team members provided inconsistent explanations regarding the availability of documentation to support the project management activities for IHiS.
For example, with regard to the SDLC practices that we identified as not having been implemented, the former IHiS project manager and a knowledgeable representative for the contractor responsible for providing engineering and acquisition technical assistance for IHiS stated that the agency had developed most of the supporting documentation which would demonstrate that actions consistent with the SDLC practices had been taken. In addition, annotations within the IHiS acquisition strategy indicated that required SDLC artifacts, such as enterprise architecture documentation; plans for integrated logistics support, contingency, and disaster recovery; and a privacy impact assessment, among many others, were documented, available, and maintained within a document management tool. However, staff within the HSWL Directorate, the Office of Budget and Programs, and the Office of Enterprise Applications Management told us that the documentation either did not exist or could not be located because several of the key project management team members were no longer employees of the Coast Guard. The absence of the various documents and other artifacts that would support the required SDLC activities raises doubts that the Coast Guard took the necessary and appropriate steps to ensure effective management of the IHiS project. Carrying out established procedures for effective management and oversight of IT projects will be important for supporting any system development and acquisition effort that the Coast Guard undertakes to implement a future EHR system. The Coast Guard Lacked Governance Mechanisms for IHiS Oversight According to the IT Investment Management Framework, efforts to build a foundation for IT governance involve establishing specific critical processes, such as instituting investment boards and controlling investments as they are developed. In addition, we have long reported that federal IT projects have failed due, in part, to a lack of oversight and governance especially at an executive-level, such as the CIO. The Coast Guard documented charters for four governance bodies that were intended to provide oversight to the IHiS project: The Executive Steering Committee was to provide executive oversight of the design, implementation, operation, and long term direction for IHiS. Responsibilities of the committee were to include monitoring the overall acquisition, integration, and operation of IHiS; authorizing major changes in the project’s objectives, scope, and requirements; and reviewing the reliability, availability, and affordability of the project, among other things. Members of the committee were to include representatives from the Coast Guard’s HSWL Directorate, the Office of Enterprise Applications Management, and Department of State representatives. The User Group was to make recommendations to the IHiS Program Management Office on functionality and system design and to ensure that decisions were based on end-user needs. Responsibilities of the group were to include making suggestions on improving IHiS for the user, participating in planning for future changes or upgrades to the system, and evaluating strategies to maintain and improve system efficiency. The IHiS project manager was to serve as chair of the group, and the Coast Guard and Department of State were to nominate user representatives from each functional area of IHiS as additional group members. 
The Change Control Board was to evaluate change proposals in regard to technical, user, and cost impact to the system and recommend change requests to the IHiS baseline. Members of the board were to include representatives from the Coast Guard’s Office of Enterprise Applications Management, the Business Operations Division, and the Department of State. The System Security Committee was to manage the risk to IHiS and identify and mitigate security vulnerabilities. Responsibilities of the committee were to include reviewing IHiS security configurations, changes to those configurations, and proposed changes to IHiS to ensure that the system’s security would not be compromised. Members of the committee were to include representatives from the Coast Guard’s Office of Enterprise Applications Management, the Business Operations Division, and Department of State security and privacy representatives. While the Coast Guard chartered these various governance bodies for IHiS oversight, the agency could not provide evidence that the boards had ever been active in overseeing the project prior to its cancelation. As a result, the IHiS project lacked important oversight mechanisms to ensure the project’s success. In addition, the CIO (Deputy Assistant Commandant for C4&IT) was not included as a member of any of the IHiS governance bodies. According to a memorandum signed by the Acting CIO in 2011, C4&IT was responsible for ensuring that the IHiS project was compliant with SDLC requirements. However, the Coast Guard could not provide evidence that demonstrated how C4&IT and the CIO were involved in ensuring compliance with the requirements. Taking steps to fully implement governance boards that include the CIO will be important to the Coast Guard’s oversight efforts in implementing a future EHR system and may decrease the risk of IT project failure. The Coast Guard Did Not Document Lessons Learned from the IHiS Project We developed the IT Investment Management Framework that stresses the importance of identifying lessons learned to support future investment decisions. We have also previously reported that mechanisms for documenting, sharing, and disseminating lessons learned serve to communicate acquired knowledge more effectively and ensure that beneficial information is factored into planning, work processes, and activities. Lessons learned provide a powerful method of sharing good ideas for improving work processes, facility or equipment design and operation, quality, safety, and cost-effectiveness. They can be based on positive experiences or on negative experiences that result in undesirable outcomes, such as the cancelation of the IHiS project. Additionally, it is important to disseminate lessons learned since lessons are of little benefit unless they are distributed and used by people who will benefit from them. Although Coast Guard officials stated that lessons learned had been identified throughout the process of developing IHiS, as of 2 years after its cancelation, the agency had not documented and shared any lessons learned from the project and does not have established plans for doing so. According to an official from the Office of Budget and Programs, the Coast Guard had not yet documented lessons learned because the agency views the lessons learned process as ongoing. 
While the Coast Guard may view the lessons learned process as ongoing, the IHiS project was canceled in 2015, and it is important to document and share the lessons already identified so that this beneficial information can be factored into the planning activities for future systems and projects. Until the Coast Guard takes steps to document and share identified lessons learned with individuals charged with developing and acquiring its IT systems, opportunities to protect future systems against the recurrence of mistakes that contributed to the failure of IHiS will likely be missed. The Coast Guard Is Managing Health Records Using a Predominately Paper Process; Many Challenges Hinder Service Delivery In the absence of an EHR system, the Coast Guard currently relies on a predominately paper health record management process to document health care services for its nearly 50,000 military members. After canceling the IHiS project in October 2015, the agency could not return to managing health records using its legacy electronic capabilities because PGUI was decommissioned in 2015 and CHCS was decommissioned in January 2016. Thus, the Coast Guard directed clinics and sick bays to remove relevant information from CHCS and PGUI and maintain all health records for its members using a predominately paper process. The Coast Guard supplements its current paper process by using applications that various other agencies operate and maintain. For example, the Coast Guard uses the Navy’s Medical Readiness Reporting System to, among other things, track immunizations, periodic health assessments, dental exams, dental status, and required physical exams. In addition, the agency uses the Army’s Aeromedical Electronic Resource Office electronic tracking system to document aviation physical exams and aero medical summaries. However, while these systems hold valuable information, they are separate applications requiring separate logins and do not encompass comprehensive Coast Guard health beneficiary information. Currently, the Coast Guard’s clinical staff (i.e., clinic administrators and clinicians) are to generally perform the following steps to process each paper health record: Schedule an appointment for patient using Microsoft Outlook’s calendar feature. Provide the patient with the required forms for completion upon his or her arrival. Verify that all required paper forms are complete and correct. Handwrite clinical notes in a paper health record during the appointment. Complete referrals on an internal referral form and fax the form to the external provider. Handwrite prescription. Review and initial all lab and x-ray reports before filing them in the paper health record. File forms in their assigned sequence within the health record. Store all paper health records in secure cabinets or other secure areas of the facility. Conduct an accuracy and completeness check of the health record upon notification that an individual will be transferred to another facility and correct any identified deficiencies. Mail patient’s paper health record to a new facility if there is a permanent change of station, or provide the patient his or her health record in a large sealed envelope to carry by hand. Figure 1 generally depicts the required steps for managing paper health records. 
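To make the sequence of manual steps above easier to scan, the sketch below restates them as an ordered checklist in Python. It is purely illustrative; the step descriptions paraphrase the list above, and the code does not represent any actual Coast Guard system or record format.

```python
# Illustrative restatement of the paper health record workflow described above.
# Step names paraphrase the report; this is not an actual Coast Guard system.

PAPER_RECORD_WORKFLOW = [
    "Schedule the appointment in the Microsoft Outlook calendar",
    "Give the patient the required forms on arrival",
    "Verify that all paper forms are complete and correct",
    "Handwrite clinical notes in the paper health record",
    "Complete the internal referral form and fax it to the external provider",
    "Handwrite any prescription",
    "Review and initial lab and x-ray reports, then file them",
    "File forms in their assigned sequence within the record",
    "Store the record in a secure cabinet or other secure area",
    "Check the record for accuracy and completeness before a transfer",
    "Mail the record to the new facility, or hand it to the member in a sealed envelope",
]

def print_checklist(steps):
    """Print the workflow as a numbered checklist."""
    for number, step in enumerate(steps, start=1):
        print(f"{number:2d}. {step}")

if __name__ == "__main__":
    print_checklist(PAPER_RECORD_WORKFLOW)
```

Listing the steps this way underscores that each one is performed by hand, which is the root of many of the challenges described in the next section.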
The Coast Guard Faces Numerous Challenges in Managing Its Paper Health Records and Has Adopted a Number of Manual Steps to Deliver Services In response to our survey, the 12 HSWL Regional Managers identified a number of challenges that clinics and sick bays in their regions had experienced in managing and maintaining paper health records. These challenges were grouped into 16 categories. Further, the 120 clinic and sick bay administrators that subsequently responded to a separate survey reported varying degrees to which they viewed each category as challenging. Figure 2 provides the clinic and sick bay respondents’ views of the challenges. The following summarizes clinic and sick bay responses for each identified challenge with managing and maintaining paper health records: Incomplete records. Ninety-eight (82 percent) of the respondents reported incomplete records as challenging. In this regard, 34 of the survey respondents reported that not all CHCS and PGUI records were printed out and included in patients’ paper health records as required before the systems were retired; therefore, they had no way to ensure the patients’ paper records were complete. According to one respondent, paper records are also often incomplete due to parts of the record being dispersed across different medical facilities, thus, making it difficult to put together a complete patient history and sometimes resulting in the need to repeat testing and treatment of patients. Penmanship. Among the 91 (76 percent) survey respondents that reported penmanship as challenging, several noted that it is difficult for staff to read illegible handwritten medical notes. This, in turn, results in difficulty determining the accurate diagnosis, the required prescription, or a referral. Tracking medications. According to 89 (76 percent) of the respondents, it is challenging to track medications without an EHR. For example, one administrator stated that the lack of an EHR makes the management of patient medication use difficult, as staff are unable to verify what medications a patient is taking, what medications have been prescribed from an outside location, and/or the effectiveness of medications. Another administrator stated that staff members rely heavily on patients to remember what medications they are taking—potentially causing harm if patients cannot remember what medications they are taking and the medications have dangerous interactions. Amount of time to manage records. According to 86 (72 percent) of the respondents, managing paper health records is challenging and requires more time for staff to complete and file paperwork. Several respondents stated that the size of the paper health records has increased, resulting in additional time required to review and file records. Ability to search within records. Eighty-three (70 percent) of the respondents reported the ability of clinical staff to search within paper health records for information as challenging. For example, one respondent stated that providers must flip through individual pages of a record to search for necessary information. Another respondent reported that some patients have up to three volumes of a health record and it can take up to 2 or 3 days to find requested information if the patient does not recall when or where the medical care was performed. Figure 3 shows a large paper health record and the multiple storage cabinets used to store them, which illustrates the difficulty in manually searching for information within the records. Missing records. 
Eighty-three (69 percent) of the survey respondents stated that missing records are challenging. According to one administrator, repeat evaluations that may not be required for chronically ill patients are being conducted due to missing records. Another administrator stated that information can often get misfiled in the record of a patient with a similar name. Availability of records. Seventy-eight (65 percent) of the respondents reported that the availability of records is challenging. For example, one administrator reported that many records are located in different locations, making it difficult to access the necessary information. Another administrator stated that delays occur when clinic staff have to wait for patients to bring records in for review or wait for updated notes from a previous location. Amount of time for patient encounters. According to 65 (55 percent) of the respondents, the lack of an EHR has resulted in an increase in the amount of time required to check-in patients, complete patient appointments, and enter information in the patient record. According to one administrator, clinical documentation has to be completed by hand and some clinicians wait until the end of the day to complete notes. Another administrator reported that the clinician stays after the clinic closes to complete notes. Conducting consultations. Sixty-one (51 percent) of the respondents reported conducting consultations with paper records as challenging. Several administrators stated that patient information is faxed or scanned and submitted for the consulting provider to review. According to one administrator, there are times when documentation must be faxed or scanned multiple times in order to produce a legible copy, resulting in increased time spent gathering and submitting information. Health trends. According to 59 (50 percent) of the respondents, the use of paper records makes combining data to understand population health trends challenging. According to one survey respondent, accomplishing this without an EHR requires manually searching through every paper health record. Ability to view and print laboratory reports. Fifty-six (47 percent) of the survey respondents reported that the inability to view and print laboratory reports without an EHR is challenging. One administrator stated that their clinic could view and print the results from one particular laboratory, but if a patient received services from any other lab the clinic staff would have to request that the patient bring the laboratory results to the clinic. Another administrator stated that it could take 2 or more days to receive requested lab results because there was no way to easily obtain them via a centralized system. Sending referrals. Forty-two (35 percent) of the respondents stated that sending referrals is challenging. One administrator reported facing challenges with faxed referral forms not being received after obtaining a fax confirmation. Another respondent reported having to spend an increased amount of time on the referral process with each referral necessitating at least 20 minutes to complete the required forms and fax them to the external provider—with 10–25 referrals being sent each day. Cost of maintaining records. Thirty-nine (33 percent) of the respondents reported that the cost of maintaining paper health records is challenging. 
For example, one administrator reported that health records are frequently mailed to other medical locations or to the National Archives (for those separated or retired), which is a large expense for the Coast Guard. Another administrator stated that the time taken to gather paperwork, wait for civilian providers to send notes, and coordinate and execute health record updates is costly to the Coast Guard. Lastly, several administrators reported that expenditures for paper and printing products have increased due to the lack of an EHR. For example, one administrator reported that the clinic had increased its expenditure for paper by 50 percent. Scheduling of appointments. Thirty-eight (32 percent) of the respondents reported that the time it takes to schedule appointments is challenging. One administrator stated that, due to the lack of a scheduling system, patient appointments are being scheduled using the Outlook calendar function, which is time consuming when there are network slowdowns or freezes during high rates of utilization. Another administrator reported that appointments are sometimes double scheduled or occasionally disappear from the calendar and, in one instance, a patient received an appointment reminder for an appointment that the patient had never scheduled. Security/privacy of records. According to 34 (28 percent) of the respondents, the security and privacy of health records is challenging. One administrator reported that paper records are more prone to be within reach of individuals that should not have access to them because they are not stored in a secure EHR that has protections built in. Ordering x-rays. Thirty-one (26 percent) of the respondents reported that the process for ordering x-rays is challenging. According to several administrators, the current process for ordering x-rays involves submitting a referral by fax, which takes additional time for processing and waiting for results to be returned by fax. Several administrators reported that it is difficult to know if all x-ray results have been received and filed. The responding clinic and sickbay administrators described a range of alternative work-around processes that they have developed to help alleviate several of the challenges. Specifically, they reported having developed additional forms, tracking methods, and alternative processes, as well as having notified Coast Guard HSWL management of the challenges they face. Regarding developing forms, approximately 31 percent of the survey respondents noted that they had developed additional forms in order to more easily obtain the information that they would have had available to them with an EHR in place. According to one administrator, these forms are based on the most common patient encounter needs and capture information such as medications, allergies, chronic issues, and family history. In addition, these administrators reported developing electronic file versions, such as a Microsoft Word document, of the standard health forms so that they can e-mail them to patients and reduce the number of paper forms that have to be completed by hand and scanned. According to the administrators, these steps help address handwriting and space challenges. In addition, approximately 37 percent of the respondents reported developing tracking methods, such as Microsoft Excel spreadsheets and logs, to collect data and assist in tracking patient and provider information. 
One administrator reported that a spreadsheet was created to track patients with conditions that require monitoring, since there is no longer a system that has the data in one place. Another administrator reported creating a spreadsheet to track referrals, numbers of physicals, patient encounters, and medical readiness. Based on the survey responses, these tracking methods have helped address the challenges related to combining data to understand health trends, and tracking medications and referrals. Further, 30 percent of the survey respondents noted that they have also developed alternative processes to mitigate some of the challenges with managing paper health records. For example, one administrator stated that the clinic started conducting weekly reconciliations of referrals to ensure that all treatment records from outside referrals were obtained by the clinic and placed in the paper health record. Another administrator stated that the clinic had begun e-mailing patient encounter notes to the medical officer for review in an effort to ensure patient records are complete. Finally, approximately 55 percent of the respondents reported that they have notified HSWL senior management of the challenges encountered with managing and maintaining paper records. According to an official within the Acquisitions Directorate, the Coast Guard plans to mitigate many of the challenges identified by the Regional Managers with a new EHR system initiative. However, these alternative processes may not provide sustained solutions to overcoming these challenges. Until the Coast Guard implements a new EHR solution, the challenges inherent in a predominantly paper process will likely remain. The Coast Guard Intends to Acquire a New EHR System, but Has Not Yet Chosen a Solution The Coast Guard has begun taking steps to acquire a new EHR system referred to as the Electronic Health Record Acquisition (eHRa). According to the Acquisitions Directorate, the Coast Guard plans to manage and oversee the acquisition of eHRa through its non-major acquisition process (NMAP), as described in its NMAP Manual. The NMAP requires formal approval reviews at three discrete knowledge points called acquisition decision events (ADE) and includes three phases to assess the readiness and maturity of the acquisition. Figure 4 graphically represents the ADEs and phases of the NMAP. (Appendix V provides a more detailed discussion of each ADE and each of the three phases that make up the NMAP process.) Once the Coast Guard identifies the need for a new acquisition program, the program's sponsor is to seek ADE-1 approval. ADE-1 occurs when the program is designated as a non-major acquisition by the Deputy Commandant for Mission Support. If an acquisition receives ADE-1 approval, it proceeds to the analyze/select phase of the NMAP. The analyze/select phase is the first of three phases of the process, and includes required work activities such as preparing a requirements document, conducting market research to identify available alternatives, developing an acquisition strategy, developing a life cycle cost estimate, and preparing a project plan. The Coast Guard formally identified the need for a new EHR system on February 1, 2016, and obtained ADE-1 approval on February 13, 2016. Subsequent to the ADE-1 approval, the Coast Guard initiated the following activities associated with the analyze/select phase: Requirements development.
As part of its efforts to develop new system requirements for eHRa, the Coast Guard identified its capability gaps as a result of the lack of an EHR in a Capability Analysis Report. The report offered two courses of action to address the capability gaps: (1) business process re-engineering to enhance the current paper-based process, or (2) transition to a system-based solution. According to the Acquisitions Directorate, the Coast Guard plans to use the report to inform its effort in developing requirements for eHRa. Market research. The Coast Guard issued a request for information in April 2017 to assess industry capabilities as part of market research for the new system. The request for information asked that the solutions fall into one of four categories that the Coast Guard was considering: Federal shared service. This option would allow the Coast Guard to use a system that is already in use by another federal agency. In addition, this option aligns with the Office of Management and Budget’s Federal Information Technology Shared Services Strategy, issued in May 2012, which highlighted the prevalence of redundancy in federal IT systems. Managed by the Coast Guard, but externally hosted. This solution would require the Coast Guard to acquire a COTS system and manage its implementation. However, the system would be maintained by a vendor at an externally hosted data center. Commercial software as a service. This option involves purchasing commercial software for an EHR solution that is operated and maintained by a commercial vendor. In-house. With this solution, the Coast Guard would manage the implementation and maintenance of a COTS system with support from a commercial vendor. As a result of the Coast Guard’s request for information, the agency collected cost, schedule, and capabilities information from commercial and government solution providers, including DOD and VA. The Coast Guard used the providers’ responses to develop an alternatives analysis report that was completed in October 2017. The report recommended a solution based on performance, risk, cost, and schedule advantages. The report indicated that the Coast Guard plans to use the results of the alternatives analysis to refine the acquisition strategy, and to support the development of artifacts which are required to successfully achieve the ADE-2 milestone. Staff within the Acquisitions Directorate stated that they were also in the process of finalizing a life cycle cost estimate and a project plan for eHRa—documents necessary for ensuring that appropriate business decisions will be made regarding eHRa’s logistics, affordability, and resources, among other things. As of December 2017, the Coast Guard had not yet made a final determination as to which option would be chosen as the solution for the eHRa acquisition. Until a solution is chosen and successfully implemented, the Coast Guard and its thousands of members will continue to face the many challenges inherent with managing and maintaining paper health records. Conclusions The Coast Guard abruptly discontinued the IHiS project in 2015, citing financial, technical, schedule, and personnel risks. Coast Guard officials estimate this failed project has thus far cost the agency about $60 million. Further, this effort left the Coast Guard without any reusable system components for future EHR efforts. The Coast Guard could not demonstrate that it had fully implemented effective management and oversight for the IHiS project prior to its discontinuance. 
Specifically, the Coast Guard could not fully show key project management actions were taken for IHiS, lacked governance mechanisms, and did not document lessons learned for the failed project. By not doing so, the agency reduced the probability of the project’s success. The Coast Guard’s decision to revert to a predominately paper process has created a number of challenges for its many clinics and sick bays. These challenges are hindering their ability to deliver services. To help alleviate several of these challenges, the Coast Guard’s clinics and sick bays have developed alternative work-around processes. However, these alternative processes will likely not provide sustained solutions. The Coast Guard is currently taking steps to plan for a new EHR system, but as of December 2017—over 2 years after the cancelation of the IHiS project—it had not yet selected another solution. Successfully and quickly implementing an EHR system is vital to overcoming the challenges the Coast Guard currently faces in managing paper health records. The expeditious and judicious implementation of such a system can significantly improve the quality and efficiency of care to the thousands of Coast Guard active duty and reserve members that receive health care. Recommendations for Executive Action We are making the following four recommendations to the Coast Guard: The Commandant should direct the Chief Information Officer and the Chief Acquisition Officer to expeditiously and judiciously pursue the acquisition of a new EHR system. (Recommendation 1) The Commandant should direct the Chief Information Officer and the Chief Acquisition Officer to ensure established processes required for the future acquisition or development of an EHR are effectively implemented and adequately documented. (Recommendation 2) The Commandant should direct the Chief Information Officer and the Chief Acquisition Officer to establish and fully implement project governance boards for the future EHR effort that include the Chief Information Officer. (Recommendation 3) The Commandant should direct the Chief Information Officer and the Chief Acquisition Officer to document any lessons learned from the discontinued IHiS project, share them with the new project management team, and ensure lessons learned are utilized for the future EHR effort. (Recommendation 4) Agency Comments and Our Evaluation The Department of Homeland Security provided written comments on a draft of this report. In its comments (reprinted in appendix VI), the department concurred with our four recommendations and identified actions being taken or planned to implement them. Among these actions, the department stated that it is judiciously pursuing an EHR solution, called eHRa, through its acquisition process, which is currently in the analyze/select phase of the NMAP process. The department also stated that a contract award for eHRa is planned for later this fiscal year. In addition, the department stated that it established a designated acquisition program with a dedicated program management office team and oversight council for EHR activities, and that the EOC monitors eHRa’s progress through the acquisition process. The department further added that governance boards for eHRa have been established that include the CIO as required by the NMAP manual. Finally, the department said that it plans to compile lessons learned from the discontinued IHiS project by March 30, 2018. 
Given the actions identified, the department requested that we consider the first three of our four recommendations to be closed. However, while the Coast Guard is taking positive steps with regard to initiating the eHRa program, the department noted that key decisions related to analyzing, selecting, and acquiring the new system remain to be made. Further, the Coast Guard has not yet awarded a contract for an EHR solution and is not planning to do so until later this fiscal year. Thus, the extent to which it establishes and effectively implements processes and governance boards throughout the project, and expeditiously and judiciously pursues the acquisition of the new system, remains to be seen. Accordingly, we will not yet close any of the recommendations. The department also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the Commandant of the Coast Guard, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. Appendix I: Objectives, Scope, and Methodology The objectives of this study were to (1) describe what led the United States Coast Guard (Coast Guard) to the decision to terminate further Integrated Health Information System (IHiS) development, and how much was spent on the project; (2) evaluate the Coast Guard's management and oversight actions for the discontinued electronic health records (EHR) modernization project and what, if any, lessons learned were identified; (3) describe the Coast Guard's current process for managing health records and the challenges, if any, it is encountering; and (4) determine the Coast Guard's plans for effectively implementing a new EHR system and the current status of its efforts. To address the first objective, we reviewed relevant IHiS project documentation, such as key contracts, the project plan, presentations by the project management team, and IHiS-related memorandums. We also reviewed project expenditures documentation developed by the Deputy Commandant for Mission Support and the Acquisitions Directorate. We supplemented our review with interviews of agency officials within the Health Safety and Work-Life (HSWL) Directorate, Office of Budget and Programs, Office of Resource Management, Office of Contract Operations, and the Office of Acquisition Support, as well as six key contractors. To address the second objective, we reviewed relevant policies and guidance, such as the Coast Guard's Command, Control, Communications, Computers and Information Technology (C4&IT) System Development Life Cycle (SDLC) Policy and the SDLC Practice Manual intended to guide the management and oversight of development and acquisition projects at the Coast Guard.
We evaluated available IHiS project management documentation, such as project plans, the project’s schedule, decision memorandums, charters for IHiS governing bodies, and Executive Oversight Council (EOC) meeting minutes, which demonstrated actions taken by project management staff during the IHiS project, and assessed them against selected practices identified in the Coast Guard’s SDLC Practice Manual. The practices we selected are fundamental to effective information technology (IT) management and oversight. These included practices for conceptual planning, planning and requirements, design, and development and testing. We selected the practices from each applicable phase that had an associated artifact or called for the agency to take specific action(s) that we were able to validate through evidentiary review. If an artifact was applicable to multiple practices in multiple phases of the SDLC, we evaluated the artifact in only one phase and one practice. We also interviewed agency officials from Coast Guard offices such as the HSWL Directorate, Office of Budget and Programs, and Office of Resource Management regarding their role in managing and overseeing the IHiS project. In addition, we interviewed or received written responses from knowledgeable representatives for six key contractors tasked with providing the ambulatory care system and patient portal, safety data management and user credentialing system, software, and engineering and acquisition technical assistance. These interviews focused on the contractor’s role in the IHiS project, any issues they experienced, and the status of the services they were providing at the time of cancelation. Lastly, we interviewed Coast Guard officials within the HSWL and Acquisition Directorates to determine whether lessons learned were obtained and documented to inform future decisions for the new EHR project. Our methodology to determine the extent to which the Coast Guard demonstrated the completion of the selected SDLC phase practices included three levels of assessment: (1) the Coast Guard provided documentation that demonstrated that the IHiS project satisfied all of the elements of the required SDLC project management practice; (2) the Coast Guard provided documentation that demonstrated that the IHiS project partially satisfied some but not all elements of the required SDLC project management practice; and (3) the Coast Guard could not provide documentation that demonstrated that the IHiS project satisfied any of the elements of the required SDLC project management practice. To address the third objective, we reviewed Coast Guard medical records management documentation, such as medical manuals, workflow procedures, and standard operating policies and procedures for clinics and sick bays. We also administered a survey via e-mail questionnaire to all of the 12 HSWL Regional Managers and a web-based survey to all of the 166 clinic and sick bay administrators. The survey to Regional Managers included questions on whether the clinics and sick bays in their region faced challenges in managing health records without an EHR system in place and whether all the records from decommissioned EHR systems had been included in the paper records. The survey to clinic and sick bay administrators included questions on the challenges reported by Regional Managers and the mitigation strategies, if any, employed for the challenges identified. 
Before administering the surveys we pretested them by interviewing 1 Regional Manager and 5 clinic and sick bay administrators to ensure that our survey questions and skip pattern were clear and logical and that respondents could answer the questions without undue burden. We administered the survey to the 12 Regional Managers from March 2017 to April 2017; therefore, the corresponding responses reflect information and views as of that time period. We received 12 responses, for a 100 percent response rate. We administered the survey to the clinic and sick bay administrators from April 2017 to August 2017; therefore, the corresponding responses reflect information and views as of that time period. We received 120 responses, for a 72 percent response rate. To address the fourth objective, we identified the process through which the Coast Guard is managing the acquisition of its new system, as described in its Non-Major Acquisition Process (NMAP) Manual. We then obtained planning documentation, such as relevant memorandums that described the Coast Guard's need for an EHR, the Coast Guard's request for information to assess industry capabilities for market research purposes, and a capabilities analysis study plan to identify gaps in the Coast Guard's EHR capabilities. We also reviewed a capabilities analysis report, which details required capabilities for improving patient care, and an alternatives analysis report, which details solutions the Coast Guard should consider based on performance, risk, cost, and schedule. We assessed this documentation against requirements identified in the NMAP, specifically within the first phase of the acquisition process. We also interviewed officials within the Acquisition Directorate to determine the status of the efforts to acquire or develop a new EHR system. We conducted this performance audit from October 2016 to January 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Summary of the Coast Guard's SDLC Phases and Selected Project Management Practices The Coast Guard implemented the Systems Development Life Cycle (SDLC) process for non-major information technology (IT) acquisitions in 2004 to help ensure IT projects are managed effectively and meet user needs. The process, as described in the Coast Guard's SDLC Practice Manual, consists of seven phases and related practices—30 of which we selected for evaluation for the initial four SDLC phases of the Integrated Health Information System (IHiS). The following is a summary of each SDLC phase and a description of the project management practices we selected for review: Phase 1: Conceptual Planning This phase is the first step of the development or significant enhancement process. During this phase, high-level business needs are identified, a concept for fulfilling the business needs is proposed and validated, and resources are committed.
Activities (or practices) we selected for review in this phase include formalizing SDLC role designations, such as the project manager, asset manager, and sponsor; developing the initial business case with information regarding the background, system justification, and project risk management, among other things; validating alignment with the enterprise architecture; identifying the funding source and providing rough order of magnitude cost estimates as part of developing the acquisition strategy; designating the system as a Command, Control, Communications, Computers, and Information Technology (C4&IT) system; and obtaining approval to exit the conceptual planning phase. Phase 2: Planning and Requirements This phase begins after the project has been defined and appropriate resources have been committed. During this phase, business requirements are collected, defined, and validated. More specifically, as part of the phase practices we selected for review, the SDLC tailoring plan is completed, and initial life cycle management plans for project management, risk management, integrated logistics support, training, and information assurance are developed. In addition, a cost benefit analysis is conducted; functional requirements are documented; external mandates are reviewed; the system development agent and system support agent are designated; and approval to exit the planning and requirements phase is obtained. Phase 3: Design During this phase, business requirements are translated into system requirements to develop the detailed system design. Selected practices for this phase include developing the detailed system design to specify the operating system, architecture components, timing and sizing, and interfaces, among other things; developing the operational analysis plan to document system performance measures, system operating measures that address reliability, maintainability, availability, training, and user satisfaction; and system support measures containing the level of effort needed to support the system; conducting review sessions with the user community to ensure that the system design sufficiently met all functional requirements; developing contingency and disaster recovery plans; completing the privacy impact analysis; documenting the test and evaluation master plan with the scope, content, methodology, sequence, management of, and responsibilities for test activities; testing the system design according to the operational test and evaluation plan and capturing design test results in the test and evaluation master plan; and obtaining approval to exit the design phase. Phase 4: Development and Testing The system is developed or acquired based on detailed system design specifications and validated through a variety of tests during this phase. The objective is to ensure that the system functions as expected and that sponsor and user requirements are satisfied. More specifically, as part of the phase practices that we selected, system testing is conducted; system documentation, such as system manuals, user manuals, and diagrams of the system, is developed; an implementation plan is developed; and an authority to operate is obtained. Phase 5: Implementation During this phase, the system is placed in the production environment and system users are trained. It also includes efforts required to implement the system and resolve problems identified during the system's transition from development to deployment. We did not select practices to evaluate in this phase since the system was discontinued before implementation.
Phase 6: Operations and Maintenance The system becomes operational during this phase, and its main purpose is to ensure that the system continues to perform according to specifications. In addition, routine hardware and software maintenance and upgrades are performed to ensure effective system operations; user training continues as needed; and additional user support is provided to help resolve reported problems. We did not select practices to evaluate in this phase since the system was discontinued before implementation. Phase 7: Disposition This phase represents the end of the system's life cycle. It provides for the systematic termination of a system to ensure that vital information is archived. The emphasis of this phase is to ensure that the system (e.g., equipment, software, data, procedures, and documentation) is packaged and disposed of in accordance with appropriate regulations and requirements. We did not select practices to evaluate in this phase since the system was discontinued before implementation. Appendix III: Copy of the Survey That GAO Administered to Coast Guard Health Safety and Work-Life Regional Managers The questions we asked in our survey of the 12 Health Safety and Work-Life (HSWL) Regional Managers from March 2017 to April 2017 are shown below. For a more detailed discussion of our survey methodology see appendix I. Appendix IV: Copy of the Survey That GAO Administered to Coast Guard Clinic and Sick Bay Administrators The questions we asked in our survey of the 166 clinic and sick bay administrators from April 2017 to August 2017 are shown below. For a more detailed discussion of our survey methodology see appendix I. Appendix V: Summary of the Coast Guard's Non-Major Acquisition Process Acquisition Decision Events and Phases The Coast Guard's Non-Major Acquisition Process (NMAP) Manual defines the process for the designation, management, and oversight of non-major acquisitions. The NMAP requires formal approval reviews at three discrete knowledge points called acquisition decision events (ADE) and includes three phases to assess the readiness and maturity of the acquisition. The phases represent work that must be accomplished to demonstrate readiness to proceed to the next phase. The following is a summary of each ADE and subsequent phase within the NMAP: ADE-1 occurs when the Deputy Commandant for Mission Support designates the procurement as a non-major acquisition and approves the acquisition to enter the analyze/select phase. Following ADE-1 approval, the Chief Acquisition Officer or Chief Information Officer (CIO) designates a project manager. The analyze/select phase includes project management activities such as conducting market research to identify available alternatives, preparing a requirements document, developing an acquisition strategy, developing a life cycle cost estimate, and preparing a project plan. The primary purpose of ADE-2 is to approve the alternatives identified through market research and to assess the readiness of the acquisition for a contract award, at which point the acquisition moves into the obtain phase. The CIO is the decision authority and provides oversight for ADE-2. The obtain phase includes activities such as evaluating whether the proposed solution can effectively meet the functional requirements, initiating deployment planning, and conducting usability testing. The primary purpose of ADE-3 is to assess the readiness of the acquisition to be deployed and supported by authorizing the acquisition to enter the produce/deploy and support phase.
The CIO is the decision authority and provides oversight for ADE-3. The produce/deploy and support phase includes activities such as ensuring the delivered product meets cost, schedule, and performance baselines as described within the project plan, as well as executing production contracts. Appendix VI: Comments from the Department of Homeland Security Appendix VII: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, key contributors to this report were Nicole Jarvis (Assistant Director), Ashfaq Huda (Analyst in Charge), Chris Businsky, Juana Collymore, Sharhonda Deloach, Rebecca Eyler, Andrea Harvey, Gina Hoover, Jason Lee, Rob Letzler, Monica Perez-Nelson, Kelly Rubin, and Andrew Stavisky.
Why GAO Did This Study In 2010, the Coast Guard initiated an effort—known as IHiS—to replace its aging EHR system with a suite of modernized systems that was to automate various health care services for its nearly 50,000 military members. However, in October 2015, the Coast Guard announced that the modernization project would be canceled. GAO was asked to review the Coast Guard's efforts to develop a modernized EHR system. GAO's objectives were to (1) describe what led the Coast Guard to terminate further IHiS development, and how much was spent on the project; (2) evaluate the Coast Guard's management and oversight for the discontinued project and what, if any, lessons learned were identified; (3) describe the Coast Guard's current process for managing health records and the challenges, if any, it is encountering; and (4) determine the Coast Guard's plans for effectively implementing a new EHR system and the current status of its efforts. To do so, GAO reviewed project expenditures, analyzed key project management documentation, surveyed Regional Managers and clinical staff, and interviewed knowledgeable staff. What GAO Found Financial, technical, schedule, and personnel risks led to the United States Coast Guard's (Coast Guard) decision to terminate the Integrated Health Information System (IHiS) project in 2015. According to the Coast Guard (a military service within the Department of Homeland Security), as of August 2017, $59.9 million was spent on the project over nearly 7 years and no equipment or software could be reused for future efforts. In addition, the Coast Guard could not fully demonstrate the project management actions taken for IHiS, lacked governance mechanisms, and did not document lessons learned for the failed project. As a result of the cancelation of the IHiS project and the decommissioning of the two legacy electronic health record (EHR) systems IHiS was to replace, the Coast Guard directed its clinics to revert to maintaining health records using a predominantly paper process. Coast Guard Regional Managers and clinic and sick bay administrators informed GAO of the many challenges encountered in returning to a paper process. These challenges include the inability for some clinics to adequately track vital information such as the medications members are taking—potentially causing harm to them. To help alleviate several of these challenges, the Coast Guard has developed alternative work-around processes. However, these alternative processes may not provide sustained solutions to overcoming these challenges. In February 2016, the Coast Guard initiated the process for acquiring a new EHR system. As of November 2017, agency officials had conducted research and recommended a solution based on performance, risk, cost, and schedule advantages. However, 2 years after canceling IHiS and moving toward a predominately manual process, the agency has not yet made a final determination on this. Successfully and quickly implementing an EHR system is vital to overcoming the challenges Coast Guard currently faces in managing paper health records. The expeditious implementation of such a system can significantly improve the quality and efficiency of care to the thousands of Coast Guard active duty and reserve members that receive health care. 
What GAO Recommends GAO is recommending that the Coast Guard (1) expeditiously and judiciously pursue the acquisition of a new EHR system, and in doing so (2) ensure key processes are implemented, (3) establish project governance boards, and (4) document lessons learned. The Department of Homeland Security concurred with GAO's recommendations.
gao_GAO-18-303
gao_GAO-18-303_0
Background FWS provides grants to a variety of recipients, including state agencies, tribal governments, and nongovernmental organizations. In fiscal year 2016, FWS awarded $1.5 billion in grants, which was about 50 percent of the agency’s total $2.9 billion budget authority. Within FWS, WSFR is responsible for awarding most of the grant funding available from FWS, and in fiscal year 2016, WSFR awarded $1.2 billion in grants. As we have previously reported, most federal grant-making agencies generally follow a grants management process that includes awarding grant funds and monitoring grant projects. The award process generally involves announcing the grant opportunities, reviewing applications, and making award decisions. During the monitoring process, the agency oversees the implementation of the grant project and periodically reviews financial and performance reports from grant recipients. In our past reports, we have found that it is important for federal agencies to employ a fair and transparent process to make award selections for competitive grant programs and to monitor federal grant funds to ensure that they are used properly and effectively to achieve program goals. In general, WSFR awards two types of grants: formula and competitive grants. Formula grants: WSFR awards these grants to recipients in amounts based on required formulas. The two largest formula grant programs WSFR manages are the Wildlife Restoration Program and Sport Fish Restoration Program, which provided $699 million and $356 million, respectively, in grants to states in fiscal year 2016. According to WSFR documents and officials, these grants are often used by states to help their fish and wildlife agencies restore, enhance, and manage wildlife and sport fish resources and provide public access to those resources. Each state’s use of certain funds and each state’s wildlife and sport fish activities are to be audited every 5 years, and these audits have generally been conducted by Interior’s Office of Inspector General (OIG). According to Interior OIG officials, these audits have been conducted since 2002, and each state has been audited three times over the past 15 years. Competitive grants: WSFR awards these grants to eligible applicants for specific projects based on a competitive process in which grant applications are scored against certain criteria. Competitive grants comprise a much smaller portion of the grant funding that WSFR awards; in fiscal year 2016, WSFR awarded about $54 million in competitive grants. Competitive grants, unlike the formula grants, are not required under their program-specific statutes to be regularly audited. According to the Interior OIG and WSFR officials, the OIG has conducted few audits of these programs. Funding for most of WSFR’s grant programs comes from two sources: the Wildlife Restoration Account and the Sport Fish Restoration and Boating Trust Fund. These accounts are generally funded by industries paying excise taxes and import duties on certain equipment and gear manufactured for purchase by hunters, anglers, boaters, archers, and recreational shooters, including pistols, bows and arrows, and fishing rods and reels, among other items. Federal taxes on fuel for motorboats and small engines are also a source of funding. In administering grant programs, WSFR adheres to federal laws and regulations, as well as agency policies and guidance. 
Federal laws: The 1937 Pittman-Robertson Wildlife Restoration Act and the 1950 Dingell-Johnson Sport Fish Restoration Act established the Wildlife Restoration Program and the Sport Fish Restoration Program, respectively. The Pittman-Robertson and Dingell-Johnson Acts have been amended to, among other things, establish additional grant programs, many of which are competitive programs. For example, the Clean Vessel Act of 1992 amended the Dingell-Johnson Act and created the Clean Vessel Act Grant Program. In addition, in 1998, the Sportfishing and Boating Safety Act amended the Dingell-Johnson Act and established the Boating Infrastructure Grant Program. Federal government-wide grant regulations: The Uniform Guidance, issued by OMB and adopted by federal grant-making agencies, includes requirements for several aspects of the federal grants management process, including the award and monitoring processes. For example, sections 327 and 328 lay out general requirements for financial and performance reporting by grant recipients. Agency regulations: Some of the WSFR grant programs have specific regulations that, among other things, define eligible activities, application procedures, and the conditions for using grant funding. For example, the Boating Infrastructure Grant Program, the Clean Vessel Act Grant Program, and the National Coastal Wetlands Conservation Grant Program have program-specific regulations that govern aspects of the grant process, such as the eligible uses of grant funding. Agency policies and guidance: WSFR also has agency guidance found in the FWS Service Manual, along with other guidance on grants. The manual describes the structure and functions of FWS's organization and contains policies and procedures that govern administrative activities and program operations. For example, the FWS Service Manual contains a chapter focused on the Multistate Conservation Grant Program that reiterates or clarifies requirements, including program-specific statutory requirements as well as those found in the Uniform Guidance. In addition to the FWS Service Manual, the FWS, and WSFR within it, is subject to grant management guidance issued by the Department of the Interior. For example, in December 2014, Interior's Office of Acquisition and Property Management issued a memorandum that required (1) maximum competition in grant awards through a fair and impartial competitive process, and (2) a comprehensive, impartial, and objective grant application review process based on criteria contained in the grant award announcement. WSFR Program Awards and Monitors Five Competitive Grant Programs WSFR awards and monitors five competitive grant programs, according to agency documents and officials we interviewed. These five grant programs are (1) the Boating Infrastructure Tier 2 Grant Program, (2) the Clean Vessel Act Grant Program, (3) the Competitive State Wildlife Grant Program, (4) the Multistate Conservation Grant Program, and (5) the National Coastal Wetlands Conservation Grant Program. While these grant programs support different types of projects, they generally are funded from the Sport Fish Restoration and Boating Trust Fund, and most require non-federal matching funds from the grant recipient based on statute. The exceptions to this are the Multistate Conservation Grant Program, which also receives funds from the Wildlife Restoration Account and does not require matching funds, and the Competitive State Wildlife Grant Program, which receives funding from annual appropriations. 
Table 1 provides summary information on these five competitive grant programs. Across the five competitive grant programs, the number of grants and the funding awarded varied by program. In fiscal years 2012 through 2016, the largest amount of federal grant funding was awarded through the National Coastal Wetlands Conservation Grant Program—about $94 million total—while the least amount of grant funding was awarded through the Competitive State Wildlife Grant Program—about $24 million total, as shown in table 2. Based on our review of competitive grant award documentation for fiscal years 2012 through 2016, the percentage of projects selected from the applications received ranged from 63 percent for the Competitive State Wildlife Grant Program to 100 percent for the Clean Vessel Act Grant Program. While all Clean Vessel Act grant applicants received funding, they did not all receive the total amount of funding requested; rather, the amount of funding was based on the total amount of funding available and the score the application received. The same applies to other grant programs, as the agency sometimes provides less funding to a recipient than was requested depending on various factors, such as the total amount of funding available. For more information on the number of applications received and awards for each grant program, see appendixes II through VI. In fiscal year 2016, the five WSFR competitive grant programs funded a variety of projects, according to our review of the list of awarded projects. Boating Infrastructure Tier 2 Grant Program. Grants were awarded to states for projects focused on improving facilities for recreational boaters. These projects included installing docks, installing boat slips, and constructing restroom and shower facilities for boaters. For more information on this grant program, see appendix II. Clean Vessel Act Grant Program. Grants were awarded to states for projects focused on constructing and maintaining facilities to accept sewage from recreational boats, including sewage pumpout stations and floating restrooms. In addition, some of the grants were to be used for public education materials on the importance of properly disposing of sewage from boats. For more information on this grant program, see appendix III. Competitive State Wildlife Grant Program. Grants were awarded to states and a nongovernmental organization for projects focused on state-identified species of greatest conservation need, which may include endangered or threatened species. These projects included conducting research on these species along with creating and enhancing habitat for these species. For more information on this grant program, see appendix IV. Multistate Conservation Grant Program. Grants were awarded to nongovernmental organizations and federal agencies for a variety of projects that were national or regional in scope, such as providing training to state fish and wildlife officials. Over half of the grants awarded (11 of 18) were awarded to the Association of Fish and Wildlife Agencies (AFWA), but most of the funding went towards the administration of the National Survey of Fishing, Hunting, and Wildlife-Associated Recreation ($6.4 million of the $7.7 million). According to AFWA and WSFR officials, the reason many grants are awarded to AFWA is that this organization is in a unique position to carry out projects that benefit multiple states as required by law. For more information on this grant program, see appendix V. 
National Coastal Wetlands Conservation Grant Program. Grants were awarded to states for projects focused on acquiring and restoring wetlands. Many of these projects focused on acquiring wetlands that benefit wildlife. For more information on this grant program, see appendix VI. Under these five grant programs, state agencies often partner with subgrantees to carry out grant projects. According to WSFR officials, subgrants are common in the Boating Infrastructure Tier 2, Clean Vessel Act, and Competitive State Wildlife grant programs. For example, states are the recipients of Boating Infrastructure Tier 2 grants, but they can subgrant the money to marina operators to oversee the construction of dock facilities. Grant Award Process Involves Announcing Grant Opportunity, Reviewing Applications, and Making Award Decisions and Is Generally Consistent with Regulations The award process WSFR uses for the five competitive grant programs generally involves announcing the grant opportunity and reviewing applications to make award decisions, and in some cases federal agencies or third parties are involved in these activities. The award process used for the five competitive WSFR grant programs is generally consistent with federal grant regulations in the Uniform Guidance. Award Process Involves Announcing Opportunities, Reviewing Applications, and Making Award Decisions, and Third Parties Play a Role in this Process for Some Grant Programs The award process WSFR uses for the five competitive grant programs we reviewed involves announcing the grant opportunity and reviewing applications to make award decisions, and third parties are involved in these activities for some grant programs. Based on our review of agency guidance and interviews with WSFR officials, announcing a grant opportunity begins with developing a Notice of Funding Opportunity (NOFO). The NOFO contains information for applicants to consider when deciding whether to apply, including the amount of funding available, the types of applicants that are eligible, the process to apply, and the criteria that will be used to score applications. NOFOs are available publicly at www.grants.gov. Interested parties then submit grant applications, which WSFR reviews for eligibility by examining the project’s goals, budget, and environmental impact, among other things. A review panel comprised of WSFR staff, and in some cases other FWS staff or a third party organization, reviews and scores the applications based on criteria in the NOFO and develops a list of recommended projects and funding amounts for these projects. This list is forwarded to the Director of FWS for review and approval and if approved, FWS then awards the grant. For all of the grant programs except for the Competitive State Wildlife Grant Program, other federal agencies or third party organizations are involved in some aspects of the award process (as shown in table 3). In general, these entities are more involved in reviewing grant applications than in developing the NOFOs for the grant programs. AFWA, a third party, has the largest involvement in the award process for the Multistate Conservation Grant Program, and implements most aspects of the award process. Specifically, the Wildlife and Sport Fish Restoration Programs Improvement Act of 2000, which established this grant program, requires that FWS only fund grant projects that are on a priority list established by AFWA. 
To develop this list, AFWA has developed a process to review and score applications, and the highest-scoring applications are put on a priority list. This list is presented to all AFWA members at their annual meeting and if approved by the membership, AFWA forwards the priority project list to the Director of the U.S. Fish and Wildlife Service for review and approval. The Multistate Conservation program leader at WSFR said he also reviews grant applications to determine whether the project’s budget is reasonable and whether the project is eligible for funding. Other federal agencies and third party organizations are also involved in the award process for other WSFR competitive grants programs as follows: Boating Infrastructure Tier 2 Grant Program: The Sport Fishing and Boating Partnership Council reviews and scores each grant application and provides these scores to WSFR. The scores from the Council are averaged with WSFR’s scores to develop a final ranked list of grant projects. Officials from the Council said that they provide expertise to the review process since Council members are often engineers or members of boating organizations. Clean Vessel Act Grant Program: Program regulations state that WSFR will convene a review panel to include representatives from WSFR, the U.S. Environmental Protection Agency (EPA), the U.S. Coast Guard, and the National Oceanic and Atmospheric Administration (NOAA). WSFR provides the grant applications and WSFR’s proposed list of recommended projects to these agencies for review. According to WSFR officials, they have received limited input from these agencies, due in part to staff turnover at these agencies in recent years. For example, in fiscal year 2016, EPA indicated in an email to WSFR that it agreed with the proposed funding decisions for the program, and NOAA sent a letter to WSFR indicating that it had not reviewed all of the applications but it supported the program and did not object to the agency’s scoring of the applications. National Coastal Wetlands Conservation Grant Program: Staff from FWS’ Coastal Program partner with WSFR in developing the NOFO, reviewing applications, and scoring applications. For example, the review panel for fiscal year 2016 included seven staff from the Coastal Program and four staff from WSFR. Award Process Is Generally Consistent with Federal Grant Regulations The five competitive WSFR grant programs we reviewed follow an award process that is generally consistent with federal grant regulations found in the Uniform Guidance. Specifically, the Uniform Guidance requires that grant funding opportunities be publicly announced and that the NOFO contains certain information, including the criteria and process used to evaluate applications. In reviewing the five NOFOs used for the fiscal year 2016 grant cycle for the five competitive grant programs, we found that all five NOFOs were made publicly available on the website www.grants.gov, and the NOFOs contained the information required by the Uniform Guidance. These NOFOs contained criteria for scoring applications that matched the criteria in program-specific regulations for the grant programs that have them. For example, the regulations for the National Coastal Wetlands Conservation Grant Program contain 13 different scoring criteria, which were listed in the NOFO for that program. The Uniform Guidance also contains provisions regarding a review process for grant applications. 
Specifically, the Uniform Guidance requires that, unless prohibited by federal statute, the agencies must design and execute a merit review process for competitive grant applications, and that this process must be described in the NOFO. In accordance with the Uniform Guidance, Interior issued guidance on implementing a merit review process in December 2014. This guidance requires that the "competitive process be fair and impartial" and that all applicants be evaluated based on the criteria in the funding announcement. In reviewing the five competitive grant programs, we found that there was a merit review process and that this process was described in the five NOFOs for fiscal year 2016 that we reviewed. As part of the merit review process, four of the competitive grant programs convened review panels attended by those who scored applications for the fiscal year 2016 grant cycle, and these panels developed a recommended list of projects, according to our review of award documents. The exception was the Clean Vessel Act Grant Program, where an in-person review panel meeting was not held but rather projects were scored separately within each region, and regional officials submitted their scores to WSFR headquarters. These regional scores were then combined with scores from the WSFR program leader, who developed a recommended list of projects, according to WSFR officials. The Uniform Guidance also requires that federal agencies must establish conflict of interest policies for federal awards. As a result, in December 2014, Interior established a policy requiring agency officials who evaluate grant applications as part of a review panel to sign a conflict of interest certificate. In our review of the award documents for the fiscal year 2016 grant cycle, we generally found signed copies of these certificates for members of the review panels, except for the Multistate Conservation Grant Program. This program did not have certificates for the fiscal year 2016 grant cycle because AFWA, which oversees the scoring of applications, did not require these forms until the fiscal year 2017 grant cycle. We reviewed these forms for the fiscal year 2017 grant cycle and found that each member of the AFWA review panel had submitted a form. AFWA officials said that the organization had previously required a general conflict of interest form to be signed by its members, and they started requiring a specific form for review panel members in fiscal year 2017 to align with Interior's policy. WSFR Monitors Grants in a Manner Consistent with Federal Grant Regulations, but Performance Reports Were Sometimes Missing Required Information WSFR Monitors Grants through Review of Financial and Performance Reports WSFR monitors its competitive grants primarily by reviewing annual financial and performance reports submitted by grant recipients, which is consistent with federal regulations. We found in our review of these reports for a sample of grant projects awarded funds in fiscal year 2015 that grant recipients generally submitted them on time, but that some performance reports were missing required information. According to WSFR officials, their primary method for monitoring projects funded by competitive grants is to review financial and performance reports submitted by grant recipients. Grant recipients submit these reports to WSFR staff in FWS regional offices. According to regional WSFR officials, regional staff who specialize in financial matters review the financial reports to ensure they are filled out correctly. 
Staff do this by comparing financial information on the amount of federal funding reported by recipients with amounts found in Interior’s Financial and Business Management System, which is used to track grants. In addition, WSFR grant specialists review the performance reports to ensure they contain required information, such as an update on the progress of meeting the specified goals of a grant project. If WSFR staff identify discrepancies in the financial reports or deficiencies in the performance reports, WSFR regional staff work with the grant recipients to resolve them. WSFR regional staff occasionally perform site visits to grant projects to verify grant activities described in the performance reports. WSFR regional staff said they perform site visits as funding and time allow and that recently they have had to limit site visits due to budget and staffing constraints. Actions to Monitor Grants Are Consistent with Federal Grant Regulations, but Some Performance Reports Were Missing Required Information The Uniform Guidance contains requirements for financial and performance reports for monitoring federal grants. Specifically, the Uniform Guidance requires federal agencies to collect financial information from grant recipients at least annually. The Uniform Guidance also requires grant recipients to submit performance reports at least annually, and these reports are to include certain information, such as a comparison of the actual accomplishments of a grant with its goals and the reasons why goals were not met, if appropriate. To further guide FWS staff in implementing these requirements, the FWS Service Manual provides additional information on the agency’s expectations for these reports, including the required content. For example, the Service Manual states that recipients should submit financial information, including the amount of federal and matching funds spent and remaining on a grant. The Service Manual also identifies the standard federal form that should be used for this report. For performance reports, the FWS Service Manual states that FWS must require certain information from grant recipients, including a comparison of actual accomplishments to the goals of the grant projects, and if the goals were not met, the reasons why. In our review of the agency’s monitoring process for selected grants awarded in fiscal year 2015, we found that WSFR required both financial and performance reports at least annually, as required by the Uniform Guidance. In addition, the number and due dates of these reports were specified in the letters provided to grant recipients when they were awarded the grant. These award letters also specified the amount of federal funding for the grant along with any required non-federal matching funds. We reviewed 53 financial reports and 51 performance reports for a sample of 32 grants awarded in fiscal year 2015 and found that most reports were submitted by their due date or within 2 weeks of this date, as table 4 shows. In addition, the majority of the reports we reviewed met the content requirements found in the Uniform Guidance and the FWS Service Manual. Specifically, all 53 financial reports were submitted on the standard form prescribed by the Service Manual. In addition, the financial information on the amount of the grant and non-federal matching funds aligned with the amounts specified in the award letter for nearly all the financial reports we reviewed. 
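The reconciliation step described above amounts to a simple check: compare the federal and non-federal amounts a recipient reports against the amounts stated in the award letter and flag any difference for follow-up. The Python sketch below is illustrative only; the field names, tolerance, and dollar figures are assumptions and do not reflect FWS or Interior systems.

    def check_financial_report(report, award_letter, tolerance=0.01):
        """Return a list of discrepancies between a financial report and its award letter."""
        discrepancies = []
        for field in ("federal_funds", "nonfederal_match"):
            reported = report.get(field)
            expected = award_letter[field]
            if reported is None:
                discrepancies.append(field + ": missing from the financial report")
            elif abs(reported - expected) > tolerance:
                discrepancies.append(
                    "{}: reported ${:,.2f} does not match award letter ${:,.2f}".format(field, reported, expected)
                )
        return discrepancies

    # Hypothetical grant with a 25 percent non-federal match on a $400,000 project.
    award = {"federal_funds": 300000.00, "nonfederal_match": 100000.00}
    report = {"federal_funds": 300000.00, "nonfederal_match": 95000.00}
    for issue in check_financial_report(report, award):
        print(issue)  # any output would be resolved with the grant recipient

Any discrepancy the check surfaces corresponds to the follow-up described above, in which regional staff work with the recipient to resolve it.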
In our review of performance reports, we found that most contained information required by the Uniform Guidance on the grant project's goals, progress toward those goals, and an explanation for why the goals had not been met, if applicable. However, in our sample, nine performance reports submitted for six awarded grants were missing some of this information. For example, one performance report stated that "no activities had occurred" under the grant, but it did not specify what the goals of the grant were or why no progress had been made, as required by the Uniform Guidance. Additionally, two annual performance reports for another grant described the goals of the grant and said they had not been met, but did not provide information as to why. Officials from one state fish and wildlife agency said that there was not a template to follow when preparing performance reports. Officials from another state agency said that while the requirements for performance reporting were laid out clearly in most NOFOs, they could be interpreted differently by different state officials, and these officials needed to ask for clarification from WSFR officials. The format and content of the performance reports are generally left for grant recipients to choose, according to WSFR officials, because neither the Uniform Guidance nor internal FWS guidance recommends a specific template for the performance reports. However, the program leader for the Multistate Conservation Grant Program provides grant recipients with a suggested template to follow when preparing performance reports. The template contains areas in which to describe the goals and objectives of the grant along with progress made toward them. The seven performance reports we reviewed for the Multistate Conservation Grant Program followed this template and, as a result, all contained the information required by the Uniform Guidance. We also found that Region 8 developed a suggested template for performance reports, but the template did not explicitly ask grant recipients to explain why the goals of a grant had not been met. The lack of a clear performance report template may help explain why 2 of the 10 Region 8 performance reports we reviewed did not include clear explanations of why the goals of the grant had not been met, as required by the Uniform Guidance. According to WSFR officials, the agency is planning to develop a more standardized reporting process, but it has not established a timeline for completing this effort. According to Standards for Internal Control in the Federal Government, management should design control activities to achieve objectives and respond to risks. This includes designing mechanisms to help monitor performance to ensure the objectives of the program are being achieved. As noted previously, the Uniform Guidance specifies that grant performance reports contain a comparison of actual accomplishments to the goals of the project, and the reasons why the goals were not met, as appropriate. The absence of a clear format for these reports may have contributed to some reports not containing all the information needed to comply with federal grant requirements. 
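A standardized template of the kind discussed above could be as simple as a required-field check that mirrors the Uniform Guidance content: the project's goals, a comparison of actual accomplishments to those goals, and the reasons any goal was not met. The Python sketch below is a hypothetical illustration, not an FWS form; the field names are assumptions.

    REQUIRED_FIELDS = ("project_goals", "accomplishments_vs_goals")

    def validate_performance_report(report):
        """Return a list of missing items for one annual performance report."""
        problems = ["missing: " + field for field in REQUIRED_FIELDS if not report.get(field)]
        # An explanation is required whenever the report indicates the goals were not met.
        if report.get("goals_met") is False and not report.get("reason_goals_not_met"):
            problems.append("missing: reason_goals_not_met (required when goals were not met)")
        return problems

    # Example modeled on the report described above that stated only that
    # "no activities had occurred," without naming the goals or explaining why.
    incomplete = {"accomplishments_vs_goals": "No activities had occurred.", "goals_met": False}
    print(validate_performance_report(incomplete))
    # ['missing: project_goals', 'missing: reason_goals_not_met (required when goals were not met)']

A check of this sort would flag the incomplete reports we identified before they were accepted, rather than after the fact.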
Without a template or some other standardized method for performance reporting across competitive grant programs, WSFR grant recipients may continue to submit performance reports to WSFR that do not meet all of the content requirements of the Uniform Guidance and do not convey all the information needed for FWS to oversee its competitive grant programs. Conclusions WSFR awards and monitors five competitive grant programs and, in general, WSFR’s process for awarding and monitoring these grants is consistent with regulations for federal grants established in OMB’s Uniform Guidance. However, there were instances in which the performance reports submitted by grant recipients did not include a comparison of actual accomplishments to the goals of the project, as required by the Uniform Guidance. WSFR does not have a template for performance reporting for four of the five competitive grant programs we reviewed, and the template used by one region does not clearly ask for all required information. Without a template or standardized method that facilitates the collection of performance information, WSFR grant recipients may continue to submit performance reports to WSFR that do not contain the information required by the Uniform Guidance and do not convey all the needed information for FWS to oversee its competitive grant programs. Recommendation for Executive Action The Director of the U.S. Fish and Wildlife Service should direct WSFR to develop a template or other standardized method to facilitate collection of all required information for grant performance reports. (Recommendation 1) Agency Comments We provided a draft of this report to the Department of the Interior for review and comment. In its written comments, reproduced in appendix VII, the Department of the Interior agreed with our recommendation and described actions it plans to take. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Interior, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) identify and describe the competitive grant programs that the Wildlife and Sport Fish Restoration (WSFR) program awards and monitors; (2) examine how WSFR awards grants under these programs and the extent to which this is consistent with relevant federal regulations; and (3) examine how WSFR monitors grants under these programs and the extent to which this is consistent with relevant federal regulations. To identify the competitive grant programs that WSFR both awards and monitors, we reviewed federal laws and regulations related to WSFR grant programs. In particular, we reviewed the 1937 Pittman-Robertson Wildlife Restoration Act and the 1950 Dingell-Johnson Sportfish Restoration Act and amendments to these laws, along with associated regulations for these laws. We also reviewed OMB’s Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards (Uniform Guidance) for federal grant awards. 
In addition, we reviewed agency guidance and information on grant programs; agency budget documents; and grant program descriptions in the Catalog of Federal Domestic Assistance, a compilation of federal assistance programs that includes grants. Based on our review of these materials, we developed an initial list of grant programs that WSFR had a role in managing, and we spoke with WSFR officials to gather information on which competitive grant programs met the criteria of WSFR being responsible for both awarding and monitoring grants. We corroborated this list of grant programs with WSFR officials. We analyzed data on the competitive grant programs we identified from Interior's Financial and Business Management System for fiscal years 2012 through 2016, the most recent five-year period for which the award process had been completed. To determine the reliability of these data, we interviewed agency officials and conducted electronic testing of the data, and we determined the data were sufficiently reliable for our purposes. To examine the process WSFR uses to award competitive grants and the extent to which this is consistent with relevant federal regulations, we reviewed relevant agency regulations and guidance along with relevant sections of the Uniform Guidance. To assess the extent to which the award process is consistent with relevant regulations, we compared the process WSFR uses to award grants with OMB's Uniform Guidance. In addition, we reviewed award documents for grants awarded in fiscal year 2016 for the competitive grant programs we identified. We selected fiscal year 2016 because it was the most recently completed award cycle. These documents included the Notice of Funding Opportunity, which described the funding opportunity to applicants; documentation of the scoring of applications; and memos that documented the results of the scoring process. We also reviewed the entire grant files for eight grants awarded in fiscal year 2016 to determine what documents were contained in these files. In selecting this non-probability sample of files, we selected at least one file for each of the grant programs we examined and at least one file from each of the FWS regional offices that had a grant awarded in fiscal year 2016. However, one of these files was misclassified under an incorrect grant program, so we excluded it from our review. As a result, we did not examine an entire file from the FWS Region 8 office. We reviewed the award documents and files using a standard document review tool to examine specific parts of these documents, such as the descriptions of the process used to review and score applications. To ensure that this review tool was filled out correctly, two GAO staff members reviewed the documents: one filled out the review tool and the other verified this work. In addition to looking at award documents for fiscal year 2016, we also examined memos that documented the results of the grant scoring process for fiscal years 2012 through 2015 for the grant programs we identified. We reviewed the grant scoring memos from the fiscal year 2012 through 2016 grant cycles because they comprise the most recent five-year period for which the award process had been completed. To examine the process WSFR uses to monitor competitive grants and the extent to which those processes are consistent with relevant federal regulations, we reviewed relevant agency regulations and guidance along with relevant sections of the Uniform Guidance. 
To assess the extent to which the monitoring process is consistent with relevant regulations, we compared the process WSFR uses to monitor grants with OMB’s Uniform Guidance. We used a standard document review tool to review financial and performance reports for 32 of 129 grants that were awarded in fiscal year 2015 to determine the extent to which these reports contained information required by the Uniform Guidance. We selected fiscal year 2015 to ensure that enough time had elapsed under these grants for financial and performance reports to have been required and submitted. In selecting this non-probability sample of files, we ensured that we had at least one file for each of the grant programs and at least one file from each of the eight FWS regional offices. For financial reports, we determined whether reported financial information on the grant award and matching funds aligned with the dollar amounts in their award letters, whether the reports were submitted by their due dates, and whether they were submitted on the correct form. For performance reports, we determined whether they were submitted by their due dates and whether they contained information on the grant project’s goals, progress toward those goals, and an explanation why the goals had not been met, if applicable. The Uniform Guidance requires this information to be in performance reports. The results from our analysis of these documents are not generalizable to all monitoring documents for grants awarded in fiscal year 2015, but allowed us to examine how WSFR monitored selected grants. For all three objectives, we interviewed WSFR staff responsible for managing WSFR grant programs. These included WSFR program leaders at headquarters and WSFR staff in each of the eight FWS regional offices that are responsible for the five competitive grant programs we reviewed. We asked these officials about the role they played in awarding and monitoring competitive grants. In addition, we interviewed other FWS officials that were involved with managing grants and officials from select third party organizations that played a role in awarding grants, including the Association of Fish and Wildlife Agencies and the Sport Fishing and Boating Partnership Council. We also interviewed grant applicants, including state fish and wildlife agency officials and nongovernmental organizations to learn about their experiences during the award and monitoring process for WSFR grants. We selected applicants that had various experiences with the grant programs in fiscal year 2016, including those that applied and did not receive funding and those that applied and received funding. The results of the interviews with grant applicants cannot be generalized to other applicants, but were used to obtain perspectives on the grant award and monitoring processes. We conducted this performance audit from March 2017 to February 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: Information on the Boating Infrastructure Tier 2 Grant Program Below is summary information on the Boating Infrastructure Tier 2 Grant Program that we compiled from reviewing relevant laws and regulations, reviewing agency documents, and interviewing agency officials. Establishment and goals of the program: The program was established by the Sportfishing and Boating Safety Act of 1998, which amended the Dingell-Johnson Sport Fish Restoration Act. The program provides grants to be used for constructing, renovating, or maintaining docking or mooring facilities for transient, nontrailerable recreational vessels that are 26 feet or greater in length. These facilities generally must allow public access, and examples of facilities that can be built with these funds include boat slips, piers, buoys, fuel stations, restrooms, bulkheads, dredging, or laundry facilities. Grants can also be awarded to produce information and education materials specific to the program or projects funded by the program. Governor-designated agencies in a state of the United States, the District of Columbia, American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, or the U.S. Virgin Islands are eligible for this grant program. The designated agency is often a state natural resource or fish and wildlife agency. Subgrants to other entities are allowed. According to Wildlife and Sport Fish Restoration (WSFR) officials, subgrants under this program are common. About 2 percent of the Sport Fish Restoration and Boating Trust Fund is devoted to the grant program. In fiscal year 2016, there was $8.6 million in federal funds available for the Tier 2 program. The maximum grant award is $1.5 million per project, and recipients generally must provide matching funds worth at least 25 percent of the total cost of projects. Funds not obligated within three fiscal years shall be transferred to the Coast Guard and expended for state recreational boating safety programs. Highlights from the award process used in fiscal year 2016: The Notice of Funding Opportunity for fiscal year 2016 was posted on www.grants.gov on June 22, 2015, and applications were due by September 18, 2015. Thirteen states submitted a total of 22 applications for projects. Regional staff for the Wildlife and Sport Fish Restoration Program and members from the Sport Fishing and Boating Partnership Council scored the applications and recommended that 10 projects be fully funded and one be partially funded. The Deputy Director of the U.S. Fish and Wildlife Service approved the list of recommended projects on March 11, 2016. The U.S. Fish and Wildlife Service announced the selected projects on March 17, 2016. Information on past applications and selected projects: Table 5 shows the number of applications received and selected projects under the Boating Infrastructure Tier 2 Grant Program in fiscal years 2012 through 2016. Appendix III: Information on the Clean Vessel Act Grant Program Below is summary information on the Clean Vessel Act Grant Program that we compiled from reviewing relevant laws and regulations, reviewing agency documents, and interviewing agency officials. Establishment and goals of the program: The program was established by the Clean Vessel Act of 1992, which amended the Dingell-Johnson Sport Fish Restoration Act. 
This program funds grants to coastal states for certain activities, such as constructing and renovating pumpout stations and waste reception facilities and conducting a program to educate recreational boaters about the problem of human body waste discharges from vessels and inform them of the locations of pumpout stations and waste reception facilities. The program also funds grants to inland states meeting certain criteria. Under program regulations, facilities need to be open to the public in order to be eligible for a grant. Since the program was established, over 6,000 dump or pumpout facilities have been built and over 3,700 of these facilities have been operated or maintained using grant funds. Governor-designated agencies in a state of the United States, the District of Columbia, American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, or the U.S. Virgin Islands are eligible for this grant program. The designated agency is often a state natural resource or fish and wildlife agency. Subgrants to other entities are allowed. According to Wildlife and Sport Fish Restoration (WSFR) officials, subgrants under this program are common. About 2 percent of the Sport Fish Restoration and Boating Trust Fund is devoted to the grant program. In fiscal year 2016, there was $13.7 million in federal funds available for the program. The maximum award amount is generally $1.5 million, and recipients generally must provide matching funds worth at least 25 percent of the total cost of projects. Funds not obligated within three fiscal years shall be transferred to the U.S. Coast Guard and expended for state recreational boating safety programs. Highlights from the award process used in fiscal year 2016: The Notice of Funding Opportunity (NOFO) for fiscal year 2016 was posted on www.grants.gov on August 12, 2015, and applications were due by December 2, 2015. A total of 21 states and the District of Columbia submitted 33 applications. WSFR staff from the U.S. Fish and Wildlife Service regions scored applications in their regions; then, these scores were averaged with scores from the WSFR program leader for the Clean Vessel Act grant program, who scored all of the applications. WSFR provided copies of grant applications to the U.S. Environmental Protection Agency (EPA), U.S. Coast Guard, and National Oceanic and Atmospheric Administration (NOAA) for them to review and score the applications. WSFR also provided its scores on the applications to these agencies. EPA informed WSFR in an email that it agreed with the proposed funding decisions for the program. According to WSFR, the Coast Guard did not provide comments on the proposed scores. NOAA sent a letter to WSFR indicating that it had not reviewed all of the applications but it supported the program and did not object to the agency’s scoring of the applications. The Deputy Director of the U.S. Fish and Wildlife Service approved the list of recommended projects on April 28, 2016. The U.S. Fish and Wildlife Service announced the winning grant awards on May 11, 2016. According to the fiscal year 2016 NOFO, this program attempts to provide support to as many eligible projects as possible. In practice, all eligible applications have been awarded funds from fiscal year 2012 through fiscal year 2016. If funding requests exceed available funds, WSFR applies a formula to allocate funding based on the score the application receives. 
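The two computational steps described above, averaging regional scores with the program leader's scores and prorating funds when requests exceed the amount available, can be sketched as follows. Because the report does not spell out WSFR's actual formula, the score-weighted proration below is only an illustrative assumption, and the state names, scores, and dollar amounts are hypothetical.

    def average_scores(regional, program_leader):
        """Average the regional score and the program leader's score for each application."""
        return {app: (regional[app] + program_leader[app]) / 2 for app in regional}

    def allocate(requests, scores, available):
        """Fund requests in full if possible; otherwise prorate by score-weighted request."""
        if sum(requests.values()) <= available:
            return dict(requests)
        weights = {app: requests[app] * scores[app] for app in requests}
        total_weight = sum(weights.values())
        return {app: available * weight / total_weight for app, weight in weights.items()}

    regional = {"State A": 80, "State B": 70}
    leader = {"State A": 90, "State B": 60}
    scores = average_scores(regional, leader)  # {'State A': 85.0, 'State B': 65.0}
    print(allocate({"State A": 900000, "State B": 600000}, scores, available=1200000))

In this hypothetical oversubscribed case, the higher-scoring application receives a larger share of the available funds, consistent with the program's stated practice of funding every eligible application at a level that reflects its score.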
Information on past applications and selected projects: Table 6 shows the number of applications received and selected projects under the Clean Vessel Act Grant Program in fiscal years 2012 through 2016. Appendix IV: Information on the Competitive State Wildlife Grant Program Below is summary information on the Competitive State Wildlife Grant Program that we compiled from reviewing relevant laws and regulations, reviewing agency documents, and interviewing agency officials. Establishment and goals of the program: The State Wildlife Grant Program provides grants for the development and implementation of programs for the benefit of wildlife and their habitats, including species that are not hunted or fished. Eligible activities include planning and conservation implementation. The competitive portion of the State Wildlife Grant Program was established by the Consolidated Appropriations Act, 2008. Fish and wildlife agencies in a state of the United States, the District of Columbia, American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, or the U.S. Virgin Islands and, at the discretion of affected states, the regional Association of Fish and Wildlife Agencies are eligible for this grant program. According to the Notice of Funding Opportunity (NOFO) for this program, for each of the 48 contiguous United States and the District of Columbia, at least two states must be active participants in proposed conservation actions. Applicants are also encouraged to engage with other partners on projects. Potential partners include tribes, federal agencies, other state agencies, local governments, nongovernmental organizations, academic institutions, private landowners, industry groups, and international partners. The program is governed and funded through annual appropriations acts. In fiscal year 2016, there was about $5.6 million available for the program. For most applicants proposing a multi-state project, the maximum award is $500,000 and the minimum award is $50,000. Applicants must provide matching funds worth at least 25 percent of the total cost of projects. Funds for these grants have been appropriated to remain available until expended. The appropriations acts governing the program have generally provided that any amount apportioned in one fiscal year that remains unobligated by the end of the next fiscal year is to be reapportioned in the following fiscal year. Highlights from the award process used in fiscal year 2016: The NOFO for fiscal year 2016 was posted on www.grants.gov on November 20, 2015, and applications were due by February 19, 2016. The Wildlife and Sport Fish Restoration Program (WSFR) received 21 eligible applications. Applications were reviewed by a panel consisting of WSFR staff from each region of the U.S. Fish and Wildlife Service (FWS). The panel recommended fully funding 14 projects and partially funding 1 project, for a total of $5.6 million, with $2.9 million in non-federal matching funds. The Deputy Director of the U.S. Fish and Wildlife Service approved the list of recommended projects on May 19, 2016. FWS announced the selected projects on May 20, 2016. Information on past applications and selected projects: Table 7 shows the number of applications received and selected projects under the Competitive State Wildlife Grant Program in fiscal years 2012 through 2016. 
Appendix V: Information on the Multistate Conservation Grant Program Below is summary information on the Multistate Conservation Grant Program that we compiled from reviewing relevant laws and regulations, reviewing agency documents, and interviewing agency officials. Establishment and goals of the program: The program was established by the Wildlife and Sport Fish Restoration Programs Improvement Act of 2000, which amended the Pittman-Robertson Wildlife Restoration Act and the Dingell-Johnson Sport Fish Restoration Act. The program focuses on funding multistate conservation projects that benefit a certain number of states or a regional association of state fish and game departments. Fish and wildlife agencies in a state of the United States, the District of Columbia, American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, or the U.S. Virgin Islands are eligible for this grant program. The U.S. Fish and Wildlife Service (FWS) is also an eligible applicant for the purpose of carrying out the National Survey of Fishing, Hunting, and Wildlife-Associated Recreation, which is conducted every five years. Nongovernmental organizations are also eligible, provided that they submit a certification that they will not use the grant funds to fund, in whole or in part, any activity of the organization that promotes or encourages opposition to the regulated hunting or trapping of wildlife or the regulated taking of fish. Grant projects shall not be eligible unless they will benefit at least 26 states, a majority of states in a FWS region, or a regional association of state fish and wildlife agencies. By statute, FWS may only make grants for projects identified on a priority list prepared by the Association of Fish and Wildlife Agencies (AFWA), a nongovernmental organization that represents state fish and wildlife agencies on conservation and land management issues, after following certain procedures. Up to $6 million annually is authorized to fund grants, with no more than $3 million from the Wildlife Restoration Account and $3 million from the Sport Fish Restoration Trust Fund. In practice, some of the grant funds are carried over to future years to fund certain multi-year projects, such as the National Survey of Fishing, Hunting, and Wildlife-Associated Recreation. This program does not have a matching funds requirement. Funds not obligated within two fiscal years revert to the Wildlife Restoration and Sport Fish Restoration programs for apportionment to the states. Highlights from the award process used in fiscal year 2016: The Notice of Funding Opportunity for fiscal year 2016 was posted on www.grants.gov on April 13, 2015, and the deadline for submitting letters of intent to AFWA was May 11, 2015. These letters of intent provided a summary of the grant project and were scored by AFWA's national grants committee. The highest-scoring applicants were invited to submit a full grant application to AFWA by August 14, 2015. The national grants committee scored these applications and presented these scores to AFWA members at its annual meeting in September 2015. Members voted to approve the priority list at this meeting. AFWA provided the priority list containing 18 projects to FWS. The Deputy Director of Program Management and Policy of the U.S. Fish and Wildlife Service approved the list of recommended projects on December 7, 2015. During the award process, Wildlife and Sport Fish Restoration staff also reviewed the grant applications. 
FWS announced the selected projects on February 11, 2016. Information on past applications and selected projects: Table 8 shows the number of applications received and selected projects under the Multistate Conservation Grant Program in fiscal years 2012 through 2016. Appendix VI: Information on the National Coastal Wetlands Conservation Grant Program Below is summary information on the National Coastal Wetlands Conservation Grant Program that we compiled from reviewing relevant laws and regulations, reviewing agency documents, and interviewing agency officials. Establishment and goals of the program: The program was established by the Coastal Wetlands Planning, Protection and Restoration Act. This program's primary goal is the long-term conservation of coastal wetland ecosystems. It accomplishes this by helping states protect, restore, and enhance their coastal habitats through a competitive grants program. Since 1992, the U.S. Fish and Wildlife Service (FWS) has awarded over $377 million through these grants. Governor-designated agencies of an eligible coastal state are eligible for this grant program. The designated agency is often a state natural resource or fish and wildlife agency. Subgrants are allowed, are relatively common, and can be awarded to local governments and nonprofit organizations. About 3 percent of the Sport Fish Restoration and Boating Trust Fund is devoted to the grant program. In fiscal year 2016, there was about $20.3 million in federal funds available for the program. The maximum award amount is $1 million, and states generally must provide 50 percent of the total cost of the project. However, states that have established and are using a state fund for the purpose of acquiring coastal wetlands must provide a minimum of 25 percent of the total cost of projects. Projects are generally funded through annual proposals. Funds must be obligated by December 31st of the year after funds were allocated, meaning that, for example, fiscal year 2015 funds must be obligated by December 31, 2016. Funds not obligated during the specified time frame return to the FWS program account. Highlights from the award process used in fiscal year 2016: The Notice of Funding Opportunity for fiscal year 2016 was posted on www.grants.gov on February 5, 2015, and applications were due by June 24, 2015. The FWS Wildlife and Sport Fish Restoration Program (WSFR) received 32 applications. A panel of WSFR and FWS Coastal Program regional officials scored and ranked the applications, and recommended 28 projects for funding. The Deputy Director of the U.S. Fish and Wildlife Service approved the list of recommended projects on January 13, 2016. WSFR awarded $20 million in grant funding, which was supplemented by $20.5 million in non-federal matching funds. FWS announced the selected projects on February 2, 2016. Information on past applications and selected projects: Table 9 shows the number of applications received and selected projects under the National Coastal Wetlands Conservation Grant Program in fiscal years 2012 through 2016. Appendix VII: Comments from the Department of the Interior Appendix VIII: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Elizabeth Erdmann (Assistant Director), Steven Bagley, and Scott Heacock made key contributions to this report. Additional contributions were made by Thomas M. James, Ying Long, Kim McGatlin, Patricia Moye, Anne Rhodes-Kline, and Sheryl Stein.
Why GAO Did This Study FWS awarded $1.5 billion in grants in fiscal year 2016, which represented about half of the agency's budget. In general, FWS awards two types of grants: (1) formula grants, which are distributed to recipients based on a required formula, and (2) competitive grants, where potential recipients submit an application for funding that is reviewed and scored against criteria. Within FWS, WSFR manages several grant programs. GAO was asked to review WSFR's management of its competitive grant programs. This report (1) identifies and describes competitive grant programs that WSFR awards and monitors; (2) examines how WSFR awards grants under these programs and the extent to which this is consistent with relevant regulations; and (3) examines how WSFR monitors grants under these programs and the extent to which this is consistent with relevant regulations. GAO reviewed relevant federal laws, regulations, and FWS guidance; analyzed agency data for fiscal years 2012-2016; reviewed award documents for fiscal year 2016 and a sample of monitoring documents for grants awarded in fiscal year 2015 (selected to ensure sufficient time for required reports to be submitted) and compared these with requirements from relevant regulations; and interviewed WSFR headquarters and regional officials, as well as grant recipients. What GAO Found The U.S. Fish and Wildlife Service's (FWS) Wildlife and Sport Fish Restoration (WSFR) program, within the Department of the Interior, awards and monitors five competitive grant programs. These grant programs fund different types of projects ranging from building docks to acquiring wetlands. GAO found that the number of grants and funding awarded varied by grant program from fiscal years 2012 through 2016. The award process WSFR uses for the five competitive grant programs generally involves publicly announcing the grant opportunity through a Notice of Funding Opportunity, which contains information applicants need to consider when applying, such as available funding and criteria that will be used to score applications. A panel comprised of WSFR staff, and in some cases other FWS staff or a third party organization, reviews and scores the applications based on the criteria in the Notice of Funding Opportunity and develops a list of recommended projects and funding amounts. The list is forwarded to the Director of FWS for review and approval. GAO found that WSFR's grant award process is consistent with federal regulations for awarding federal grants. WSFR monitors its competitive grants by reviewing financial and performance reports submitted by grant recipients. In general, this process is consistent with relevant regulations, but some of the performance reports were missing required information. Specifically, for fiscal year 2015 grants GAO reviewed, financial and performance reports were generally submitted on time by grant recipients, but several performance reports (9 of 51) did not include a comparison of actual accomplishments to the goals of the grant, as required by regulations. WSFR does not have a template for grant recipients to follow in preparing these reports for most of the grant programs, and the template used by one region does not clearly ask for all required information. WSFR officials have said the agency plans to develop a more standardized reporting process, but no timeline has been established. 
According to Standards for Internal Control in the Federal Government, management should design control activities to achieve objectives and respond to risks, including designing mechanisms to help monitor performance. Without a template or standardized method that facilitates the collection of performance information, WSFR grant recipients may continue to submit performance reports that are missing information needed by FWS to monitor its competitive grant programs. What GAO Recommends GAO recommends that FWS develop a template or other standardized method to facilitate collection of all required information for grant performance reports. The Department of the Interior concurred with this recommendation.
Background VA Contracting Organizational Structure VA serves veterans of the U.S. armed forces and provides health, pension, burial, and other benefits. The department's three operational administrations—the Veterans Health Administration (VHA), Veterans Benefits Administration, and National Cemetery Administration—operate largely independently from one another. Each has its own contracting organization, though all three administrations also work with national contracting offices under the Office of Acquisition, Logistics, and Construction for certain types of purchases, such as medical equipment and information technology. VHA, which provides medical care to about 9 million veterans at 172 medical centers, is by far the largest of the three administrations and, as such, is the primary focus of our review. These VHA medical centers are organized into 18 Veterans Integrated Service Networks (VISN), organizations that manage medical centers and associated clinics across a given geographic area. Each VISN is served by a corresponding Network Contracting Office, which awards contracts for goods and services needed by the VISN. VA's Office of Procurement Policy and Warrant Management (referred to in this report as the Office of Procurement Policy), within the Office of Acquisition and Logistics, is responsible for all procurement policy matters at the VA. Figure 1 shows the organizational structure of the procurement function at VA. Preferences for Veteran-Owned Small Businesses in Awards of VA Contracts The 2006 Veterans Benefits, Health Care, and Information Technology Act established a requirement that VA contract competitions must be restricted to service-disabled veteran-owned small businesses (SDVOSBs) and veteran-owned small businesses (VOSBs) if: 1) the contracting officer reasonably expects that at least two such businesses will submit offers, and 2) the award can be made at a fair and reasonable price that offers the best value to the government. (In this report, we refer to these two elements of the law as criteria.) This determination is known as the "VA Rule of Two." The statute also establishes an order of priority for the contracting preferences, with the highest preference for SDVOSBs, followed by VOSBs. (In this report, we refer to these businesses collectively as SD/VOSBs.) There are a number of socio-economic programs implemented in the Federal Acquisition Regulation (FAR) that provide contracting preferences or special contracting authorities for specific groups. These include contracting preferences for small businesses overall as well as more targeted preferences such as the Small Business Administration's (SBA) 8(a) Business Development Program, which assists disadvantaged small businesses. Unlike these other socioeconomic preference programs that generally apply to agencies across the federal government, the 2006 statute created a preference for SD/VOSBs that applies only to VA. In June 2016, the Supreme Court decision in Kingdomware Technologies, Inc. v. United States found that the manner in which VA had been applying the preference for SD/VOSBs was not consistent with the 2006 statute. This case arose because VA was not applying the statute's preference in competitions for orders under the Federal Supply Schedule (FSS), which VA uses to order medical supplies, among other things. The Supreme Court ruled that VA's FSS orders are subject to the 2006 statute, and that the VA Rule of Two must be applied because the statute mandates its use before contracting under competitive procedures. Previously, VA considered FSS a mandatory source of supplies and services that must be used when possible, but did not require that contracting officers apply the Rule of Two when placing FSS orders.
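To make the two criteria and the order of priority concrete, the short sketch below expresses the determination in code. It is purely illustrative and is not VA policy, guidance, or software; the function and field names are hypothetical stand-ins for a contracting officer's market research findings and professional judgment, and treating the VOSB tier as including SDVOSBs is a simplifying assumption.

```python
"""Illustrative sketch of the VA Rule of Two determination described above.

Not VA software, policy, or guidance; names and inputs are hypothetical
stand-ins for a contracting officer's market research and judgment.
"""
from dataclasses import dataclass


@dataclass
class MarketResearch:
    # SDVOSBs the contracting officer reasonably expects to submit offers
    expected_sdvosb_offerors: int
    # other (non-service-disabled) VOSBs expected to submit offers
    expected_other_vosb_offerors: int
    # judgment that award can be made at a fair and reasonable price
    # that offers the best value to the government
    fair_and_reasonable_expected: bool


def va_rule_of_two(research: MarketResearch) -> str:
    """Return the set-aside outcome implied by the two criteria, applying
    the statute's order of priority (SDVOSBs first, then VOSBs)."""
    if not research.fair_and_reasonable_expected:
        return "no veteran set-aside: second criterion not met"
    if research.expected_sdvosb_offerors >= 2:
        return "set aside for SDVOSBs"
    if research.expected_sdvosb_offerors + research.expected_other_vosb_offerors >= 2:
        return "set aside for VOSBs"
    return "no veteran set-aside: fewer than two expected offerors"


if __name__ == "__main__":
    print(va_rule_of_two(MarketResearch(3, 0, True)))   # set aside for SDVOSBs
    print(va_rule_of_two(MarketResearch(1, 2, True)))   # set aside for VOSBs
    print(va_rule_of_two(MarketResearch(4, 1, False)))  # no veteran set-aside
```

As later sections of this report describe, the second criterion in particular rests on judgment rather than a simple yes-or-no input, which is where much of the contracting officers' uncertainty arises.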
An example of a mandatory source used across the federal government is the AbilityOne procurement list. AbilityOne is a program to employ the blind and people with severe disabilities to provide supplies and services to federal customers. Federal agencies that need the specific products and services on AbilityOne’s procurement list are generally required to purchase them through the program. Contracting officers, who are authorized to commit the government to contracts, are ultimately responsible for awarding and administering contracts, including ensuring compliance with the VA Rule of Two. Within the VA contracting organizations we reviewed, the contracting officer typically designates a representative of the customer office—the organization that has requested the purchase of a good or service for its use—as the contracting officer’s representative. This individual assists with tasks that support the work of the contracting officer, such as market research, developing independent government cost estimates, and monitoring contractor performance. Verification of SD/VOSBs The 2006 statute also required VA to maintain a database of verified SD/VOSBs, and required that only firms appearing in the database may qualify for VA awards set aside for SD/VOSBs. VA’s Office of Small and Disadvantaged Business Utilization (OSDBU) maintains this database through its Center for Verification and Evaluation, which assesses whether small businesses meet the criteria for being veteran-owned and controlled by verifying self-certifications provided by the SD/VOSBs. A separate federal agency, SBA, is responsible for setting size standards (by revenue and employees) for what constitutes a small business; the threshold varies by industry. Certified SD/VOSBs—which VA has verified as owned and controlled by veterans—are listed in VA’s Vendor Information Pages (VIP). This is an online database accessible to VA’s contracting workforce and the public that includes basic information about each firm. Firms listed in this database select numerical codes based on the North American Industry Classification System to identify the types of goods and services they seek to provide to the VA; firms can do business under a variety of these codes. Subcontracting Limitations While SD/VOSBs that receive awards through set-asides may subcontract with firms that do not have small business status, the SD/VOSBs generally must perform a certain percentage of the work on a contract themselves. The SBA establishes regulations that govern these subcontracting limitations, which were most recently revised in May 2016. These regulations place limits on the percentage of the overall contract value that firms in particular socio-economic categories, including SD/VOSBs, may pay to subcontractors that do not belong to the same category. The purpose of the subcontracting limitations is to ensure that firms that receive awards on a set-aside basis perform a material portion of the contract themselves, rather than subcontracting a majority of the work to firms that would have been ineligible for the award. Under SBA’s revised regulations, subcontracted work performed by “similarly situated” entities—those in the same socio-economic category as the firm awarded the set-aside contract—does not count against the subcontracting limitation. Table 1 lists the maximum percentage a firm that is awarded a set-aside contract may subcontract to firms that are not in the same socio-economic category under SBA’s 2016 Subcontracting Limitations regulations. 
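As a purely hypothetical illustration of how such a limitation operates, the sketch below checks payments to subcontractors against a cap. The 50 percent figure mirrors the services example discussed later in this report, the names and dollar amounts are invented, and this is not SBA's or VA's method of review.

```python
"""Hypothetical illustration of a subcontracting-limitation check.

All names and dollar amounts are invented. The 50 percent cap mirrors the
services example discussed later in this report; actual caps vary by
category of work under SBA's 2016 regulations.
"""

def exceeds_subcontracting_limit(amount_paid_by_government: float,
                                 paid_to_non_similarly_situated: float,
                                 cap: float = 0.50) -> bool:
    """Return True if payments to subcontractors that are NOT in the same
    socio-economic category exceed the allowed share of what the government
    has paid under the contract. Payments to similarly situated firms (for
    example, another SDVOSB on an SDVOSB set-aside) are simply excluded from
    the amount checked, reflecting the 2016 revision described above."""
    return paid_to_non_similarly_situated > cap * amount_paid_by_government


# Hypothetical services contract: the government has paid $1,000,000.
# $400,000 went to a non-SDVOSB subcontractor; $300,000 went to another SDVOSB.
print(exceeds_subcontracting_limit(1_000_000, 400_000))  # False: within the cap
print(exceeds_subcontracting_limit(1_000_000, 600_000))  # True: $100,000 over the cap
```

A real review would, of course, rest on invoice and payment records of the kind the monitoring clause discussed later in this report is meant to make available.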
If a firm violates the subcontracting limitations, SBA's subcontracting limitation regulation would allow the government to impose a penalty of $500,000 or, if it is greater, the dollar amount spent on subcontracted work in excess of the permitted level. Contracting officers are responsible for ensuring compliance with the terms of the contract, and, as discussed in more detail below, the terms of a contract may include a requirement to comply with SBA's limitations on subcontracting regulation. In addition, we have reported that contracting officers were not clear about who was responsible for the monitoring and were uncertain about how to conduct it. The VA's Inspector General and SBA compliance reviews have reported similar findings. Obligations to and Number of SD/VOSBs Receiving Awards Were Higher Following the Supreme Court Decision Obligations and Awards to SD/VOSBs Increased Since the Supreme Court Decision VA's set-asides to SD/VOSBs increased following the 2016 Supreme Court decision, particularly among non-construction contract actions. The change in percentage of obligations made under set-aside contracts varied across VA contracting organizations, in part because of differences in the types of goods and services they bought. The number of SD/VOSBs certified by VA also increased, as did the number of those firms that received contract awards. VA obligations and awards for SD/VOSB set-asides increased in fiscal years 2016 and 2017, particularly fiscal year 2017, which was the first full fiscal year following the 2016 Supreme Court decision. VA obligations for SD/VOSB set-asides have increased as a percentage of total VA obligations over this period, while the percentage of obligations through other set-aside types—mostly non-veteran-owned small business set-asides—remained almost steady. VA obligated about $3.9 billion through SD/VOSB set-asides in fiscal year 2017, and VA's overall obligations also increased. Figure 2 depicts this information. The number of individual awards—new contracts and orders—made by VA through SD/VOSB set-asides has also increased as a percentage of total VA awards from fiscal years 2014 through 2017, particularly in fiscal year 2017 following the Supreme Court decision, as shown in figure 3. SD/VOSB Non-Construction Set-Asides Increased VA has consistently set aside a much greater percentage of construction contracts and orders for SD/VOSBs than for other types of goods and services, according to our analysis of VA electronic Contract Management System (eCMS) data from fiscal years 2014 through 2017. Construction accounted for about 51 percent of obligations under SD/VOSB set-asides, despite construction representing only about 15 percent of VA's overall contract obligations during this period. VA contracting officials we spoke with stated that the market for firms performing construction services generally has a greater percentage of capable SD/VOSBs than the market for firms providing non-construction goods and services. VA contracting officers working on construction contracts told us that they experienced little effect from the policy changes related to the 2016 Supreme Court decision because they had already been setting aside most construction contract actions for SD/VOSBs. Nonetheless, there was an increase in the percentage of total obligations for construction set-asides to SD/VOSBs in fiscal year 2017, while total obligations for construction contracts declined.
Figure 4 shows total and set-aside obligations for construction and non-construction contract actions in fiscal years 2014 through 2017. As depicted in figure 4, obligations for non-construction SD/VOSB set-asides increased in fiscal year 2017 both in total dollars and as a percentage of total obligations. Among obligations for non-construction SD/VOSB set-asides, the top five categories of goods and services by obligations across fiscal years 2014 through 2017 included: 1. Automatic data processing and telecommunications. 2. Information technology equipment, software, supplies, and support equipment. 3. Medical/dental equipment and supplies. 4. Professional services. 5. Housekeeping services. Obligations for SD/VOSB Set-Asides Varied across VA Contracting Organizations The percentage of obligations for SD/VOSB set-asides varied across VA contracting organizations. Among the contracting offices for VHA's 18 VISNs—which together accounted for about 47 percent of total obligations—the percentage for SD/VOSB set-asides ranged from approximately 17 percent to 40 percent in fiscal year 2017, as shown in figure 5. Total obligations and SD/VOSB set-aside obligations also varied across VA's three national contracting offices—the National Acquisition Center, Strategic Acquisition Center, and Technology Acquisition Center—in part because of differences in the types of goods and services they procure. The Technology Acquisition Center had a larger increase in SD/VOSB set-aside obligations than other contracting organizations in fiscal year 2017. This increase is consistent with our finding that IT-related categories were among the types of goods and services that had the highest increase in SD/VOSB obligations following the Supreme Court decision. The National Acquisition Center consistently had the lowest volume and percentage of obligations for SD/VOSB set-asides; officials noted that its areas of focus in pharmaceuticals and high-tech medical equipment are markets that have little participation from small businesses and SD/VOSBs. Figure 6 shows obligations on set-aside and non-set-aside contracts and orders in these three national contracting offices over fiscal years 2014 through 2017. The Number of Veteran-Owned Small Businesses Receiving Set-Aside Awards Has Increased Since the Supreme Court Decision Data from VA's OSDBU shows consistent increases over the last several years in the number of certified firms listed in its VIP database, with a noticeable spike following the Supreme Court decision. While the number of certified SD/VOSBs in VIP increased annually from fiscal years 2014 through 2017, the largest increase—from 8,925 to 11,926 firms—occurred in the last year of this time frame. The number of SD/VOSBs that received set-aside contracts or orders also increased over fiscal years 2015 through 2017. The largest year-to-year increase during this period was in the last year of this time frame, when the number increased from 1,174 to 1,663, as shown in figure 7. VA Updated Policy for the Veterans First Program and Provided Training to Contracting Officers to Address Confusion VA Updated Veterans First Program Policy In response to the Supreme Court's 2016 decision in the case of Kingdomware Technologies, Inc. v. United States, VA released a July 2016 policy for the Veterans First program, a revision to its 2007 policy.
To develop this revised policy, officials from VA’s Office of Procurement Policy said they created an integrated project team that consisted of representatives from VA procurement leadership, the Office of General Counsel, OSDBU, and others. VA’s Office of Procurement Policy also subsequently issued a “class deviation” to the VA Acquisition Regulation to implement changes VA viewed as necessary for consistency with the Supreme Court’s decision. VA’s Deputy Senior Procurement Executive issues class deviations when necessary to allow VA’s contracting organizations to deviate from the FAR or VA Acquisition Regulation. According to VA officials, these deviations effectively replace existing policy. The Office of Procurement Policy also issued guidance to provide clarifications on certain issues. Among the guidance VA issued was a decision tree that summarized how to apply the VA Rule of Two under the new 2016 Veterans First policy. Figure 8 presents our analysis of VA’s process. VA’s Office of Acquisition and Logistics had issued an Information Letter in June 2007 that established procedures for the Veterans First program, to comply with the 2006 federal statute that directed VA to prioritize SD/VOSBs in their contracting decisions. While the basic principle of the VA Rule of Two was the same across the 2007 and 2016 policies, the 2007 policy did not provide contracting officers as many details for applying the VA Rule of Two. In contrast, the 2016 policy provides more detail on how contracting officers must implement set-asides for SD/VOSBs across different types of procurements and various steps in the contracting process, including market research and use of existing contract vehicles—such as FSS and agency-wide indefinite delivery contracts. These changes had implications for how VA contracting officers make contracting decisions and document their work. Table 2 summarizes key differences in emphasis between the 2007 and 2016 policies and the work that contracting officers must perform. VA Provided Training on Updated Veterans First Program Policy VA has conducted training for its workforce on the 2016 Veterans First policy and subsequent updates and guidance. VA’s Office of Procurement Policy collaborated with the VA Acquisition Academy to provide several installments of online training to contracting officers. The academy offered initial training to contracting officers in July 2016, just after the policy was issued. Supplemental training was offered to supervisors in December 2016. In March 2018, the academy offered follow-up training for all contracting officers to provide further clarification on the Veterans First policy. These trainings focused on specific areas of frequent questions that the Office of Procurement Policy received from contracting officers, including market research, fair and reasonable price determinations, and limitations on subcontracting, among other things. These trainings were highly encouraged but not mandatory. Figure 9 details the training provided to contracting officers. VA Took Steps to Clarify Certain Aspects of Veterans First Policy to Help Address Contracting Officer Confusion and Concerns VA’s Office of Procurement Policy addressed some aspects of the 2016 Veterans First policy that had caused confusion and concerns among contracting officers by providing additional guidance and policy. 
Contracting officers we met with told us of their initial uncertainty about whether they could use existing contract vehicles and whether they must apply the VA Rule of Two before using these vehicles under the Veterans First policy. In response to such concerns, the Office of Procurement Policy gathered frequently asked questions and created guidance by posting answers on its website as another mechanism for providing clarification to contracting officers. VA also issued new policy and guidance to address contracting officers' concerns about the additional work and delays associated with cases where they set aside a solicitation for SD/VOSBs but did not receive any offers. Specifically, 28 of the contracting officers we interviewed individually and in roundtable discussions told us they sometimes had to cancel SD/VOSB solicitations for this reason and then reopen procurements without the SD/VOSB set-aside, resulting in delays in the contract award process. Other contracting officials we spoke with told us that since the implementation of the 2016 Veterans First policy, individual contract actions take longer to award on average due to the need to re-solicit in cases where they set aside solicitations for SD/VOSBs but do not receive acceptable offers, as well as due to expectations for increased documentation of the rationale for issuing a solicitation without an SD/VOSB set-aside restriction. For instance, a contracting officer at one of the VISN contracting offices we visited stated that a majority of his contract actions have involved multiple rounds of solicitations, which has increased his workload and procurement lead times. In response to such concerns, VA's Office of Procurement Policy provided informal guidance in early 2017, followed by policy in February 2018 that contracting officers could use "tiered" or "cascading" solicitations. Under VA's current policy, VA issues a solicitation that requests offers from multiple types of firms, or "tiers," including SD/VOSBs, other small business types, and, potentially, large businesses. The solicitation establishes an order of preference among the different tiers. The contracting officer separates the offers based on the firms' size or socioeconomic status, and then evaluates them in the order of preference established by the solicitation. If the award cannot be made at the first tier, the evaluation moves to the succeeding tier or tiers until an award can be made. Contracting Officers Face Several Challenges in Applying Aspects of the Veterans First Policy Applying the 2016 Veterans First policy has presented challenges for contracting officers. First, the VA system that contracting officers are required to use for the initial step of market research was not designed for this purpose, and contracting officers we interviewed expressed dissatisfaction with it. Second, contracting officers we spoke with expressed confusion about conducting market research and applying the VA Rule of Two criteria—determining whether there is a reasonable expectation that two or more SD/VOSBs will submit offers and that award can be made at a fair and reasonable price that offers best value to the government. Further, contracting officers also expressed confusion about how to determine whether the prices offered by SD/VOSBs in response to a set-aside solicitation are fair and reasonable.
Finally, continuing workload issues, real and perceived pressure to set aside contracts, and training not reaching all VA contracting officers are other factors that continue to contribute to the challenges. Contracting Officers Cited Barriers in Using VA's Vendor Information Pages System to Conduct Market Research VA's 2016 Veterans First policy requires contracting officers to use VIP as the first step in market research to identify SD/VOSBs capable of performing the work. While the use of VIP and documentation of its use had been required by the VA Acquisition Regulation since 2009, presenting it as the first step for all market research was a key change in how contracting officers use this system. Forty-one out of 60 contracting officers we interviewed individually and in roundtable discussions expressed dissatisfaction with VIP as the starting point for market research, citing difficulty in using it and its limited usefulness for market research. Specifically, several of these contracting officers stated that while VIP can be used to determine whether firms are certified as SD/VOSBs, it does not contain much information to help them determine whether these SD/VOSBs will be capable of performing the contract. They also stated, and OSDBU officials confirmed, that each SD/VOSB self-selects the codes that indicate the types of goods and services it can provide, and many list a large number. As a result, a search can return hundreds of results. Twenty-six contracting officers we interviewed—either individually or in roundtable discussions—stated that they have had instances where they issued an SD/VOSB set-aside solicitation based on a VIP search returning a high number of SD/VOSB contractors that provide the desired goods or services, but no SD/VOSBs submitted offers. Many of these contracting officers stated that, because they feel they cannot rely on the VIP results, they have taken subsequent steps such as using public "sources sought" notices to gauge interest from SD/VOSBs. While this step requires additional time, they said they found it to be a better source of information for making a VA Rule of Two decision. VA OSDBU officials stated that they would like to provide contracting officers with enhanced utility for conducting and documenting market research. They acknowledged that VIP is not designed to be used as a market research tool and that the challenges contracting officers noted were not surprising. The director of OSDBU stated that VA is planning to make some improvements to its VIP database to provide better information on SD/VOSB capability, but, according to these officials, these improvements are not yet available for use. The 2016 Veterans First policy requires contracting officers to document their VIP searches in the contract file, but this requirement is being implemented inconsistently. Specifically, 29 of the 35 contract files we reviewed did not contain such documentation. The cognizant contracting officers for most of these contracts told us they conducted the VIP searches; some stated they forgot to print and attach the results to the contract file, while others stated they had difficulty printing the results. According to VA's Veterans First policy, documenting the results of the VIP search is required to establish the contracting officer's basis for the VA Rule of Two decision, regardless of whether the contract is set aside or not.
Documenting this information in the case files, as required, provides VA with assurance that contracting officers have performed this search to support their overall market research efforts. Contracting Officers Face Challenges in Determining Whether to Set Aside for SD/VOSBs Under VA Rule of Two There are a large number of certified SD/VOSBs offering various goods and services—about 12,000 as of fiscal year 2017, according to VIP data provided by the OSDBU. A number of contracting officers we met with stated that this can result in VIP searches that return a lengthy list of SD/VOSBs. As a result, the decision of whether to set aside a solicitation is often based on the second criterion of the VA Rule of Two—whether there is a reasonable expectation that the award can be made at a price that is fair and reasonable and offers the best value to VA. To meet this criterion, the contracting officer combines research and professional judgment to make a decision whether to set aside or not, according to VA officials. While these VA Rule of Two criteria have not changed since 2007, contracting officers told us that their perception of the rule’s application has changed following the Supreme Court decision and VA’s 2016 Veterans First policy. Several contracting officers we met with stated that sometimes, when they identified that there were two or more SD/VOSBs that they expected to submit offers, they set aside a solicitation without providing full consideration of this second criterion. These contracting officers told us it is difficult in some cases to make a prospective determination that they can reasonably expect to be able to make an award at a fair and reasonable price without any actual offers in-hand. Contracting officers told us that prior to the Supreme Court decision their understanding was that they had the option to set aside contract actions for SD/VOSBs when they expected that the price would be fair and reasonable. They stated that after the decision, management relayed an expectation that contracting officers must set aside contract actions to SD/VOSBs unless they can prove that they cannot reasonably expect to make an award at a fair and reasonable price. Contracting officers also told us of instances where they identified multiple SD/VOSBs likely to submit proposals, but, based on their market research, it was unlikely that an award could be made at a fair and reasonable price that offered best value to VA. Many of these contracting officers stated that, despite those findings, they focused only on the number of SD/VOSBs, in part because they felt pressure to do so from local or headquarters’ management, OSDBU, or feared protests from SD/VOSBs, which would delay the award. In two specific areas of contracting we found examples of differing approaches to addressing the challenges faced by contracting officers when applying the VA Rule of Two criteria. Prior to the Supreme Court decision, there was little use of SD/VOSB set-asides in real property leasing or for high-tech medical equipment, according to officials from contracting offices responsible for these procurements. After the decision, there was uncertainty about whether and how to apply the Veterans First policy to these areas of contracting. 
As illustrated in the examples below, real property officials continue to face challenges applying the VA Rule of Two to leasing, whereas high-tech medical equipment contracting officials addressed this challenge by preparing a business case and used it to apply the VA Rule of Two consistently across their contracts: Officials in VA’s headquarters Construction and Facilities Management office—responsible for planning, designing, and constructing VA facilities—told us that prior to the Supreme Court decision they did not apply the VA Rule of Two to its real property leases. These officials stated that they have found the Rule of Two to be difficult to apply. According to the officials, VHA facilities have requirements for specific size, space, and location, and there are few SD/VOSBs in this industry, so it is rare that an SD/VOSB can meet these requirements. These officials further told us that, since the Supreme Court decision, they have often set aside lease solicitations for SD/VOSBs as long as there were two firms available despite uncertainty that these firms could compete for the work at a fair and reasonable price at best value to the VA. According to these VA officials, based on guidance they received from OGC and others, they felt compelled to conduct the procurements as SD/VOSB set-asides even when they were unsure that the second criterion of the VA Rule of Two would be met. These officials stated they are often unable to make awards to those firms—either because their proposals were not acceptable, or the SD/VOSBs did not submit proposals at all. They expressed concern that the Veterans First program is being applied to leasing when, from their perspective, it is impractical to do so. They stated that these challenges in applying VA’s Rule of Two criteria have added an average of 3 to 6 months to the process of awarding a new lease, resulting in delays in developing new facilities. Similarly, officials responsible for awarding leases at one VISN contracting office we visited told us they set aside a solicitation to an SDVOSB even though only one SDVOSB responded to a sources sought notice. This action was taken, according to the contracting officials, because they were concerned that their decision would be challenged by OSDBU if they did not set it aside. They stated they had been without a broker—a firm that helps to negotiate leases—for more than a year due to challenges in applying the VA Rule of Two, making it difficult for them to move forward with any new leases. In both cases, VA officials stated that they decided to solicit on an SD/VOSB set-aside basis even though they lacked confidence that there was a reasonable expectation that two or more SD/VOSBs would submit offers and that award could be made at a fair and reasonable price that offered the best value to the government. Also, in both cases, VA had to reissue solicitations without the SD/VOSB set-aside restriction, which lengthened the time that VA procurement staff were required to spend on the acquisition and delayed the fulfillment of VA’s leasing requirements. In contrast, another VA contracting organization determined that SD/VOSB set-asides were not feasible because there was no reasonable expectation that two or more SD/VOSBs would submit offers and that award could be made at a fair and reasonable price. The National Acquisition Center’s program to procure high-tech medical equipment—such as magnetic resonance imaging and X-ray machines—historically had little participation from SD/VOSBs. 
Following the release of the 2016 Veterans First policy, contracting officials responsible for the program halted all non-emergency purchases for over a year while they conducted an analysis of how to apply the VA Rule of Two to purchases of high-tech medical equipment. These officials analyzed the marketplace and concluded that no SD/VOSBs manufacture such equipment, and that purchasing this equipment from SD/VOSB resellers would greatly increase costs and not present the best value to VA. The results of this analysis were summarized in an internal report that was used as documentation to support the contracting officers’ decision not to set-aside high-tech medical equipment purchases for SD/VOSBs. As a result, they continued to meet medical centers’ equipment needs through existing purchasing arrangements. The contracting officers told us they also periodically revisit their analysis to identify any opportunities to set aside specific solicitations for SD/VOSBs. Determining Whether the Price Offered by an SD/VOSB Is Fair and Reasonable Poses Challenges for Contracting Officers Contracting officers must determine whether the price proposed by an SD/VOSB is fair and reasonable and offers the best value to VA before awarding the contract. The 2016 Veterans First policy did not change this requirement, and contracting officers are generally required to make this determination for every contract award. However, we found that many of the contracting officers we interviewed were uncertain how to balance the Veterans First preference with the determination of fair and reasonable price when lower prices were available on the open market. Twelve of the 30 contracting officers we interviewed for selected contract actions stated that it is difficult to assess whether the SD/VOSB’s offered price is fair and reasonable, and 8 stated that, in some cases, they lacked confidence in their determinations that prices were fair and reasonable. In many of these cases, contracting officers told us that they determined that a higher price was fair and reasonable in order to effectuate the Veterans First preference. For instance, a branch chief we interviewed provided five examples of purchases under $16,000 where, in recent, separate procurements, non-SD/VOSB small businesses had proposed prices for the same or substantially similar items that were about $400 to $3,000 less than those proposed by SD/VOSBs. These procurements were conducted as SD/VOSB set-asides, and awards were made to SD/VOSBs on the basis of the Veterans First preference. The FAR establishes that adequate price competition normally establishes a fair and reasonable price, and it provides methods for determining fair and reasonable pricing, such as comparing proposed prices to each other, previous prices paid for the same or similar items, published prices, or the independent government cost estimate. However, a few of these contracting officers told us that some of these comparison methods may not be reliable for offers received under SD/VOSB set-asides. They stated that they lacked the confidence that using these methods consistently provided robust and well-documented support for their decision to not award to an SD/VOSB. For example, they stated that in some instances, the independent government cost estimate is outdated, and the customer responsible for preparing it conducts limited market research. 
This issue is not unique to VA; in 2017, we reported on shortcomings in the usefulness and documentation of independent government cost estimates across several agencies. VA Procurement Policy officials emphasized that contracting officers must apply professional judgment and that no across-the-board standard exists—a higher price compared to non-SD/VOSBs might be appropriately found reasonable in some cases but not others, depending on many variables, including the degree of difference between the prices and the size and complexity of the requirement. However, in response to requests for clarification from contracting officers, VA officials provided conflicting informal guidance. For example, a contracting officer stated that, during a webinar training on the implementation of the Veterans First policy in late 2016, VHA’s Acting Chief Procurement and Logistics Officer said that, as a general rule, he would be hesitant to pay 5 percent more than any recent prices identified in contracting officers’ market research for the same or similar supplies or services from non-SD/VOSBs, a view he repeated when we interviewed him in spring of 2018. In contrast, the Executive Director for the Office of Acquisition and Logistics said he would not advocate paying any amount above recent prices identified in contracting officers’ market research for the same or similar goods or services from non-SD/VOSBs for any requirement. He stated that the Veterans First statute and policy did not authorize higher prices for goods and services from SD/VOSBs. According to a contracting officer we met with, he shared this view in a training session at a VA conference in March 2017, as well as when meeting with us in spring of 2018. A consistent message from senior management would provide VA greater assurance that its contracting officers have confidence when making fair and reasonable price determinations in set-aside acquisitions. In one of VA’s national contracting offices, the Strategic Acquisition Center, the Director told us that contracting officers were confused about how to implement the Veterans First policy in their work, particularly in making VA Rule of Two decisions and fair and reasonable price determinations. In order to address confusion and provide guidance to contracting officers, the Director stated that he provided a series of case studies to contracting officers that demonstrated effective application of these aspects of the Veterans First policy. Separately, other senior VA procurement officials stated that contracting office managers have a responsibility to address confusion and serve as a source of information for their contracting workforce. Contracting Officers Faced Challenges in Implementing Veterans First Policy, in Part, Due to Training Shortfalls, Pressures, and Workload Issues The judgments that VA contracting officers are asked to make—in conducting market research, making VA Rule of Two decisions, and determining whether proposed prices are fair and reasonable—can in some cases be inherently complex, and there are additional challenges that VA has faced in implementing Veterans First. There are several factors that contribute to these challenges. Training Did Not Reach All Contracting Officers, and Did Not Fully Address the More Challenging Components of the Veterans First Policy While VA provided training concurrently with the issuance of its 2016 Veterans First policy, the training did not reach all staff. 
According to VA Acquisition Academy officials, 81 percent of all VA contracting officers completed the initial training on the 2016 Veterans First policy in the summer of 2016. We reviewed academy training records for the 60 contracting officers we interviewed, and these records show that 14 of them did not take the initial training in 2016. In addition, only 52 percent of VA contracting officers completed the follow-up training on the Veterans First policy in the spring of 2018. According to the academy, the feedback provided by those who attended these training sessions was favorable, with ratings of 4.59 out of 5 and 4.75 out of 5, respectively. In communicating about the training to contracting officers, VA sent an announcement to all contracting officers, describing the training as "strongly encouraged" but not mandatory. According to VA Acquisition Academy and Office of Acquisition and Logistics officials, this is because neither of these organizations has the authority to designate training as mandatory—only VA's Office of Human Resources and Administration has the ability to do so. GAO's Standards for Internal Control in the Federal Government states that management should design control activities to achieve objectives and respond to risks. In doing so, management should ensure that training is aimed at developing and retaining employee knowledge, skills, and abilities to meet changing organizational needs—such as those that occurred after the 2016 Supreme Court decision. Based on our review of the training, it does not fully address the more challenging aspects of implementing the Veterans First policy, such as making fair and reasonable price determinations when acquisitions have been set aside. Establishing more targeted training on the Veterans First policy and providing this training to all contracting officers would provide the VA with greater assurance that contracting officers have the knowledge and skills necessary to implement the more challenging components of this policy. Further, without establishing the importance of training on the Veterans First policy by assessing whether to make its attendance mandatory, management is not fully communicating its importance, and contracting officers may lack the tools needed to implement this policy. Contracting Officers Perceive Pressure to Apply Veterans First Preferences As previously stated, contracting officers told us they were not always confident in applying the Veterans First policy, in part because of pressure—real or perceived—from others. Contracting officers cited perceived negative scrutiny from leadership, OSDBU, Office of General Counsel reviewers, or potential protests from SD/VOSBs as reasons they were reluctant to decide against setting aside requirements for SD/VOSBs or to deem prices proposed by SD/VOSBs not fair and reasonable. Contracting officers explained that objections raised by any of these parties would add time to the procurement process, and a decision to cancel a set-aside because the prices were found not fair and reasonable would require yet more time to start the solicitation process again. Some contracting officers stated that they could not risk delays in awarding contracts by pursuing an approach other than setting aside for SD/VOSBs.
We noted that training slides from a 2016 conference for VA contracting officials included a statement that "contracting officers may not know if they have properly applied the VA Rule of Two standard until a court rules on the facts of a given case." VA's Acting Chief Acquisition Officer stated that he is aware of these perceived pressures and that some of them are long-standing. He stated that VA had an initial effort to communicate the Veterans First policy immediately after the 2016 Supreme Court decision, but he acknowledged that contracting officers' confusion remains, especially regarding fair and reasonable price determinations. VHA contracting officers also noted that because their customers are hospitals, there is an inherent need to avoid delays in the procurement process to prevent an adverse effect on patient care. The effect of these pressures was exacerbated by a concern we noted among contracting officers about whether their management would fully support a decision not to set aside a contract. VA Faces Continuing Workload Issues The struggles that contracting officers are facing in making VA Rule of Two and fair and reasonable price determinations, as discussed above, are exacerbated by continuing workload stresses they have faced for years. In September 2016, we reported that managing workload is a challenge for VA's contracting officers. For example, one medical center official stated that his local contracting office had at times turned away some purchase requests because it could not staff them. In November 2017, we also reported on contracting inefficiencies that affected contracting officers' ability to provide goods and services in a timely manner and at best value to medical centers. Results from a recent survey of VA staff also illustrate existing workload stress within VA contracting. Specifically, in the Office of Personnel Management's Federal Employee Viewpoint Survey, federal employees are asked if they believe their workload is reasonable; according to VA's analysis of this data in 2017, 54.2 percent of the contracting officers at VA who responded said their workload was not reasonable. VA Conducts Limited Oversight of Compliance with Subcontracting Limitations In many cases, clauses that require compliance with and enable monitoring of subcontracting limitations are not included in VA contracts and orders with SD/VOSBs. Contracting officers are generally aware of subcontracting limitations, but they told us they do not have sufficient time or knowledge to conduct oversight. VA conducts some audits of compliance through a separate program. While the scale of that effort has been limited, these audits have identified a number of violations. VA, however, has not shared information on subcontracting limitation compliance risks or practices that could improve monitoring efforts. Contract Clauses Are VA's Primary Preventive Monitoring Mechanism, but Many Contracts We Reviewed Lacked Them VA contracting officers are required to include two different clauses when issuing solicitations for SD/VOSB set-asides: One clause requires contractors to comply with SBA's subcontracting limitations regulation. Another enables the VA to obtain access to the SD/VOSB prime contractor's records to monitor compliance with subcontracting limitations.
SD/VOSB Set-Aside Clause Establishing Subcontracting Limitations Missing from Some Contract Actions Under the first clause, an SD/VOSB must comply with the SBA regulation that limits the percentage of the amount paid by the government under the contract that may be subcontracted to firms that are not in the same socio-economic category—that is, firms that are not also SD/VOSBs. This is known as the subcontracting limitations requirement. For example, under a services contract set aside for SD/VOSB contractors, an SD/VOSB prime contractor may only subcontract to non-SD/VOSBs a maximum of 50 percent of the amount paid by the government under the contract. The purpose of the subcontracting limitations requirement is to ensure that the SD/VOSBs that are awarded set-aside contracts do not subcontract the work beyond prescribed levels, and to ensure that the goal of Veterans First—to promote opportunities for veteran-owned small businesses—is not undermined. In July 2016, VA updated its standard SDVOSB and VOSB set-aside clauses to refer to SBA's revised subcontracting limitations regulation. For example, the SD/VOSB clause defines the criteria that firms contracting with VA must meet to be eligible for SD/VOSB set-asides and requires SD/VOSBs to agree to comply with SBA's subcontracting limitations regulation in the performance of set-aside contracts. VA's acquisition regulations require contracting officers to include the clause in all SD/VOSB set-aside contracts. We selected 35 VHA contracts and orders for review, 29 of which were set aside for SD/VOSBs, to determine whether they contained the July 2016 (current) version of the SD/VOSB set-aside clause. All of our selected contract actions occurred after the 2016 Veterans First policy was issued, and after VA adopted SBA's 2016 update of its subcontracting limitation regulation, which made the prior clause obsolete. We found that 11 of the 29 contract actions did not contain the current version of the clause—it was either missing entirely or an outdated version of the clause was used (see figure 10). The contracts and orders that contained the outdated version of the clause did not reference the significantly changed version of the SBA limitations on subcontracting regulation that is currently in effect, and therefore did not reference the version of the regulation that includes the penalty provision establishing that contractors that do not comply with subcontracting limitations may be subject to a $500,000 fine. Contracting officials told us the contracting officers likely forgot to include the clause or included an outdated version of the clause by mistake. Without including the mandatory clause in the contract actions as required, VA lacks assurance that SD/VOSBs are aware of subcontracting limitations. Monitoring Clause Missing from Most Contract Actions For the second clause, establishing VA's right to access information from SD/VOSBs to monitor their compliance with the subcontracting limitations requirement, we found that 22 of the 29 contracts and orders we reviewed did not contain this clause. VA contracting officials told us the clause was not included in the contract in some cases because the contracting officers were unaware of the requirement, which was established in a June 2011 Information Letter policy memorandum. The policy memorandum directed contracting officers to include the clause in solicitations, which the Division Chief at one VISN contracting office identified as the reason it was not included in the contracts.
However, the clause would not be in effect if not contained in the contract, and a VA procurement policy official confirmed that the intent was for this clause to be included in both solicitations and contracts. Without this clause, VA could face challenges in attempting to obtain information needed from the SD/VOSBs to determine their compliance with subcontracting limitations. Omission of this clause also poses a risk to VA by hindering its ability to detect violations, enforce the subcontracting limitations requirement, and ensure that the goal of Veterans First—to promote opportunities for veteran-owned small businesses—is not undermined. In June 2018, the VA rescinded the 2011 policy memorandum and issued a class deviation to the VA Acquisition Regulation. The class deviation revised the second clause—limitations on subcontracting monitoring and compliance—and required the clause to be included in solicitations and contracts. This is an important step to communicate that this clause is required in the contract. However, as noted above, the first clause—VA’s notice of set-aside clause that requires compliance with SBA’s limitations on subcontracting regulation—is already required by a previous class deviation and was missing from 8 of 29 contracts we reviewed. Given this, it is uncertain whether this VA Acquisition Regulation update alone will ensure that the monitoring clause is included in all contracts. VA Contracting Officers Conduct Limited Oversight to Assess Contractor Compliance with Subcontracting Limitations VA contracting officers conduct little oversight to ensure that SD/VOSBs comply with SBA’s subcontracting limitations regulations. According to the FAR, contracting officers are responsible for ensuring that the contractor complies with the terms of the contract, and, as discussed above, the terms of the contract may include subcontracting limitations. For the 29 SD/VOSB set-aside contracts and orders we reviewed, we found little evidence that contracting officers were monitoring compliance with SBA’s regulatory limitations on subcontracting requirements, which includes ensuring the VA clause that requires compliance with the subcontracting limitation is in the contract. Contracting officers we spoke with were aware of these responsibilities but cited several barriers to executing them, including high workload, a focus on awarding over administering contracts, and uncertainty of what steps to take. Senior VA procurement officials stated that monitoring the subcontracting limitations requirement has not been a high priority and that contracting officers have competing priorities and, thus, limited time available to conduct this monitoring. The VA’s limited oversight of subcontracting limitations has been a long- standing problem. In September 2016, SBA conducted a surveillance review of one of VA’s VISN contracting offices. In its review of 29 contract files, SBA found no evidence that the subcontracting limitations requirement was being monitored by contracting officers and recommended that VA take measures to ensure it conducts active monitoring. In July 2017, SBA followed up to determine what steps the VISN contracting office had taken to implement its recommendation to improve monitoring of the subcontracting limitations requirement. The SBA concluded that the VISN contracting office needed to take additional steps in order to close the recommendation. 
Some of the VA contracting officers we met with told us they rely on contracting officers' representatives (CORs) to monitor compliance with the subcontracting limitations and identify possible violations. CORs are generally at the location where the goods are being delivered or the services are performed to observe whether the SD/VOSB contractor is accomplishing the required work as specified in the contract. VA procurement officials told us that monitoring subcontracting limitations is the responsibility of contracting officers. VA's Program to Assist Contracting Officers in Reviewing Subcontracting Limitations Is Limited in Scope In June 2011, VA's Office of Acquisition and Logistics established the Subcontracting Compliance Review Program (SCRP) within the Risk Management and Compliance Service (RMCS) to assist contracting officers in conducting subcontracting limitations reviews. RMCS conducts its own reviews of compliance with subcontracting limitations, but the scale is limited. Specifically, RMCS has conducted reviews of 95 SD/VOSB and other set-aside contracts out of the thousands awarded since 2011, and the office is in the process of reviewing another 24 contract actions. The office selects a sample of contract actions awarded each fiscal year to review and may review other contract actions if contracting officers or other VA officials contact it with referrals of instances that warrant a review. RMCS officials told us they have received very few referrals to date. Many of the contracting officers we met with were unaware that SCRP existed or that they could refer potential subcontracting limitations violations to it for review. However, VA's manual describing the SCRP is housed on a portal accessible to contracting officers, and, in March 2018, VA's Acquisition Academy training included information on the SCRP. RMCS's subcontracting limitations reviews have identified a number of instances of non-compliance. Specifically, since 2011, the office has identified 25 instances of non-compliance with subcontracting limitations among the 95 reviews it has completed, or 26 percent of selected contract actions. For example, one review found that a VOSB contractor responsible for providing project management services paid more than the allowable percentage (50 percent) of the contract's value to non-VOSB firms. In another example, the review found an SDVOSB contractor responsible for providing courier services paid more than 88 percent of the contract's value to non-SDVOSB firms at about the halfway point in the contract's period of performance. If VA's mechanisms for monitoring and enforcing subcontracting limitations are not robust, the department exposes itself to increased risk of not detecting noncompliance. RMCS's SCRP manual states that the evidence RMCS collects is to be provided to the contracting officer so that he or she can make a determination about whether the contractor is in compliance. The manual also outlines the various remedies available to contracting officers if an SD/VOSB is suspected of being or is found to be in noncompliance with the subcontracting limitations. An RMCS official told us that remedial actions taken with respect to noncompliant contractors are determined on a case-by-case basis and that contractors are generally provided an opportunity to correct the deficiency if the contractor submits a viable plan.
In several of the cases where the RMCS office identified non-compliance, contracting officers requested that SD/VOSBs develop a plan for becoming compliant with the subcontracting limitations requirement. For example, one plan specified additional oversight steps that the VOSB would take to ensure compliance with the subcontracting limitations, such as having the project manager provide a compliance plan to senior management for any instance of subcontracting with a non-VOSB that was anticipated to exceed a significant percentage of the total value of the contract award. RMCS officials said they had anticipated receiving additional resources to conduct the planned reviews when the SCRP was initially created but have yet to receive them. Officials stated they currently rely on three support contractor staff to conduct the reviews but are exploring the possibility of hiring additional staff to increase the number of reviews they can complete each year. In addition, the Acting Director told us that the office has created a database that will ultimately allow contracting officers and CORs to identify contracts with which they have subcontracting limitations concerns. They have only implemented some of the database's capabilities due to resource limitations. RMCS's Acting Director stated she would like to grow the office and establish mechanisms to better facilitate communication between contracting officers and RMCS. She noted, however, that the lack of a permanent Director for RMCS and competing funding priorities have made it difficult to establish these mechanisms. The Acting Director said she is the office's sixth Acting Director in the past two and a half years, and each person in this role has had other duties in addition to the position. VA Has Not Communicated Subcontracting Limitation Risks or Useful Monitoring Practices to Stakeholders Because VA has few mechanisms for monitoring subcontracting limitations and RMCS reviews are limited in scope, VA may not be able to detect fraud risks in the Veterans First program. Proactive fraud risk management is meant to facilitate a program's mission and strategic goals by ensuring that taxpayer dollars and government services serve their intended purposes. To help agencies better address fraud, GAO's 2015 report, A Framework for Managing Fraud Risks in Federal Programs (Fraud Risk Framework), includes a comprehensive set of leading practices that serve as a guide when developing or enhancing efforts to combat fraud in a strategic, risk-based manner. These practices include: Identifying and assessing risks. Collaborating and communicating with stakeholders—in this case, contracting officials—to share information on fraud risks. Applying lessons learned to improve the design and implementation of control mechanisms and communicating those changes to stakeholders. The Fraud Reduction and Data Analytics Act of 2015, and Office of Management and Budget guidance implementing its provisions, affirm that agencies should adhere to the leading practices identified in the Fraud Risk Framework. In our review of VA's mechanisms for monitoring subcontracting limitations, we found that VA's Office of Acquisition and Logistics as well as RMCS perform some identification and assessment of risks, but that this assessment is not comprehensive. In addition, VA is not collaborating with and communicating these risks to stakeholders, as called for in GAO's Fraud Risk Framework.
By conducting a comprehensive assessment of fraud risk, VA would be better positioned to detect potential fraud related to subcontracting limitations for the Veterans First program. VA Has Taken Some Steps to Identify and Assess Risks, but Has Not Communicated These Risks to Stakeholders RMCS officials told us they were unable to comprehensively identify and assess the risks related to subcontracting limitations due to limited staff and resources. Nonetheless, they told us that they have identified certain situations—based on the reviews they have conducted to date and discussions with contracting officers—that may pose a higher risk of noncompliance with subcontracting limitations. These situations include: contracts for certain types of services, such as grounds maintenance, van transportation, and specialty trade construction; situations where an SD/VOSB has multiple contracts across several VISNs for the same services; and situations where an SD/VOSB does not have a business presence in the same geographical area where the services are being performed. They said these were higher-risk situations because the SD/VOSBs have had difficulty completing the required work on their own, or because the lack of a local business presence increases the likelihood that the SD/VOSB might rely on a local, non-SD/VOSB contractor to do more than the permissible portion of the work. According to RMCS officials, they have not shared information on subcontracting limitation risks with stakeholders, such as contracting officers and their management, but they agreed this could be a helpful step. If RMCS shared information on higher-risk situations, contracting officers would have a better understanding of when to refer cases to it. Our prior work on subcontracting limitations, in the context of SBA's 8(a) program, also identified situations presenting an increased risk that subcontracting limitations may be exceeded. These situations included instances when the 8(a) prime contractor proposed subcontractors that were the agency's incumbent contractor or that had more experience in meeting the agency's current requirement than the small business. They also included situations where the subcontractor, rather than the prime contractor, submitted documents to or corresponded directly with government officials. These situations highlight the importance of monitoring the extent of subcontracting. SBA has also identified risk factors to consider prior to contract award, such as the incumbent contractor working as a subcontractor or the prime contractor lacking relevant experience and needing to rely upon its more experienced subcontractor to win the contract. In our review, contracting officers cited several contracts where subcontracting risk factors were present. In one case we reviewed, the contracting officer reported that a large business was the prime contractor on a previous water treatment services contract. After the 2016 Supreme Court decision, the contract was re-competed on an SDVOSB set-aside basis; an SDVOSB won the award and the incumbent contractor served as a subcontractor. According to the contracting officer, he suspected, based on the SDVOSB's limited capacity, that the subcontractor was performing more than 50 percent of the work, but he said he did not have the authority to request information on payments from the SDVOSB prime contractor to the subcontractor.
We found that neither the set-aside clause that limits subcontracting nor the monitoring clause was included in this contract, limiting the contracting officer's ability to ensure the SDVOSB was meeting the appropriate subcontracting limitation requirement. The COR told us that the subcontractor performed most of the water treatment services work—the primary requirement under the contract—while the SDVOSB prime contractor sent invoices and conducted oversight. VA Has Identified Some Useful Monitoring Practices, but Has Not Communicated Them to Stakeholders RMCS officials told us they have identified some helpful practices that could improve compliance with subcontracting limitations. They said they have encouraged some contracting officers to require SD/VOSBs to explain in their proposals how they plan to comply with the subcontracting limitations requirement, and said that some contracting officers have also used a worksheet to collect data on the work the SD/VOSB planned to complete itself versus subcontract out. Other VA contracting officials we met with also told us about additional practices they had implemented to facilitate monitoring of compliance with subcontracting limitations. These practices included the following: requiring SD/VOSB contractors to submit quarterly reports during contract performance that indicate the percentage of the work completed by the SD/VOSB contractor and any subcontractors; holding pre-award discussions between the contracting officer and the SD/VOSB about the need to comply with subcontracting limitations; and convening post-award conferences between the contracting officer and the COR to discuss whether the SD/VOSB is in compliance. Standards for Internal Control in the Federal Government state that management should internally communicate the necessary quality information to achieve the entity's objectives. Although RMCS provides information to contracting officers and their management through the SCRP manual and related training, RMCS officials told us that they have not included these monitoring practices among the information they have shared. Having this information could improve contracting officers' ability to ensure compliance with subcontracting limitations. Conclusions The basic premise of the Veterans First Contracting Program has not changed in the 12 years since its implementation began. However, the 2016 Supreme Court decision prompted VA to refocus and refine its policy, and implementing the refined policy and the associated VA Rule of Two across the entire enterprise of VA contracting has been challenging due to inherent complexities, perceived and real pressures to award contracts to SD/VOSBs, and inconsistent and sometimes conflicting management guidance. This environment created mixed messages and lessened some contracting officers' confidence about how to appropriately apply the VA Rule of Two criteria, particularly in making a determination that there is a reasonable expectation that award could be made at fair and reasonable prices. Most of the contracting officers for the selected contracts we reviewed expressed dissatisfaction with VIP as the starting point for market research, citing difficulty in using it. While documentation of the VIP search results is required by the Veterans First policy, over three-quarters of the contract files we reviewed lacked such documentation.
Such documentation, combined with support for overall market research efforts, provides VA with assurance that contracting officers have performed this search as part of the basis for their Rule of Two decision. These contracting officers also had some difficulty applying the VA Rule of Two, particularly in the more challenging component, determining whether they can reasonably expect prices offered by SD/VOSBs to be fair and reasonable—issues that could be mitigated by establishing more targeted training that would provide the VA with greater assurance that its contracting officers have the knowledge and skills necessary to implement this policy. Further, assessing whether training on the Veterans First policy should be designated as mandatory would provide VA with information necessary to determine if such training would be beneficial for all contracting officers. Monitoring of subcontracting limitations is an important oversight tool to ensure effective implementation of VA’s Veterans First program. Without ensuring that required contract clauses regarding subcontracting limitations are included in all SD/VOSB set-aside contracts, VA lacks assurance that SD/VOSBs are aware of subcontracting limitations. Additionally, VA’s Subcontracting Compliance Review Program has found subcontracting limitation violations and has identified some risk factors and practices for monitoring compliance with subcontracting limitations. Conducting a comprehensive assessment of fraud risk, using GAO’s Fraud Risk Framework, would help better position VA to detect potential fraud related to subcontracting limitations for the Veterans First program. Further, VA has not communicated identified risk factors and monitoring practices to stakeholders as called for in GAO’s Framework. Recommendations for Executive Action We are making the following six recommendations to VA. The Secretary of Veterans Affairs should ensure that VA’s Director of the Office of Acquisition and Logistics, in consultation with OSDBU, takes measures to ensure that VA contracting staff adhere to the requirements for documenting the required Vendor Information Pages searches in contract files. (Recommendation 1) The Secretary of Veterans Affairs should ensure that the Director of VA’s Office of Acquisition and Logistics directs the VA Acquisition Academy to provide more targeted training for the more challenging components of implementing the Veterans First policy, such as making fair and reasonable price determinations. (Recommendation 2) The Secretary of Veterans Affairs should, in consultation with VA’s Office of Human Resources and Administration, and the Director of VA’s Office of Acquisition and Logistics, assess whether training on the Veterans First policy should be designated as mandatory and take appropriate action based on the assessment results. (Recommendation 3) The Secretary of Veterans Affairs should ensure that the Director of the Office of Acquisition and Logistics establishes a mechanism to ensure that mandatory clauses relating to subcontracting limitations are consistently incorporated in all contracts that are set aside for SD/VOSBs. (Recommendation 4) The Secretary of Veterans Affairs should ensure that the Director of the Office of Acquisition and Logistics conducts a fraud risk assessment for the Veterans First program. 
(Recommendation 5) The Secretary of Veterans Affairs should ensure that the Director of the Office of Acquisition and Logistics directs the Risk Management and Compliance Service to share, through guidance, training, or other methods, subcontracting limitation risks and monitoring practices with contracting officers and their management. (Recommendation 6) Agency Comments We provided a draft of this report to the Department of Veterans Affairs and the Small Business Administration for review and comment. VA provided written comments on the draft report. In its comments, which are reprinted in appendix II, VA concurred with all of our 6 recommendations. SBA provided technical comments, which were incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Veterans Affairs, the Administrator of the Small Business Administration, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by email at oakleys@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology You requested that we examine changes to how the Department of Veterans Affairs (VA) implements the Veterans First program as a result of the Supreme Court’s decision. In June 2016, the Supreme Court’s decision in Kingdomware Technologies, Inc. v. United States clarified conflicting interpretations of the requirement for the preference, concluding that VA must restrict competition to veteran-owned small businesses if the contracting officer reasonably expects that at least two such businesses will submit offers and the award can be made at a fair and reasonable price that offers best value to the United States. This report assesses: (1) how VA procurement obligations to veteran-owned small businesses changed in the period from fiscal years 2014 through 2017; (2) what actions VA has taken to update Veterans First policies and regulations and provide training following the Supreme Court’s decision; (3) what challenges, if any, VA is encountering in applying Veterans First policies; and (4) the extent to which VA has mechanisms in place to monitor compliance with subcontracting limitations by veteran-owned small businesses, and the effectiveness of such mechanisms. To assess how VA procurement obligations to veteran-owned small businesses changed in the period from fiscal years 2014 through 2017, we obtained data from VA’s Electronic Contract Management System (eCMS) on all contracts from fiscal years 2014 through 2017, chosen to provide data before and after the Supreme Court decision. We chose to exclude orders reported in Express Reports—summaries of multiple orders placed on existing contracts—from our analysis. 
These actions were consistently reported in eCMS only starting in 2017; because they represent billions of dollars of obligations with relatively few set-asides to service-disabled veteran-owned small businesses and veteran-owned small businesses (SD/VOSB), including them would have distorted year-to-year comparisons of percentages set aside for SD/VOSBs. We analyzed these eCMS data to determine changes in the use of set-asides for SD/VOSBs relative to overall VA contracting obligations during this period. We used this analysis to determine how VA's set-aside contract obligations to SD/VOSBs in the period after the Kingdomware decision compared to those in the period before the decision. We adjusted obligations for inflation to fiscal year 2017 dollars using the fiscal year gross domestic product price index. We also analyzed the data to identify patterns of set-asides as a percentage of obligations among different contracting activities and across VA contracting organizations. To determine the extent to which new businesses are obtaining SD/VOSB certification, we obtained Vendor Information Pages (VIP) data from VA's Office of Small and Disadvantaged Business Utilization (OSDBU) for fiscal years 2014 through 2017. We used these data to identify the change in the total number of certified SD/VOSBs in VIP during this period. We also analyzed VA's eCMS data to determine the number of unique, individual SD/VOSBs that received awards for set-asides during the same period. With these data from VIP and eCMS, we compared the number of certified SD/VOSBs to the number of businesses awarded set-asides for each year during this period. To assess the reliability of these data, we also reviewed available eCMS documentation and interviewed officials responsible for maintaining eCMS data to gather information on the processes, accuracy, and completeness of these data. We determined that these eCMS and VIP data were sufficiently reliable for the purpose of describing changes in VA's use of SD/VOSB set-asides over this period. To assess what actions VA has taken to update Veterans First policies and regulations and provide training following the Supreme Court's decision, we analyzed policies, regulations, guidance, and training materials related to the program, and compared these to what VA had in place prior to the decision. We obtained and analyzed the program's initial Information Letter, policy memorandum, and revisions to VA's Acquisition Regulations, which detailed the Department's intention to comply with federal statute. We also obtained and reviewed additional program documentation, including briefings, presentations, and training provided to contracting officers. We met with leadership at VA's national contracting organizations to discuss implementation of the Veterans First policy within their organizations, and interviewed senior officials in VA's Office of Acquisition and Logistics—including the Office of Procurement Policy and VA Acquisition Academy—OSDBU, the Office of General Counsel, and the Veterans Health Administration's (VHA) Procurement and Logistics Office to discuss policies, guidance, and training regarding the Veterans First program. To assess what challenges, if any, VA is encountering in applying the Veterans First policy, we gathered documentation from six contracting organizations across the VA. We conducted reviews of eCMS data to determine VA's use of set-asides and the increase in the use of set-asides for all VA contracting organizations.
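To illustrate the obligations analysis described above—converting nominal obligations to fiscal year 2017 dollars with a price index and computing the share obligated through SD/VOSB set-asides—the following is a minimal sketch. The index values and obligation amounts are placeholders, not the actual gross domestic product price index or eCMS figures.

```python
# Illustrative sketch of the inflation adjustment and set-aside share
# calculation; index values and obligation amounts are placeholders,
# not actual GDP price index or eCMS data.

gdp_price_index = {2014: 96.1, 2015: 97.2, 2016: 98.2, 2017: 100.0}  # assumed values

obligations = {  # nominal dollars, assumed values
    2014: {"total": 20.0e9, "sdvosb_set_aside": 3.5e9},
    2017: {"total": 24.0e9, "sdvosb_set_aside": 5.5e9},
}

for year, amounts in sorted(obligations.items()):
    deflator = gdp_price_index[2017] / gdp_price_index[year]  # convert to FY2017 dollars
    total_fy17 = amounts["total"] * deflator
    set_aside_fy17 = amounts["sdvosb_set_aside"] * deflator
    share = set_aside_fy17 / total_fy17
    print(f"FY{year}: ${total_fy17 / 1e9:.1f} billion total (FY2017 dollars), "
          f"{share:.0%} obligated through SD/VOSB set-asides")
```

Because the numerator and denominator are deflated by the same factor, the set-aside percentage is unchanged by the adjustment; the adjustment matters when comparing dollar levels of obligations across years.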
Based on our analysis of these data, we determined that VHA had the greatest use of set-asides in fiscal year 2017. As such, we conducted site visits at a non-generalizable selection of three VHA regional offices, known as Veterans Integrated Service Networks (VISN). The three VISNs we selected are as follows:
VISN 8: Network Contracting Office 8 in St. Petersburg, Florida; the Orlando, Florida VA Medical Center; and the Tampa, Florida VA Medical Center.
VISN 12: Network Contracting Office 12 in Westchester, Illinois; the Hines, Illinois VA Medical Center; and the Milwaukee, Wisconsin VA Medical Center.
VISN 16: Network Contracting Office 16 in Ridgeland, Mississippi; the Jackson, Mississippi VA Medical Center; and the New Orleans, Louisiana VA Medical Center.
We focused our site visits on VHA, because it is the largest contracting organization in the Department. We selected these VISNs primarily based on changes in total contract obligations to SDVOSBs and VOSBs from fiscal year 2015 to fiscal year 2017—the first full fiscal years before and after the Supreme Court decision—selecting two with among the largest percentage changes, and one with the lowest. The first site visit to VISN 8 was chosen because it had a high change in the percent of obligations on SD/VOSB set-asides from fiscal years 2015 through 2017 and high total obligations in fiscal year 2017. After completing the first site visit, we decided to exclude obligations for construction-related contracts, as our analysis of VA's eCMS data found that construction had not been affected much by the 2016 Veterans First policy because the majority of construction contracts have always been—and continue to be—awarded to SD/VOSBs. The second site visit to VISN 12 was chosen because it had a low change in the percent of non-construction obligations on SD/VOSB set-asides from fiscal years 2015 through 2017 with high total non-construction obligations in fiscal year 2017. The final site visit to VISN 16 was chosen because it had a high change in the percent of non-construction obligations on SD/VOSB set-asides from fiscal years 2015 to 2017 with high total non-construction obligations in fiscal year 2017. At each selected VISN, we interviewed the VISN Deputy Network Director. We also obtained documentation from and interviewed leadership at the National Acquisition Center, Strategic Acquisition Center, and the Technology Acquisition Center. At the selected VISNs, we interviewed leadership at their respective Network Contracting Offices and selected a non-generalizable sample of 35 total contracts and orders—29 of which were set aside for SDVOSBs or VOSBs—based on high dollar value and on procurements of construction, services, or supplies. For each of the selected contracts and orders, we reviewed the contract files and interviewed both the contracting officer and the customer—in most cases the contracting officer's representative. We also held roundtable discussions of Veterans First implementation, training, and other matters with 8 to 11 contracting officers at each location, randomly selected from the construction, services, and supply teams. We selected a non-generalizable sample of 12 contract actions from VISN 8, 11 contract actions from VISN 12, and 12 contract actions from VISN 16. The selection was based primarily on: contracts and orders that were set aside for SD/VOSBs; product and service codes for services and supplies; and awards with a total value above $1 million as well as those between $150,000 and $1 million.
We obtained and reviewed the contract files for each of the selected contract actions, which are also stored in eCMS, including signed awards, solicitations, market research reports, fair and reasonable price determinations, independent government cost estimates, statements of work, and other documents. We visited each of the Network Contracting Offices and interviewed the contracting officer for each of the selected contract actions and discussed the set-aside determination and their experiences with the Veterans First policy; because some were responsible for more than one, we interviewed 30 contracting officers for the 35 selected contracts and orders. We interviewed leadership at each location, and held 5 roundtable discussions with contracting officers from various product lines—supplies, services, construction, and leasing— whose contracts were not included in our non-generalizable sample. We also interviewed the customer—in most cases the contracting officer’s representative or subject matter expert—for each of the selected contract actions. Finally, we met with leadership at VA’s national contracting organizations—including the National Acquisition Center, Strategic Acquisition Center, Technology Acquisition Center, and Construction and Facilities Management—to discuss the implementation of the 2016 Veterans First policy within their organizations. To assess the extent to which VA has mechanisms in place to monitor compliance with subcontracting limitations by veteran-owned small businesses and the effectiveness of such mechanisms, we analyzed VA and Small Business Administration (SBA) acquisition policies and regulations to identify the monitoring mechanisms in place to ensure compliance with subcontracting limitations. To assess the effectiveness of VA’s mechanisms, we leveraged our reviews of files for the 29 selected contracts that were set aside, and we assessed whether the required set- aside and monitoring clauses were included. In cases where we selected an order, we reviewed the overarching indefinite delivery contract if it was awarded by VA. We also assessed the extent to which the files reflected evidence of monitoring. We reviewed VA’s Information Letter that established the Risk Management and Compliance Service’s Subcontracting Compliance Review Program and the program’s manual for conducting subcontracting limitations compliance audits and analyzed the audit results. We also assessed the extent to which these mechanisms met GAO internal control and fraud framework criteria. We interviewed senior VA procurement officials responsible for developing and/or implementing these mechanisms and providing training to contracting officers and contracting officers’ representatives. We also reviewed our prior work and SBA and VA Inspector General reports on VA and other agencies’ compliance with subcontracting limitations. We conducted this performance audit from October 2017 to September 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: Comments from the Department of Veterans Affairs Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Lisa Gardner, Assistant Director; Pete Anderson; Matthew T. Crosby; Susan Ditto; Jeff Hartnett; Alexandra Jeszeck; Teague Lyons; Lorraine Ettaro; Suellen Foth; Ashley Rawson; Eric Schwab; Roxanna Sun; and Alyssa Weir made key contributions to this report.
Why GAO Did This Study VA spends billions every year to procure goods and services and is required to give preference to veteran-owned small businesses when awarding contracts—a program known as Veterans First. In turn, those firms must comply with limitations on the use of subcontracting. A 2006 statute established Veterans First, and a 2016 Supreme Court decision clarified conflicting interpretations, resulting in changes to how VA must now implement the program. GAO was asked to review VA's implementation of Veterans First since the Supreme Court decision. Among other things, this report assesses the extent to which (1) changes occurred in procurement obligations to veteran-owned small businesses from fiscal years 2014 through 2017; (2) VA has encountered any challenges in implementing Veterans First policies; and (3) VA has mechanisms to oversee contractor compliance with subcontracting limitations. GAO analyzed VA regulations, policies, and contracting data; conducted three site visits; and reviewed a non-generalizable sample, selected based on factors such as high dollar value, of 35 contracts and orders, 29 of which VA awarded under Veterans First. What GAO Found GAO found that the percentage of Department of Veterans Affairs (VA) obligations set aside for veteran-owned small businesses under its Veterans First program was higher in 2017—the first full year following the 2016 Supreme Court decision—than in previous years. In its decision, the court clarified that VA contract competitions must be restricted to these businesses if they meet two criteria: (1) the contracting officer reasonably expects that at least two such businesses will submit offers, and (2) the award can be made at a fair and reasonable price and best value to the government. This has become known as the “VA Rule of Two.” VA created a new policy for implementing Veterans First following the 2016 decision. The percentage of obligations set aside for veteran-owned small businesses increased from fiscal years 2014 to 2017 (see figure). Contracting officers face challenges implementing aspects of Veterans First, some of which VA has addressed through policy and optional training. However, 12 of the 30 contracting officers GAO interviewed cited difficulty in assessing the second criterion of the VA Rule of Two when making a set-aside decision. Eight of them stated that they sometimes lacked confidence in their fair and reasonable price determinations. VA's training, however, does not fully address these more challenging aspects of implementing the Veterans First policy. More targeted training would provide VA with greater assurance that its contracting officers have the knowledge and skills necessary to implement the policy. Additionally, assessing whether training on this policy should be mandatory would allow VA to determine if it would be beneficial for all contracting officers. GAO found that VA conducts limited oversight of contractor compliance with limitations on subcontracting and has few mechanisms for ensuring compliance. For example, GAO found that the required clause for ensuring that veteran-owned small business contractors perform the required portion of work was either missing entirely or an outdated version was used in 11 of the 29 set-aside contract actions GAO reviewed. Without better oversight, VA is limited in its ability to detect violations and ensure that the goal of Veterans First—to promote opportunities for veteran-owned small businesses—is not undermined. 
What GAO Recommends GAO is making six recommendations, including that VA provide more targeted training for contracting officers, assess whether training should be mandatory, ensure required clauses are included in contracts, and improve oversight of compliance with subcontracting limitations. VA agreed with GAO's recommendations.
Background The Great Lakes-Seaway system's commercial shipping has traditionally been dominated by vessels carrying bulk commodities such as grain, coal, and iron ore, although there are differences between the shipping on the Great Lakes versus the St. Lawrence Seaway portions of the system. On the Great Lakes side, U.S.-flag (meaning registered in the United States) vessels are primarily "lakers"—meaning they stay on the Great Lakes and generally do not enter the St. Lawrence Seaway. This domestic Great Lakes traffic primarily consists of iron ore, limestone, and coal that are transported to serve the U.S. steelmaking industry. For example, U.S. lakers transport iron ore, mined in northern Minnesota, from Duluth to steel manufacturers at ports such as Burns Harbor, Indiana, and Toledo, Ohio, in the lower Great Lakes. U.S. law requires that maritime transport of cargo between U.S. ports be carried by U.S.-flag vessels. In contrast to the Great Lakes, the St. Lawrence Seaway is used primarily by Canadian- or foreign-flag vessels that carry cargo between and among U.S., Canadian, and overseas ports. For example, in 2015, 40 percent of St. Lawrence Seaway traffic, as measured by tonnage moved, consisted of cargos shipped between Canadian ports. Another 34 percent of 2015 Seaway traffic consisted of cross-border trade between U.S. and Canadian ports. Only 10 percent of Seaway traffic in 2015 was between overseas and U.S. ports. This trade is generally characterized as "steel in/grain out"—with imported iron and steel products entering the system destined for U.S. ports and U.S. grain leaving the system destined for overseas ports. For example, foreign vessels transport fabricated steel through the Seaway to manufacturing facilities in the Great Lakes region and then carry grain from the region back through the Seaway to overseas destinations such as Europe. The Great Lakes and St. Lawrence Seaway portions of the system also differ in how they are managed. On the St. Lawrence Seaway, which opened in 1959, the U.S. Seaway Corporation manages the Snell and Eisenhower locks, which are located in Massena, New York. Like all locks on the St. Lawrence Seaway, the Snell and Eisenhower are single locks without parallel locks for redundancy and are the same dimensions—about 766 feet long and 80 feet wide. On the Great Lakes, the Army Corps manages the Soo locks, which consist of two parallel locks: the larger Poe lock, completed in 1968 (1,200 feet long and 110 feet wide), and the smaller MacArthur lock, completed in 1943 (800 feet long and 80 feet wide). Many U.S.-flag laker vessels are restricted to using the Poe lock, as they are too large to fit in the MacArthur lock. The construction of a second Poe-sized lock at the Soo locks is currently under consideration. In 1986, Congress authorized the construction of a second Poe-sized lock, but funds sufficient to begin construction were never appropriated. In 2005, the Army Corps calculated a benefit-cost ratio of 0.73 associated with the construction of a second Poe-sized lock, which was not high enough to request funding. In January 2016, the Army Corps initiated an economic reevaluation of the project's benefit-cost ratio to update assumptions of the 2005 study. In July 2018, the Army Corps released its reevaluation study, which estimated the cost of constructing a new Poe-sized lock to be approximately $922 million with an updated benefit-cost ratio of 2.42.
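The benefit-cost ratios cited above compare the present value of a project's estimated benefits with the present value of its costs; a ratio above 1.0 indicates that estimated benefits exceed estimated costs. A minimal sketch of the basic arithmetic is below, using hypothetical annual benefits, discount rate, and analysis period; it is not the Army Corps' economic model or its actual inputs.

```python
# Illustrative benefit-cost ratio calculation; the annual benefit, discount
# rate, and analysis period are assumptions, not the Army Corps' inputs.

def present_value(annual_amount, rate, years):
    """Discount a constant annual amount received at the end of each year."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

construction_cost = 922e6   # approximate cost estimate cited in the 2018 study
annual_benefits = 100e6     # assumed annual benefits (e.g., avoided shipping disruptions)
discount_rate = 0.03        # assumed discount rate
analysis_period = 50        # assumed period of analysis, in years

bcr = present_value(annual_benefits, discount_rate, analysis_period) / construction_cost
print(f"Benefit-cost ratio: {bcr:.2f}")  # a ratio above 1.0 favors the project
```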
According to the Army Corps, the project will compete with other construction projects throughout the country through the agency’s budgeting process. The decision to fund the new lock also involves review by the Office of Management and Budget for inclusion in the President’s budget, and Congress will need to appropriate funds. The U.S. Seaway Corporation and Army Corps also differ in their size and role, for example: The U.S. Seaway Corporation. In addition to managing the two U.S.- operated locks on the St. Lawrence Seaway, the U.S. Seaway Corporation has a role in enhancing utilization of the entire Great Lakes- Seaway system. Its stated mission is to improve the operation and maintenance of a safe, reliable, and efficient waterway and to perform economic and trade development activities with the aim of enhancing utilization. In doing so, the Corporation works closely with its Canadian counterpart (the Canadian Seaway Corporation) to manage the binational St. Lawrence Seaway and provide information on the system to potential users. The U.S. Seaway Corporation is located within the U.S. Department of Transportation and has approximately 140 employees. The Army Corps. The Army Corps, located within the Department of Defense, maintains a wide range of water resources projects across the country—including the Soo locks—under its Civil Works Program. These projects include over 200 inland waterway locks, such as those along the Mississippi river and its tributaries. The Army Corps’ Civil Works Program is supported by approximately 22,000 civilian employees and is organized into three tiers: a national headquarters in Washington, D.C., eight regional divisions, and 38 local district offices. The Detroit District, which is responsible for the day-to-day maintenance and operation of the Soo locks, falls under the Great Lakes and Ohio River Division. Following the 2007 joint U.S.-Canadian study, the Army Corps and the U.S. Seaway Corporation developed asset renewal plans, which were originally intended to cover approximately 10 years and which focused on replacing or rehabilitating existing lock components to avoid unexpected lock closures. Both agencies complete routine maintenance and capital improvements on the locks during the 2–3 winter months the locks are closed to navigation every year due to weather conditions. Congress appropriates funding for both Army Corps’ and U.S. Seaway Corporation’s lock operations and maintenance from the Harbor Maintenance Trust Fund (trust fund). The trust fund is supported through collections of the Harbor Maintenance Tax (also sometimes called a fee), which is charged to vessels carrying U.S. domestic or imported cargo or passengers, primarily at coastal and Great Lakes ports. Congress also appropriates funds from the trust fund for other Great Lakes-Seaway purposes, including dredging (underwater debris removal) to maintain the depth of ports and channels for navigation. In the U.S. portions of the Great Lakes-Seaway, including ports and channels, dredging is primarily conducted by the Army Corps and to a lesser extent the U.S. Seaway Corporation. As of July 2013, the trust fund built up a balance of $8.5 billion. In 2014, Congress authorized targets to annually increase appropriations from the fund to reduce the balance, and required the Army Corps to allocate annually a minimum amount of funds for the Great Lakes-Seaway system. Two federal agencies within the Department of Homeland Security also have roles in the Great Lakes-Seaway. The U.S. 
Coast Guard ensures safety in various ways, including by maintaining a sufficient supply of certified U.S. pilots who board foreign vessels to ensure safe navigation. Specifically, the Coast Guard is responsible for annually setting the rates U.S. pilots on the Great Lakes-Seaway charge carriers (referred to as pilotage rates for the remainder of this report). In addition, the Coast Guard is also required by law to maintain heavy icebreaking capability on the Great Lakes to assist in keeping channels and ports open to navigation. Meanwhile, U.S. Customs and Border Protection is responsible for screening cargo and passengers entering the United States at ports of entry, including Great Lakes ports. Great Lakes-Seaway Cargo Levels Have Decreased since 1980 due to Various Economic Factors but Selected Stakeholders Report Recent Increased Diversity of Uses Stakeholders Identified a Variety of Economic Factors Associated with Decreased Cargo Levels on the Great Lakes-Seaway since 1980 The amount of cargo transported annually on the Great Lakes-Seaway—specifically for U.S. Great Lakes domestic and Seaway cargo—has generally declined since 1980 (see fig. 3). The Great Lakes U.S. domestic "laker" cargo traffic declined from about 115 million tons in 1980 to about 78 million tons in 2016—a decline of 32 percent—according to data from the Army Corps' Waterborne Commerce Statistics Center. As noted in figure 3, the trend includes many noticeable year-to-year changes over this time period, which may be in response to broader economic factors, as discussed below. Meanwhile, cargo traffic on the St. Lawrence Seaway, which, as described earlier, is primarily transported by Canadian and foreign vessels, declined by 48 percent over the same time period, from about 74 million tons in 1980 to about 39 million tons in 2016, according to Seaway Traffic Data. Between 2001 and 2016, domestic Great Lakes cargo traffic levels were driven primarily by iron ore, limestone, and coal—three commodities that are closely tied to the steel industry (see fig. 4). Specifically, these three commodities accounted for 90 percent of the total of about 78 million tons in domestic Great Lakes traffic in 2016—iron ore alone comprised 50 percent. Great Lakes domestic tonnage declined by about 22 million tons overall from 2001 to 2016, with declines in iron ore, limestone, and coal totaling about 21 million tons. Army Corps officials noted that other commodities such as wheat also have a presence on the Great Lakes, with over 5 million tons of wheat traveling on the Great Lakes in 2017. In contrast to the domestic Great Lakes cargo traffic, the top five commodities on the St. Lawrence Seaway, which comprised 70 percent of total cargo traffic in 2016, show a more varied picture of the types of commodities and trends from 2001 to 2016 (see fig. 5). Grain, the top commodity transported on the St. Lawrence Seaway, comprised nearly a third of total Seaway traffic in 2016. Like the domestic Great Lakes traffic, iron ore and coal have a significant presence on the St. Lawrence Seaway, together comprising 24 percent of cargo traffic in 2016. In contrast to domestic Great Lakes traffic, iron and steel constitute key commodities on the St. Lawrence Seaway, declining from about 3.2 million tons in 2001 to about 2.4 million tons in 2016. Nearly all such iron and steel transports are imports destined for U.S. or Canadian ports.
For example, some specialty steel used to package food in cans is manufactured in Europe and imported for use in the United States. Several stakeholders we interviewed told us that a balance between inbound iron and steel shipments and outbound grain exports are important in providing shipping capacity in both directions. Stakeholders identified various economic factors that have affected Great Lakes-Seaway cargo traffic levels since the 1980s: Global economic factors. Many stakeholders noted that year-to-year trends in global prices for commodities such as grain, iron ore, and steel affect Great Lakes-Seaway cargo traffic levels. For example, two stakeholders told us that U.S. iron ore is exported through the St. Lawrence Seaway when global iron ore prices are high, allowing producers to cover the costs of shipping while also being price competitive internationally. Further, some stakeholders reported that the increase in globalization since 1980 has resulted in greater foreign competition to U.S. and Canadian commodities exported via the Great Lakes-Seaway. For example, one stakeholder noted that countries that were grain importers in the 1980s, such as Russia, have since become grain exporters, competing with U.S. and Canadian grain internationally. Grain traffic on the St. Lawrence Seaway fell by over 60 percent from about 32 million tons in 1980 to about 12 million tons in 2016, with nearly the entire decline occurring prior to 2001. Domestic economic factors. Several stakeholders told us that Great Lakes-Seaway cargo traffic rises and falls in conjunction with general economic conditions and trends, such as a sharp decline during the recession in 2009 (see fig. 3 above). For example, one stakeholder reported that a trend in the U.S. economy toward a more service- based rather than manufacturing-based economy has affected Great Lakes-Seaway traffic, reducing demand for manufacturing inputs such as iron ore. As we reported in 2013, manufacturing has accounted for a decreasing share of U.S. employment and economic output over the last several decades. Industry-specific changes. Changes in industries that have relied on the Great Lakes-Seaway for the transportation of input materials have affected cargo trends, according to several stakeholders. For example, demand for iron ore has been affected by the U.S. steel industry’s move towards smaller manufacturing plants, which are located away from the Great Lakes and which use recycled metal and do not require iron ore. Between 2001 and 2016, domestic Great Lakes tonnage of iron ore declined by 14 percent, from about 45 million to about 39 million tons (see fig. 4 above). Several stakeholders also told us that changes in the power generation industry have reduced shipments of coal. For example, environmental concerns and competitive natural gas prices have led some utilities in Canada and the United States to close coal-fired facilities. St. Lawrence Seaway coal tonnage from 2001 to 2016 declined by 53 percent, from about 5.3 million to about 2.5 million tons (see fig. 5 above). Greater competition among modes. Several stakeholders said that certain other transportation modes have become more competitive with the Great Lakes-Seaway. For example, several told us that the use of shipping containers—which enable easy intermodal transfer between waterways, highway, and rail—has grown dramatically worldwide in the past several decades with implications for modal competition and the Great Lakes-Seaway. 
As we previously reported, the largest container vessels in 2016 could carry nearly 18,000 standard 20-foot shipping containers, roughly twice as many as in 2005. However, most modern containerships are too large to use the Great Lakes-Seaway locks and container service on the system is limited. Three stakeholders that sometimes use the Great Lakes-Seaway to import cargo reported that they can also import cargo to the Midwest via coastal ports, where containers can be transferred from container ship to truck or rail for inland delivery. While traffic on the Great Lakes-Seaway has generally declined since 1980, according to data published by the U.S. Bureau of Transportation Statistics, U.S. railroad freight nearly doubled from 1980 to 2015, from about 932 billion to about 1.7 trillion ton-miles. Stakeholders Report Recent Increased Diversity in Uses of the Great Lakes-Seaway Stakeholders reported a recent increase in the diversity of uses of the Great Lakes-Seaway, although bulk commodities continue to constitute the majority of the 78 million and 39 million tons of domestic Great Lakes and St. Lawrence Seaway cargo traffic in 2016, respectively. The reported increase in the diversity of uses includes: Project cargo. Some stakeholders told us shipments of project cargo—specialty items that may be difficult to move by rail or truck due to width or weight limits, such as windmill blades, beer fermentation tanks, and mining equipment—have increased in recent years. The tonnage of St. Lawrence Seaway traffic composed of machinery and other manufactured products, which encompass project cargo, grew from about 657,000 tons in 2001 to about 1.1 million tons in 2016. Project cargos are typically chartered on an as-needed basis. One stakeholder said that carriers would need to offer more ships capable of carrying project cargo as a prerequisite for any large future increases in project cargo. Containers. Although containers continue to represent a small fraction of total cargos on the St. Lawrence Seaway, container traffic on the Seaway more than tripled from 18,156 tons in 2001 to 64,984 tons in 2016. The only regular container service on the system began in 2014 and operates between ports in Cleveland and Antwerp, Belgium. The service is offered through a partnership between the Port of Cleveland, where officials told us they view the service as a way to attract traffic, and a Dutch carrier, whose representatives view it as a way to educate U.S. manufacturers on the advantages of maritime transportation. Representatives from the carrier said that the service offers 44 sailings annually. Cruises. Several stakeholders said that small passenger cruises on the Great Lakes-Seaway have grown recently and have the potential for further growth. Some of those stakeholders said that the region affords advantages including a variety of scenic destinations. A typical cruise may begin and end in Chicago and Toronto, both of which have air connections for arriving and departing passengers. An official from the U.S. Seaway Corporation said that the number of cruise ships operating on the system grew from 5 to 8 and the number of voyages offered grew from 54 to 92 between 2014 and 2018. The official said that additional ships and voyages are expected in the future. Selected Stakeholders Identified Various Challenges to Using the Great Lakes-Seaway, but the U.S.
Seaway Corporation Has Not Fully Assessed Risks Traditional and Emerging Great Lakes-Seaway Uses Face a Range of Challenges, According to Stakeholders Stakeholders we met with identified a range of challenges to using the Great Lakes-Seaway and noted that these challenges pose risks to the future use of the system. Although many of the challenges that stakeholders identified—such as the annual winter closure—affect all users of the system, some challenges may affect the system's various users differently. Specifically, some challenges directly affect the "traditional use" of the system—including the transport of bulk cargos such as iron ore, grain, and steel—while other challenges primarily affect "emerging use" of the system, such as the cruise industry and container market, as discussed below. The cumulative effect of all the challenges represents costs and system reliability risks to shippers that can erode the advantages that the system has traditionally offered over other transportation modes. For example, a representative from one shipping company told us the company frequently compares the cost of using the Great Lakes-Seaway to other modes and noted that the margin favoring the Great Lakes-Seaway is becoming narrower due to the system's various challenges. Challenges to Traditional System Use Stakeholders identified several challenges that affect traditional uses of the Great Lakes-Seaway, including transport of dry bulk commodities and imported steel. Recent Increase in Pilotage Rates: The majority of stakeholders we interviewed reported that recent increases in the cost of securing pilots, who are intended to ensure safe navigation, have significantly increased costs for foreign oceangoing vessels operating in the Great Lakes-Seaway. Federal law requires that certified pilots board foreign vessels while in the Great Lakes-Seaway. A pilot may be on board for multiple days on a single voyage, given the size of the system. As part of its responsibility to set the rates that pilots charge carriers on the Great Lakes-Seaway, the U.S. Coast Guard revised the methodology used to calculate the rates in 2016. Coast Guard officials told us the methodology had not changed since the mid-1990s and changes were needed to bring rates up to a sufficient level to attract and retain pilots. Specifically, according to the Coast Guard, the number of pilots in the region decreased from 44 in 2007 to 36 in 2014, resulting in pilot shortages and traffic delays. In response, the Coast Guard raised rates. For example, in the St. Lawrence River portion of the system, pilotage rates increased 23 percent between 2014 and 2016. According to one carrier association we interviewed, pilotage is one of the largest cost items for foreign vessels entering the system. Similarly, representatives from a carrier association told us pilotage rates are a primary challenge affecting the cost competitiveness of the system compared to truck and rail. The methodology used to calculate rates was revised further in 2017 and 2018, and Coast Guard officials report that the recent updates have adjusted certain factors, such as eliminating a weighting factor based on the size of the vessel. According to Coast Guard officials, these changes corrected factors that were not properly accounted for in previous years and effectively lowered rates compared with 2016. The Coast Guard also authorized an increase in the number of registered pilots, from 36 in 2014 to 45 in 2017.
Condition of the Poe-lock Infrastructure: Several stakeholders that operate on the Great Lakes told us that they are concerned about the condition of the Poe lock (see fig. 6). One Great Lakes shipper representative told us that they believe the Poe lock is at critical risk of lock failure that could result in an unplanned outage and disrupt the U.S. steel industry, which has limited alternatives (rail or truck) to move large amounts of iron ore from Minnesota and Michigan’s Upper Peninsula to steel manufacturing plants in the lower Great Lakes. As mentioned previously, many U.S. laker vessels can only fit in the larger Poe lock at the Soo locks due to vessel size. For example, the Army Corps estimated that 85 percent of the tons of cargo travelling through the Soo locks in 2017 were restricted to using the Poe lock. A representative from a Great Lakes carrier told us that a closure of the Poe lock for repairs during the shipping season could pose further challenges to using the system, since there is currently no redundant Poe-sized lock to which traffic could be diverted. As discussed below, Army Corps officials note they currently lack the means to replace the Poe lock’s upper miter gate—which was identified as critical in 2007—without disrupting navigation. The Army Corps’ asset renewal efforts to improve lock condition, including the Poe lock, are discussed in greater detail below. Regulatory Complexity Related to Ballast Water: Several agencies are involved in regulating ballast water in the Great Lakes-Seaway, and several stakeholders reported that the complexity of the regulatory environment poses a challenge to using the system. Ballast water is taken up or discharged in a vessel’s tanks to improve stability during voyages and when cargo is loaded or unloaded. Ballast water regulations are aimed at preventing the introduction of invasive species collected in foreign waters from transoceanic vessels and discharging them into the Great Lakes. These regulations involve joint U.S.-Canadian Seaway regulations as well as requirements from the U.S. Coast Guard, U.S. Environmental Protection Agency (EPA), and some states. Specifically, under the current framework, all oceanic vessels bound for the Great Lakes-Seaway are tested to meet the ballast water discharge standards established by the U.S. Coast Guard and the EPA. Most lakers, which are confined to the Great Lakes and unlikely to introduce new aquatic invasive species from outside the Lakes, are not subject to the Coast Guard and EPA requirements. In addition, states are authorized to establish their own vessel discharge control measures, and according to an industry association, several Great Lakes states have their own ballast water requirements. One carrier association representative told us that the various ballast water regulations can cause confusion over how the regulations apply across the system. U.S. Seaway Corporation officials said they are aware of these issues and since 2007, the U.S. and Canadian Seaway Corporations have been operating under harmonized, joint ballast water regulations intended to eliminate confusion among users of the system. In addition, both Corporations participate in the Great Lakes Seaway Ballast Water Working Group, which is comprised of representatives from the U.S. Coast Guard and others. The group’s mission is to coordinate regulatory, compliance, and research efforts to reduce the introduction of aquatic invasive species via ballast water. 
The working group reported in 2018 that such coordination will help minimize the creation of a patchwork of inconsistent regulations. Effect of insufficient dredging: Several stakeholders we met with said that insufficient dredging—removal of sediment and debris from the bottom of ports to maintain water depths that allow vessels to carry maximum loads—can pose a challenge to using the Great Lakes-Seaway. In particular, a stakeholder noted that the Army Corps, which is responsible for dredging the major U.S. ports on the Great Lakes, has limited capacity to keep up with all ports' dredging needs, and that this situation can lead to vessels having to engage in "light loading"—filling to a lower capacity to reduce vessel weight—to access affected ports. The Army Corps reported in 2018 that its dredging backlog has decreased to 13.5 million cubic yards from a high of 18 million in 2013. One stakeholder that uses the Great Lakes-Seaway to ship iron ore told us that light loading causes steel mills to operate at lower capacity when they do not receive the required amount of iron ore. Army Corps officials told us that high water levels in recent years have allowed vessels to carry more tons of cargo. However, because water levels fluctuate over time, those conditions could change and affect load efficiency. Challenges to Emerging System Use Stakeholders also identified challenges that particularly affect emerging uses of the Great Lakes-Seaway, such as the cruise industry and container market. Winter closure: The majority of stakeholders we interviewed told us the annual winter closure hurts the system's competitiveness because shippers must either stockpile their cargo or find alternative modes of transport during the winter months. While winter closure has been a long-standing feature of the system, it poses a particular challenge for the emerging container market since, as a stakeholder from a carrier association noted, containerized cargo is often time-sensitive and cannot be stockpiled. Securing an alternative transportation mode during the winter closure may be challenging because railroads, for example, prefer to sign year-round contracts for shipping rather than shorter-term winter arrangements. Additionally, some stakeholders told us that a lack of icebreaking during the start and end of the season, particularly during severe winters, has caused vessel delays. The U.S. Coast Guard's icebreaking fleet consists of nine vessels on the Great Lakes. In 2016, a U.S. Coast Guard report identified some icebreaking issues that led to 3- and 6-week delays in 2010. The report detailed actions the U.S. Coast Guard took to mitigate future delays, including moving an icebreaking vessel's home port to a Great Lakes port, but also noted that procuring an additional heavy icebreaker is not cost-effective. The potential for ice-related delays was demonstrated in January 2018, when a vessel became frozen in the U.S. Seaway Corporation's Snell lock during extreme weather conditions, delaying five vessels and necessitating the system's closure for 11 days. Efforts to free the vessel included the use of ice-melting equipment and tugboats. Limited U.S. Customs and Border Protection resources for clearing passengers and container cargo: Several stakeholders we interviewed told us that U.S. Customs and Border Protection's limited capacity to process container cargo and passengers poses a challenge for emerging system uses. U.S.
Customs and Border Protection is responsible for inspecting travelers and imported cargo that enters the United States, including at the ports of entry in the Great Lakes region. U.S. Customs and Border Protection officials told us that their procedures for processing containers and passengers are more involved than those for traditional bulk cargos and that processes differ by port. For example, at the Port of Detroit, cruise passengers are transported by bus to facilities a few miles away for processing. According to a representative from a cruise industry association, this processing creates delays and poses a challenge to the developing cruise industry. Officials from U.S. Customs and Border Protection offices in the Great Lakes region told us that their resources for processing passengers and cargos are located at main ports of entry (such as airports) and that Great Lakes ports lack appropriate facilities, tools, technology, equipment, and personnel. These same officials said that if the Great Lakes ports were to handle increasing numbers of passengers and containers, U.S. Customs and Border Protection would need sufficient time and budget to add inspection equipment, but that port operators would need to bear the costs of upgrading their facilities. Inadequate portside infrastructure: Some stakeholders told us that many of the ports along the Great Lakes-Seaway were developed to support bulk commodities—such as iron ore, coal, and grain—and are not equipped to easily handle containers. Bulk commodities do not require portside equipment at destination ports since they are transported by self-unloading vessels and are often delivered straight to private docks, such as iron ore delivered to a steel manufacturing facility. As such, Great Lakes ports generally lack multimodal connections that enable transfer of containers from vessel to truck and rail routes. A representative from a company that ships containers on the Great Lakes-Seaway told us that the port nearest its location does not have cranes to handle containers. Instead, the company uses a different port that is farther away because it has the infrastructure necessary to ship containers. Port representatives told us that financing options exist to make upgrades to port infrastructure, but consistent and sustainable traffic levels are needed in order to justify investments. For example, an official from the Port of Cleveland told us the port has access to its own financing and has added infrastructure to create its container business, including cranes, storage warehouses, and right-of-way for rail connections, using revenue bonds issued by the board that oversees the port. An official from the Port of Indiana told us that the port lacks infrastructure to handle containers, but it would find the financing to make investments in container equipment if there were a consistent stream of business. The U.S. Seaway Corporation Has Not Fully Assessed the Risks That Challenges Pose to System Utilization Although U.S. Seaway Corporation officials told us they are aware of system challenges cited by stakeholders, the Corporation has not fully assessed the extent to which the challenges pose risks to the use of the Great Lakes-Seaway. As previously noted, the U.S. Seaway Corporation's stated mission is to improve the operation and maintenance of a safe, reliable, and efficient waterway and to improve regional economic and trade development by enhancing utilization of the entire Great Lakes-Seaway system. To achieve this mission, the U.S.
Seaway Corporation’s strategic plan includes several goals, such as increasing the volume and value of commercial trade through the Great Lakes Seaway System, while promoting cost-effective competition for all users. To achieve these goals, the plan lists several actions, including developing initiatives to improve capacity of the system, and working with carriers, ports, pilots, and other stakeholders to contain costs and foster increased trade in the region. For example, the U.S. Seaway Corporation has taken steps to improve the condition of lock infrastructure—as discussed in greater detail below—and in 2015, hired a full-time employee, stationed in Cleveland, Ohio, who is responsible for advancing the Corporation’s trade and economic development activities in the Great Lakes region. However, the Corporation has not taken steps to identify, analyze and monitor challenges that affect use of the system, such as those identified by the stakeholders we interviewed. The Standards for Internal Control in the Federal Government states that assessing risks and monitoring changes are key to achieving objectives. Specifically, management should analyze identified risks to estimate their significance, which provides a basis for responding to the risks, and design responses to the analyzed risks so that risks are within the defined risk tolerance for the defined objective. The standards also note that monitoring is key to ensuring that the process used by management to help achieve its objectives remains aligned with changing environments, laws, and resources. The importance of understanding risks to system use in the Great Lakes Seaway was also emphasized by the Conference of Great Lakes and St. Lawrence Governors and Premiers. This conference, made up of Governors and Premiers of the eight states and two Canadian provinces along the Great Lakes-Seaway, developed a 2016 strategy that delineated system challenges and called for an analysis of the total costs of moving cargo through the system and how this compares to other modes. U.S. Seaway Corporation officials told us they are supportive of the Conference’s strategy but are not working to implement this analysis or other elements of the strategy. Although some actions have been taken to address challenges, officials from the U.S. Seaway Corporation told us that the Corporation has not fully assessed risks to Great Lakes-Seaway use, in part because the Corporation does not have a formal or standing process to monitor risks over time. The U.S. Seaway Corporation has worked closely with other federal agencies over the years, including the Army Corps and Coast Guard, to address challenges. For example, in 2007, it played a role in the joint U.S.-Canadian study that focused attention on the system’s infrastructure, and the Corporation has worked with the Coast Guard and others in the Great Lakes Seaway Ballast Water Working Group. In addition, although U.S. Seaway Corporation officials told us that they have a limited role in addressing challenges involving other agencies, the U.S. Seaway Corporation has some experience assessing system risks that could be useful in better understanding and addressing challenges facing system users. For example, in 2012, the U.S. Seaway Corporation was involved in a study led by the Canadian Seaway Corporation that examined the cost-competitiveness of the Great Lakes-Seaway and included a discussion of risks. 
These efforts could be useful in developing a process to track risks and monitor how they evolve over time and in relation to current shipping trends so that further actions could be taken to address challenges faced by traditional and emerging users of the system. Establishing a process to assess and monitor system risks would provide the U.S. Seaway Corporation with greater assurance that the actions taken by the Corporation, including those listed in its strategic plan, and by other stakeholders are working to improve future utilization and ensure efficient use of the system. Without a formal assessment of risks, the U.S. Seaway Corporation lacks information on the cumulative effect of the challenges faced by users of the system, limiting its ability to inform its future actions to help address those challenges. The U.S. Seaway Corporation and the Army Corps Have Made Progress on Lock Asset Renewal Efforts, but the Army Corps Lacks Associated Goals and Measures Both Agencies Have Made Progress on Lock Asset Renewal Efforts, but the Army Corps Has Yet to Start Work on a Project Identified as Critical in 2007 The Army Corps and the U.S. Seaway Corporation developed asset renewal plans, in fiscal years 2007 and 2009, respectively, which were originally intended to cover approximately 10 years and focused on modernizing, rehabilitating, or replacing existing lock components to avoid unexpected lock closures. Within a lock there are a number of structural, mechanical, and electrical components that must work together (see fig. 7). Key lock components covered in the agencies' asset renewal plans include the following:
Approach walls—Help guide the vessel as it approaches the lock chamber and provide a place for the vessel to tie up while waiting to enter the chamber.
Lock chamber—Concrete structure with rock or concrete floors that contains the vessel while water flows to empty or fill the chamber. The lock structure houses the culvert valves, which fill and empty the lock.
Miter gates—Steel structures that first function as a dam to prevent free flow of water through a lock, then open and close to allow vessels to transit through the lock. The ends of the gates are mitered (angled) and use the difference in water levels to provide the force necessary to achieve a nearly water-tight seal.
Embedded anchorages—The connection point between the miter gates and lock walls, which transfers the load from the gate to the lock wall during the opening and closing of the gates.
In the decade since beginning these efforts, the Army Corps and U.S. Seaway Corporation have made progress on asset renewal. The Army Corps' asset renewal efforts have a total estimated cost of about $310 million for work through 2035. Meanwhile, the U.S. Seaway Corporation's asset renewal efforts have a total estimated cost of $189 million for work through 2023 (see fig. 8). (See appendix II for a complete list of both agencies' asset renewal projects.) According to the Army Corps' most recent asset renewal plan from 2016 and updates provided by Army Corps officials in May 2018, to date, the Army Corps has spent about $53 million on 18 completed projects out of the approximately $86 million it has received since 2008 (see below for more information on funding received per year for both agencies). The U.S. Seaway Corporation estimates it has spent $45 million on 16 completed projects out of the approximately $137 million it has received since 2009. 
According to the Army Corps' estimates, it has about $257 million in remaining and ongoing work through 2035. Meanwhile, the U.S. Seaway Corporation estimates it has almost $144 million in remaining and ongoing work through 2023. Officials from both agencies stated that asset renewal plans will transition to ongoing capital investment programs that will continue into the foreseeable future. Army Corps Detroit District officials also emphasized that the list of asset renewal projects frequently changes to account for new information, such as the results of facility inspections. These officials also noted that a project's inclusion in the asset renewal plan does not obligate future funds on behalf of the Army Corps, since all projects must compete for funding as part of the annual budget process. Furthermore, these Army Corps officials noted that the total cost estimate could decrease if a second Poe-sized lock is constructed, since traffic could be diverted to the new lock, allowing the current lock to be taken out of service for repairs. Both agencies have also made progress addressing critical projects identified in the 2007 study, but the Army Corps faces obstacles in finishing key projects without disrupting traffic through the Poe Lock. In the 2007 study, the U.S. Seaway Corporation and the Army Corps identified several critical projects to improve the condition of their respective locks (see table 1). The U.S. Seaway Corporation has completed its rehabilitation of the downstream miter gates on both locks and started work on a long-term project to rehabilitate concrete on the Eisenhower lock. Of the three key Army Corps projects identified in the 2007 study, one is complete, one is ongoing, and one remains. Specifically, the Army Corps has not started work to replace the Poe lock's upper miter gate because Army Corps officials said they lack the means to replace the gate without disrupting navigation. In the short term, Army Corps officials said they now plan to repair the gate and have requested $2 million in appropriated funds in fiscal year 2019 for the first phase of this work. Army Corps officials also noted they have ongoing work to reinforce the West Center Pier, which has eroded over time and forms the approach channel for both the Poe and MacArthur locks. However, these officials reported that the cost to complete the work differs greatly ($82.6 million versus $7.5 million) depending on whether a second Poe-sized lock is constructed, since more expensive construction methods are currently needed to avoid disrupting traffic. In addition to addressing key projects from the 2007 report, over the past decade the Army Corps and U.S. Seaway Corporation have undertaken projects to address emergent issues and make operational improvements to lock infrastructure. For example, in late July 2015, the Army Corps identified the MacArthur lock's embedded gate anchorages as a critical issue requiring immediate attention. It closed the MacArthur lock for 19 days during the navigation season in August 2015 to address the issue, at a project cost of $5.8 million. Meanwhile, the U.S. Seaway Corporation is working to install "hands-free mooring" at both of its locks, which is intended to improve the efficiency of lock operations. 
Hands-free mooring was developed by the Canadian Seaway Corporation, is being deployed on all Seaway locks, and eliminates the need for conventional lines to secure a vessel during the lockage process—instead, arms along the side of the locks extend and secure the vessel using vacuum pads. Once fully implemented, the system is expected to produce benefits such as improved workplace safety and a reduction of approximately 7–10 minutes in the time to transit a Seaway lock in each direction. The U.S. Seaway Corporation expects to have the system completed by the end of the 2019 shipping season, at a total cost of about $18 million, about $7 million of which had been spent through 2016. The Army Corps and the U.S. Seaway Corporation differ in the level of funding they have received for asset renewal efforts in the past decade, which may have influenced the pace of those efforts. Through fiscal year 2017, the Army Corps received about $86 million (starting in fiscal year 2008) and the U.S. Seaway Corporation received about $137 million (starting in fiscal year 2009) (see fig. 9). Army Corps officials noted they received an increase in funds in 2009 due to the American Recovery and Reinvestment Act of 2009 as well as more stable recent funding due to the Water Resources Reform and Development Act of 2014, which, as mentioned earlier, required the Army Corps to allocate annually a minimum amount of funds for the Great Lakes-Seaway. However, individual Soo Lock asset renewal projects must compete for funding with other Army Corps projects across the country at the district, division, and headquarters level, based in part on a project's risk rating. In contrast, the U.S. Seaway Corporation is a much smaller organization and directly allocates its funding to projects based on its own condition assessments. The U.S. Seaway Corporation Has Established Goals and Measures for Asset Renewal Efforts but the Army Corps Lacks Goals and Measures for the Soo Locks The U.S. Seaway Corporation has a lock performance goal and measure that officials use to monitor its asset renewal efforts, in accordance with government internal control standards, but the Army Corps does not have such a goal specific to the Soo locks. Standards for Internal Control in the Federal Government states that agencies should define objectives clearly and in measurable terms so that performance toward achieving those objectives can be assessed. Similarly, Leading Practices in Capital Decision-Making states that organizational goals should be integrated into the capital decision-making process and that agencies should use performance measures to evaluate results of capital projects to determine if goals have been met. As part of the Department of Transportation's annual performance reports, the U.S. Seaway Corporation reports its annual progress toward its goal of maintaining 99 percent system availability of the U.S. portion of the Seaway during the navigation season. This measure includes times the system is unavailable for three key reasons: vessel incidents, weather, and lock outages. Of these reasons, the U.S. Seaway Corporation has the most direct control over lock outages. U.S. Seaway Corporation officials told us they use this information, particularly on lock outages, to assess the effect of the Corporation's asset renewal efforts on lock performance, as part of its agency goal to reduce the risk of delays due to lock equipment failure. 
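The availability measure described above is, in essence, an accounting of downtime against the length of the navigation season. The short Python sketch below illustrates one way such a measure could be computed; the season length, downtime hours, and category breakdown are hypothetical assumptions for illustration only and are not actual Seaway data.

    # Minimal sketch of a season-availability calculation using hypothetical
    # figures (not actual Seaway data). Availability is the share of the
    # navigation season during which the U.S. portion of the Seaway was open.
    SEASON_DAYS = 285                 # assumed navigation season length
    SEASON_HOURS = SEASON_DAYS * 24

    # Hypothetical downtime, in hours, for the three reported causes.
    downtime_hours = {
        "vessel incidents": 6.0,
        "weather": 30.0,
        "lock outages": 12.0,
    }

    total_downtime = sum(downtime_hours.values())
    availability = 1 - total_downtime / SEASON_HOURS

    for cause, hours in downtime_hours.items():
        print(f"{cause}: {hours:.1f} hours of downtime")
    print(f"System availability: {availability:.2%}")   # about 99.3% with these inputs

Constructed this way, the lock-outage component is the portion of downtime most directly affected by asset renewal work, which is consistent with the Corporation's particular focus on that category.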
The Army Corps has not established specific operational goals or metrics for the Soo locks that can be used to evaluate the outcomes of its asset renewal efforts. In its annual financial report, the Army Corps Civil Works program has a nationwide strategic goal to facilitate the transportation of commercial goods on the nation's coastal channels and inland waterways and a corresponding goal and measure for the number of instances where mechanically driven failure at locks resulted in delays of more than a day or week. This national measure aggregates Army Corps locks across the country, including over 200 in the inland waterways such as the Mississippi River. However, this national goal and measure does not provide information on the operational performance of individual locks, including the Soo locks. Detroit District Army Corps officials told us that they have not established operational goals or measures specific to the Soo Locks because the Army Corps' project approval process involves prioritization based on risk rather than operational performance. Specifically, these officials noted that asset renewal efforts are measured by improved risk scores, which indicate higher reliability and less likelihood of unscheduled outages. While this process allows the Army Corps to prioritize individual investment decisions according to risk, it does not define a specific measurable goal for the operational performance of the Soo Locks. As a result, the Army Corps lacks a key tool to assess whether the investments made in the locks have resulted in improved lock performance, such as reductions in outages and delays to its users. Furthermore, the Detroit District has access to information that could be used to develop performance measures for the Soo Locks—specifically, the Lock Performance Monitoring System, which contains lock operations data such as scheduled and unscheduled outages. According to Detroit District officials, these data are used for the Army Corps' nationwide lock performance measure. The Army Corps has previously noted the need for local lock performance goals and measures to improve asset management. In December 2006, the Great Lakes and Ohio River Division, which has the Soo locks in its jurisdiction, recommended in a 5-year plan the development of specific goals for the Great Lakes navigation system for use in prioritizing investments, but the plan has not been updated since then. Furthermore, a 2013 Army Corps-commissioned report on best practices in asset management recommended the development of key performance indicator target values to monitor the effectiveness of asset management. Likewise, a senior official in the Army Corps' Asset Management Program Office—which shares leading asset management practices across the Corps—stated that local and regional offices have the ability to develop local lock performance goals and measures to assess local results. This official also noted that goals and measures to evaluate the progress of asset renewal efforts and lock performance would allow for greater transparency to stakeholders. Without goals and associated measures for the Soo locks, the Army Corps cannot link its asset renewal efforts to improved lock performance and cannot demonstrate the effect of these efforts to stakeholders. Conclusion The Great Lakes-Seaway serves as an essential transportation route linking U.S. manufacturing, agricultural, and other industries in the nation's interior to the global economy. 
Yet, this system faces various challenges that, according to stakeholders, pose risks to traditional and emerging uses that could limit the system’s ability to enhance the region’s economy. The U.S. Seaway Corporation’s mission to improve the system’s utilization and reliability provides it with a unique vantage point for assessing the cumulative risks that these challenges pose on the system’s current and future utilization. Establishing a process for identifying, analyzing, and monitoring the system’s risks would better enable the U.S. Seaway Corporation to design future actions that it, and other stakeholders, could take to address those risks. Similarly, the Army Corps’ efforts to rehabilitate the Soo locks are critical to U.S. manufacturing and trade in the Great Lakes region. Regardless of the outcome of the decision on whether to build another Soo lock, the importance of the Poe lock remains, as indicated by the concerns raised by stakeholders regarding its condition. Given the criticality of the Poe lock and the more stable funding for asset renewal since 2014, it is important that the Army Corps assess these funds’ potential effect on the Soo locks’ performance. Without establishing goals and measures for the Soo locks, the Army Corps is not able to demonstrate whether the substantial investments made so far and planned in the future will improve the Soo locks’ performance and by extension, the reliability of the Great Lakes navigation infrastructure. Recommendations for Executive Action We are making the following two recommendations: The Administrator of the U.S. Seaway Corporation should establish a process to identify, analyze, and monitor risks to the system’s use to inform future actions to address those risks. (Recommendation 1) The Army Corps Director of Civil Works should, in coordination with the Commanders of the Great Lakes and Ohio River Division and the Detroit District, develop and adopt goals and measures to assess the performance of the Soo Locks and assess outcomes of asset renewal efforts. (Recommendation 2) Agency Comments We provided a draft of this product to the Departments of Defense, Transportation, and Homeland Security for comment. In comments, reproduced in appendixes III and IV, the Departments of Transportation and Defense concurred with our recommendations. All three departments also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, the Secretary of Defense, the Secretary of Homeland Security, and other interested parties. In addition, this report is available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology This report examines (1) how Great Lakes-St. Lawrence Seaway (Great Lakes-Seaway) shipping trends have changed since 1980 and what factors have shaped recent trends, (2) selected stakeholders’ perspectives on challenges to using the Great Lakes-Seaway, and (3) to what extent the U.S. Army Corps of Engineers (Army Corps) and the Saint Lawrence Seaway Development Corporation (U.S. 
Seaway Corporation) have made progress on lock infrastructure renewal efforts and how the agencies measure performance of these efforts. To understand shipping trends, we analyzed cargo traffic by tonnage for both the St. Lawrence Seaway (published jointly by Canada’s St. Lawrence Seaway Management Corporation and the U.S. Seaway Corporation) and for domestic Great Lakes cargo traffic (from the Army Corps’ Waterborne Commerce Statistics Center) from 1980 to 2016. Although the Seaway data represents all cargo traffic that travels on the St. Lawrence Seaway, we analyzed the Army Corps’ domestic data, which accounts exclusively for traffic between U.S. ports on the Great Lakes system. As a result, some cargos that travel on the Great Lakes— such as between U.S. and Canadian ports or between Canadian ports— are not included, although such movements would be captured in the Seaway data to the extent they enter the Seaway. Although the Army Corps’ data include information on Canadian and foreign cargo, we did not analyze or report this information because (1) of the limitation, which we confirmed with Army Corps officials, that the data exclude Great Lakes cargo movements between Canadian ports and (2) including this information would potentially double-count trips that also entered the St. Lawrence Seaway. We selected the 1980 to 2016 timeframe because it provides a sufficient timeframe to describe long-term trends using consistently collected data from both sources and 2016 is the most recent year for which both sources have published data. We also analyzed cargo trends for the top five commodities by tonnage from 2001 to 2016 for domestic Great Lakes and St. Lawrence Seaway traffic. We selected the years 2001 to 2016 to capture trends over the past approximately 15 years. The selected commodities represent the majority of cargo traffic for both sources. Specifically, the top five domestic Great Lakes commodities made up 96 percent of total cargo tonnage from 2001 to 2016, while the five commodities for the St. Lawrence Seaway represented 71 percent of total St. Lawrence Seaway cargo tonnage for the same time period. We assessed the reliability of the data by reviewing documentation and interviewing Army Corps and U.S. and Canadian Seaway Corporation officials and determined these data were sufficiently reliable for our purpose of describing trends. To describe factors that have shaped recent trends, we reviewed available government and industry reports, such as the 2007 Great Lakes-Seaway study, the 2013 U.S. Department of Transportation Maritime Administration’s Status of the U.S.-Flag Great Lakes Water Transportation Industry, and the 2016 Conference of Great Lakes and St. Lawrence Governors and Premiers’ Strategy for the Great Lakes-St. Lawrence River Maritime Transportation System. To understand factors affecting recent trends and challenges to using the system, we interviewed 24 stakeholders representing a range of traditional and emerging system users and experts. We interviewed representatives from three carriers that transport goods on the system and three associations that represent current U.S., Canadian, and foreign vessel traffic: Interlake Steamship Company, FedNav, Spliethoff, Lake Carriers Association, Chamber of Marine Commerce, and the Shipping Federation of Canada. 
We interviewed four Great Lakes port stakeholders, including three ports that represent a range of cargo levels and mixes of cargo—Port of Duluth, Port of Cleveland, and Port of Indiana, Burns Harbor—and their association, the American Great Lakes Ports Association. We interviewed six stakeholders that represent traditional or emerging shipping uses (e.g., cruises and containers) on the system: Cleveland-Cliffs Inc.; Tata Steel; CHS Inc.; General Motors; American Iron and Steel Institute; and the Great Lakes Cruising Coalition. We interviewed two maritime experts and a freight forwarder that helps arrange shipping logistics: Dr. Walter Kemmsies, Martin Associates, and Midwest Transatlantic Lines. Lastly, we interviewed representatives from five Great Lakes-Seaway region and maritime stakeholder groups: Conference of Great Lakes and St. Lawrence Governors and Premiers, Great Lakes Commission, Council of the Great Lakes Region, Committee on the Marine Transportation System, and the American Pilots' Association. We grouped the challenges identified by stakeholders based on whether challenges affect traditional use of the system or emerging use of the system. Although the results are non-generalizable, stakeholders were selected to represent a range of known perspectives. To better understand the context of these challenges, we interviewed officials from the Army Corps, U.S. Seaway Corporation, U.S. Coast Guard, and Customs and Border Protection. To understand the agencies' progress on asset renewal efforts and how they measure performance of these efforts, we analyzed available information on projects, status, and estimated cost from both agencies. To assess the agencies' asset renewal progress, we reviewed the Army Corps' most recent asset renewal plan from 2016 with updates provided by the Army Corps in May 2018. Likewise, we analyzed information provided by U.S. Seaway Corporation officials in March 2018 on project-by-project expenditures from 2009 to 2016 and cost estimates from 2017 to 2023. Although we describe the agencies' cost estimates for their asset renewal efforts, it was beyond the scope of this engagement to check these cost estimates for accuracy and completeness. Likewise, although we describe the agencies' processes for selecting projects for funding, we did not verify these processes by, for example, selecting projects and ensuring the selection met the agencies' established procedures for selection. We reviewed relevant U.S. Seaway Corporation and Army Corps reports, available asset renewal plans, and documentation related to program goals and performance measures, such as annual financial and performance reports, from 2007 through 2018. We also visited the Soo locks at Sault Ste. Marie, Michigan, and the Seaway locks at Massena, New York, in summer 2017 and interviewed officials from both agencies. For example, within the Army Corps we interviewed officials from the Detroit District, headquarters' navigation and Asset Management Program offices, the Inland Navigation Design Center, and the Institute for Water Resources. We compared agencies' efforts to GAO's Standards for Internal Control in the Federal Government and to Leading Practices in Capital Decision-Making. Although the Great Lakes-Seaway system is binational, we did not evaluate the Canadian agencies, although we did interview officials from the Canadian St. Lawrence Seaway Management Corporation to understand its process for asset renewal. 
Appendix II: List of Asset Renewal Projects The Army Corps information below is based on the most recent asset renewal plan report from 2016 for the Soo locks, with updates provided by the Army Corps in May 2018. The U.S. Seaway Corporation information includes project-by-project expenditures for fiscal years 2009 through 2016 and cost estimates for work from fiscal years 2017 through 2023 provided by U.S. Seaway Corporation officials in March 2018. To align projects between the two agencies, we removed from the U.S. Seaway Corporation list: a dredging project (since the Army Corps information does not include dredging), one Seaway International Bridge project that lacked an associated cost estimate, and discontinued projects. It was beyond the scope of this review to check these cost estimates for accuracy and completeness. Projects listed in the agencies' plans include the following:
Replace lock utility lines and steam system, used for de-icing
Fabrication of second set of stoplogs to allow for full dewatering of the lock (Poe)
Replacement of quoin and miter blocks that help transfer load from the gate to the lock wall
Replace gate latches to protect the miter gates
Replace bevel gears that help move the miter gates
Replace protective relays for power plant
Replace switchgear assembly B, to assist with de-watering
Replace sluice gate valves for the Poe and Davis pump wells, which are used to dewater the locks
Repair west center pier, which forms the north wall of the approach channel (outer portion of the wall)
Modernize steam plant, which supports de-icing
Repair west center pier, which forms the north wall of the approach channel (inner portion of wall closest to lock chamber)
Rehabilitation of Davis pump well, which is used to dewater locks for winter maintenance
Rehabilitate ship arrestor booms that are designed to protect miter gates from vessel impact
Gate 1 coating/weld repairs (upstream end of lock)
New miter gate replacement (spare) for upstream end
Rehabilitation of Poe pump well used to dewater Poe lock for winter maintenance
Fabrication of replacement stoplogs (replacement for originals from initial Poe Lock construction)
Rehabilitate ship arrestor booms that are designed to protect miter gates
Rehabilitate lock fill/empty valve machinery
Rehabilitate gate skin plate and replace gate coating
Repair southwest pier, which serves as south upstream approach wall
Reinforce piers mooring bollards along approach wall (Southwest Pier)
Appendix III: Comments from the Department of Transportation Appendix IV: Comments from the Department of Defense Appendix V: GAO Contact and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to the contact named above, Matt Barranca (Assistant Director), Emily Larson (Analyst in Charge), Amy Abramowitz, Melissa Bodeau, Michelle Everett, Aaron Gluck, David Hooper, Alyssa Hundrup, SaraAnn Moessbauer, Joshua Ormond, and Shane Spencer made key contributions to this report.
Why GAO Did This Study The Great Lakes-Seaway system extends 2,300 miles and serves more than 100 ports in the United States and Canada. Four of the 17 locks that enable navigation are managed by the Army Corps (within the Department of Defense) and U.S. Seaway Corporation (within the Department of Transportation). The rest are managed by Canada. A 2007 U.S.-Canada study noted the system could absorb additional traffic and led to U.S. asset renewal plans to improve lock infrastructure condition. GAO was asked to review efforts to modernize the Great Lakes-Seaway. This report examines (1) shipping trends since 1980 and factors affecting recent trends, (2) stakeholder views on challenges to use, and (3) the extent to which the Army Corps and the U.S. Seaway Corporation have made progress on and measure performance of lock renewal efforts. GAO analyzed Seaway and Army Corps shipping data from 1980 through 2016 and the agencies' asset renewal plans, and interviewed 24 stakeholders, including port and shipper representatives, selected to represent a range of perspectives. What GAO Found The tons of cargo moved by domestic Great Lakes and St. Lawrence Seaway traffic have declined since 1980—by 32 and 48 percent, respectively, according to U.S. Army Corps of Engineers (Army Corps) and Saint Lawrence Seaway Development Corporation (U.S. Seaway Corporation) data. Stakeholders identified various factors contributing to this decrease, such as the U.S. economy's shift away from manufacturing. Traffic on the Great Lakes-St. Lawrence Seaway (Great Lakes-Seaway) is traditionally dominated by bulk commodities like iron ore, although stakeholders noted emerging uses like containerized cargo and cruises. Stakeholders identified a range of challenges to using the Great Lakes-Seaway—such as inadequate portside infrastructure for intermodal transfers of shipping containers—that together pose risks for both traditional bulk cargos and emerging uses. Although the U.S. Seaway Corporation's mission is to improve the system's utilization and reliability, the Corporation has not fully assessed the risks that challenges pose to the system's users. Establishing a process to assess and monitor risks, in accordance with federal internal control standards, would help inform future actions to address identified and emerging challenges. The U.S. Seaway Corporation and the Army Corps have made progress on lock asset renewal efforts, but the Army Corps lacks goals and measures to assess performance and outcomes of these efforts. According to estimates provided by the Army Corps, it has completed 18 projects totaling about $53 million to date, and has about $257 million in remaining and ongoing work through 2035. Meanwhile, the U.S. Seaway Corporation has completed 16 projects totaling $45 million and has almost $144 million in remaining and ongoing work through 2023. The Army Corps has not developed goals and measures to assess its asset renewal results, as the U.S. Seaway Corporation has done. As a result, the Army Corps lacks tools to assess the outcomes of these efforts and demonstrate the extent to which its asset renewal efforts have improved operational performance of the Soo Locks. What GAO Recommends GAO recommends that (1) the U.S. Seaway Corporation establish a process to identify, analyze, and monitor risks to the system's use to inform future actions, and (2) the Army Corps develop and adopt goals and measures to assess the performance of the Soo Locks and the outcomes of asset renewal efforts. 
The Departments of Transportation and Defense concurred with our recommendations and provided technical comments, which we incorporated as appropriate.
Background The Commercial Space Launch Act Amendments of 1988 established the foundation for the current U.S. policy to potentially provide federal payment for a portion of claims by third parties for injury, damage, or loss that results from a commercial launch or reentry accident. A stated goal of the act was to provide a competitive environment for the U.S. commercial space launch industry. The act also provided for, among other things, government protection against some losses—referred to as indemnification—while still minimizing the cost to taxpayers. All FAA-licensed commercial launches and reentries by U.S. companies, whether unmanned or manned and from the United States or overseas, are covered by federal indemnification for third-party damages that result from the launch or reentry. According to agency officials, in 2016 FAA issued five active licenses, which had an average third-party MPL of about $51 million and ranged from $10 million to $99 million. The amount of insurance coverage that FAA requires launch companies to purchase—the MPL value—is intended to reflect the greatest dollar amount of loss to third parties and the federal government for bodily injury and property damage that can be reasonably expected to result from a launch or reentry accident. FAA calculates separate MPL values for potential damages to third parties and the federal government. For each launch license that it issues, FAA determines MPL values for third parties with the intent of estimating the greatest dollar amount of losses that reasonably could be expected from a launch or reentry accident, which have no less than a 1 in 10 million chance of occurring. For damages to the federal government, FAA determines MPL values with the intent of estimating the greatest dollar amount of losses that reasonably could be expected from a launch or reentry accident, which have no less than a 1 in 100,000 chance of occurring. According to FAA, the agency defines these probability thresholds to estimate the federal government's exposure to losses above the MPL. Agency officials said that the current probability thresholds are set such that losses are very unlikely to exceed launch companies' private insurance and become potential costs for the government under CSLCA. FAA's process for determining the MPL value for a launch or reentry license generally includes three elements:
1. Number of casualties. Estimating the number of third-party casualties involves adding the number of direct and secondary casualties that could result from a launch accident. Direct casualty estimates include serious injuries and deaths. Secondary casualties include those resulting from fires and collapsing buildings.
2. Cost of casualties. FAA uses $3 million as an estimate of the average loss per casualty, which is multiplied by the number of estimated casualties.
3. Property damage. FAA applies a predetermined factor—which it recently changed from 50 percent to 25 percent—to the estimated cost of casualties to derive estimated losses from property damage.
The total MPL is equal to the estimated cost of casualties plus property damage. FAA has revised two components of its MPL methodology since our 2012 report. For example, in April 2016, the agency adopted a new method for estimating the number of casualties, known as the risk profile method. 
This method uses different tools to simulate a range of possible scenarios to create a distribution of potential casualty numbers and the simulated probability of different levels of casualty numbers. The risk profile method replaced FAA's "overlay method," a method the agency had used since the early 1990s that it said did not work well for launches of small launch vehicles in remote areas or for reentries. In addition, FAA reduced the factor it uses to estimate losses due to property damage, based on tests of a new process for estimating such losses that showed the previous factor was too high. GAO Previously Reviewed FAA's MPL Methodology We previously reviewed FAA's MPL methodology in 2012 and 2017. In 2012 we examined the U.S. government's indemnification policy, the federal government's potential costs for indemnification, and the effects of ending indemnification on the competitiveness of U.S. launch companies, among other issues related to FAA's MPL methodology. In 2017 we examined the extent to which FAA had revised its MPL methodology since our 2012 report to address previously cited weaknesses and the potential effect of any changes to that methodology on financial liabilities for the federal government. The findings and recommendations of those reports, including any unaddressed weaknesses, are discussed later in this report. FAA Did Not Fully Address the CSLCA's Three Mandated Requirements CSLCA required FAA to evaluate its MPL methodology incorporating three requirements, but the agency's report did not fully address these requirements. First, the act required FAA to ensure a balance of risk between launch companies and the federal government. However, agency officials told us that they did not re-evaluate the probability thresholds—which are used to divide the risk of loss between launch companies and the federal government—as part of evaluating the MPL methodology when implementing the risk profile method, due to resource constraints. Second, the act required FAA to consider the cost impact of implementing an updated MPL methodology, but the agency did not evaluate the impact of implementing its revised methodology on the direct costs to launch companies (insurance premiums) and to the federal government (indemnification liability). Third, the act required FAA to consult with the commercial space sector and insurance providers in evaluating its MPL methodology, but the agency did not consult such parties in response to the act. Without fully addressing CSLCA's mandated requirements, FAA cannot ensure that the federal government is not exposed to greater liability costs than intended or that launch companies are not required to purchase more insurance coverage than necessary. FAA Has Not Fully Evaluated the Balance between Government Liability Exposure and Industry Insurance Costs In its report, FAA states that implementing an updated MPL methodology in April 2016—the risk profile method—helps ensure that the federal government is not exposed to greater liability costs than intended and that launch companies are not required to purchase more insurance coverage than necessary, as required under CSLCA. Further, agency officials told us that their updated methodology is technically more valid and improves their ability to avoid overestimating MPL values (which can cause launch companies to purchase more insurance coverage than necessary) or significantly underestimating MPL values (which can expose the federal government to greater costs than intended). 
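To make the mechanics described above more concrete, the following Python sketch combines, in highly simplified form, the idea of a risk profile (a set of casualty levels with associated probabilities) with the three-element arithmetic used to convert casualties into an MPL value. The exceedance probabilities, the casualty counts, and the helper function are hypothetical illustrations, not FAA's actual tools or data; only the $3 million cost-of-casualty figure, the 25 percent property factor, and the 1-in-10-million third-party threshold are taken from the descriptions above.

    # Illustrative-only sketch of an MPL-style calculation; this is not FAA's
    # model, and the risk profile values below are assumed for illustration.
    COST_PER_CASUALTY = 3_000_000   # FAA's long-standing cost-of-casualty figure
    PROPERTY_FACTOR = 0.25          # property losses derived as 25% of casualty losses
    THIRD_PARTY_THRESHOLD = 1e-7    # "no less than a 1 in 10 million chance"

    # Hypothetical risk profile: probability that an accident produces at least
    # the stated number of third-party casualties (exceedance probabilities).
    risk_profile = [
        (0, 1.0e-3),
        (1, 4.0e-5),
        (5, 3.0e-6),
        (10, 4.0e-7),
        (20, 1.2e-7),
        (50, 2.0e-9),
    ]

    def casualties_at_threshold(profile, threshold):
        """Return the greatest casualty count whose probability of occurring
        is still at least the stated threshold."""
        eligible = [count for count, prob in profile if prob >= threshold]
        return max(eligible) if eligible else 0

    n_casualties = casualties_at_threshold(risk_profile, THIRD_PARTY_THRESHOLD)  # 20
    casualty_losses = n_casualties * COST_PER_CASUALTY     # $60 million
    property_losses = PROPERTY_FACTOR * casualty_losses    # $15 million
    mpl = casualty_losses + property_losses
    print(f"Third-party MPL under these assumptions: ${mpl:,.0f}")  # $75,000,000

In a structure like this, raising the cost-of-casualty amount, the property factor, or the casualty count selected at the threshold raises the MPL and therefore the insurance a launch company must purchase, while changes in the other direction leave more potential loss with the federal government. This is why the balance-of-risk question discussed in this report turns on exactly these parameters.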
While an improved model may provide a more realistic calculation of the MPL, by changing the resulting estimates it can also change the balance between the federal government's exposure to liability costs and the amount of insurance launch companies are required to purchase. For example, if the more realistic results produced by the revised methodology increased the MPL estimates, this would increase insurance costs for the launch companies and reduce the federal government's exposure, thereby shifting the balance of costs between the two and suggesting a reevaluation of the thresholds. In addition, FAA officials told us that they had not reevaluated the probability thresholds upon implementing the revised MPL methodology, although defining these thresholds is their primary mechanism for adjusting the balance of risk between launch companies and the federal government. Agency officials acknowledged that an examination of the thresholds' continued appropriateness would be warranted in the future. However, they told us that changing the probability thresholds would require significant effort because it would require them to change federal regulations and that resources are currently allocated to other rulemaking priorities. Nevertheless, without evaluating the appropriateness of the probability thresholds as part of the mandated evaluation of the MPL methodology, FAA cannot ensure that the federal government is not exposed to greater liability costs than intended or that launch companies are not required to purchase more insurance coverage than necessary. FAA Evaluated Only Indirect Costs to Industry and Government of Implementing a New Methodology CSLCA also required FAA to consider the cost impact on both the commercial space launch industry and the federal government of implementing an updated MPL methodology. In its report to Congress, the agency discussed indirect costs to launch applicants and the federal government. For example, FAA discussed the indirect data-collection burden that implementing the risk profile method places on launch company applicants and FAA analysts. The report states that the risk profile method requires more data from a launch applicant than the previous method, but that the added burden is minimal because the information is similar to the type of information required by FAA for a risk analysis. Agency officials also said that the risk profile method requires more of an FAA analyst's time than the overlay method, but that the added burden is minimal because the work done by FAA on risk analysis provides much of the foundation for an MPL analysis. However, FAA's report did not include an evaluation of the direct costs to launch companies and the federal government of implementing an updated MPL methodology. The report identifies the direct cost to the launch industry as insurance premiums and the direct costs to the federal government as potential indemnification payments. Agency officials also told us that the agency does not track commercial space launch insurance costs, and that they do not have meaningful insights on insurance premiums paid by commercial launch companies. FAA officials told us that they only have a general notion of insurance premiums because the industry is reluctant to share such information. FAA officials also told us that, outside of the work done for the report, they have not evaluated the economic implications for launch companies of implementing an updated MPL methodology. 
Without evaluating direct costs to both the launch companies and the federal government, FAA will be limited in its ability to consider the cost impact on both the industry and the federal government of implementing an updated methodology. FAA Obtained Limited Input from the Commercial Space Sector and Insurance Providers Although CSLCA required FAA to consult with the commercial space sector and insurance providers in evaluating its MPL methodology for the mandated report, it obtained limited input. For example, FAA officials told us that they obtained input from their Commercial Space Transportation Advisory Committee (COMSTAC) in April 2016 about what to include in their report to Congress, but did not consult with the commercial space sector and insurance providers to evaluate the MPL methodology in response to CSLCA. FAA officials also said that, to respond to CSLCA's consultation requirement, they did not think they needed to repeat the consultations they conducted in 2013. In January 2013, the agency solicited input from COMSTAC's Business/Legal Working Group about how to best conduct a review of FAA's methodology for calculating MPL, in response to our July 2012 report. FAA also briefed the Business/Legal Working Group in May 2013 to solicit input on MPL methodologies, including the risk profile method. In the January 2013 meeting, a COMSTAC member suggested several contractors for a study by outside experts of the complete MPL methodology, and FAA subsequently hired one of these contractors to develop the risk profile method that it implemented in April 2016. However, the agency did not solicit input from COMSTAC about its risk profile methodology prior to its April 2016 implementation or after CSLCA mandated the evaluation in November 2015. As a result, FAA lacks input on the effect of its revised MPL methodology on launch companies and the federal government, making it difficult to evaluate the balance of risk between the two. FAA's Revised MPL Methodology Does Not Fully Address Certain Previously Identified Weaknesses Our 2012 report identified concerns with all three components of FAA's MPL methodology: estimating the number of casualties, estimating the cost of casualties, and deriving estimated property damage costs from estimated casualty costs. In that report we recommended that the agency reassess its methodology, including the reasonableness of several key elements. As noted in our 2017 report, FAA has since made improvements to its methodology. However, it still has not updated the cost of a casualty. In addition, in our 2017 report we also noted that there are instances where deriving estimated property damage from estimated casualty costs is inappropriate. As of November 2017, FAA did not have guidance to identify such instances or to guide decisions on which tools to use in developing the MPL estimate. FAA Has Made Improvements to Its MPL Methodology but Has Not Updated the Cost-of-Casualty Amount FAA has taken steps designed to improve two of three elements of its MPL methodology, including revising its methodology for estimating the number of potential casualties for a given launch and changing the factor it uses to derive estimated property damage from estimated casualties. However, the agency has not updated the third element, the amount it uses for the cost of an individual casualty, leaving a previously identified weakness unaddressed. Our 2012 report raised concerns with each of the three components of FAA's MPL calculation methodology. 
First, we found that FAA’s method for estimating the number of casualties involved use of a single loss scenario instead of applying the insurance industry’s standard practice of catastrophe modeling, and that the agency’s method might significantly understate the number of potential casualties. Catastrophe modeling, unlike the single-loss approach, generally estimates losses by using various tools to simulate tens of thousands of scenarios to create a distribution of potential losses and the simulated probability of different levels of loss. Second, we reported that FAA had been using an outdated and likely understated figure of $3 million to estimate the cost of a single casualty—including injury or death—which Office of Commercial Space Transportation officials said has not been updated since they began using it in 1988. Third, we reported that the agency’s approach of estimating potential property damage by adding a flat 50 percent to the estimated casualty damage could lead to estimates that were too high in some cases. Given these weaknesses, we recommended that FAA reassess its MPL methodology, including assessing the reasonableness of the cost-of- casualty amount and other assumptions used. Because the agency took actions to assess its MPL methodology, we closed the recommendation as implemented. In March 2017 we reported that FAA had taken steps to address weaknesses in two of these three areas. Specifically, we reported that FAA’s adoption of the risk profile method in April 2016 had improved its estimates of the number of potential casualties associated with a particular license launch. In addition, we reported that the agency had revised the factor it uses to estimate losses from property damage in the MPL calculation from 50 percent to 25 percent. This change has resulted in property damage estimates that FAA officials believe are still conservative but more realistic than previous estimates. However, in our March 2017 report we also determined that FAA had not yet addressed weaknesses in the cost-of-casualty amount we had previously identified; despite the conclusion by a contractor it had hired to study the cost-of-casualty that it was too low. Agency officials told us that they had not addressed this weakness because of other priorities. Given the significance of the cost-of-casualty amount to the MPL calculations, we recommended that FAA prioritize the development of a plan to address the identified weakness in the cost-of-casualty amount, including setting time frames for action, and update the amount based on current information. In October 2017, FAA officials told us that they had not yet updated the cost-of-casualty because they have continued to prioritize completing other work with their limited resources, such as reviewing launch applications and fulfilling other safety responsibilities. As a result, our recommendation remains open. FAA officials told us that they have identified potential steps to update the cost-of-casualty amount, including seeking public input on whether and how to revise the amount, but that they do not expect to make a decision on whether to make any changes to the cost-of-casualty amount until June 30, 2018, at the earliest. FAA officials told us that in order to prioritize the development of a plan to address the identified weakness in the cost-of-casualty amount they will need to consult with both the commercial space and insurance industries about the necessity and implications of any potential increase in the cost-of-casualty amount. 
Agency officials said that they plan to conduct such consultations through COMSTAC. However, because COMSTAC was just reestablished in June 2017 after not having been active since November 2016 and new members had not been approved as of October 2017, the anticipated decision date of June 2018 could be further delayed. As we reported in March 2017, an understated cost-of-casualty amount can lead to an inaccurate loss calculation, which in turn understates the amount of insurance a launch company must obtain. This could increase the federal government's potential exposure, as the insurance amount would be less than the potential losses associated with the launch activity and the property would be inadequately protected. Because of this potential exposure, we maintain that addressing this weakness is a priority. FAA Does Not Have Guidance for When to Estimate Property Damage Separately from the Number of Casualties and Which Analytical Tool to Use As noted above, in our 2012 report we raised concerns about the first element of FAA's MPL methodology, which is estimating the number of potential casualties. FAA officials said that they have implemented two tools for estimating the number of potential casualties, and that each tool requires a different level of resources and is more appropriate for different launch scenarios. The Range Risk Analysis Tool creates physics-based simulations of possible accidents using launch vehicle data, such as launch trajectory and types of failures, and assigns each simulated accident a probability of occurrence based on the failure rates of the different elements of the launch vehicle. According to agency officials, the Range Risk Analysis Tool is a comprehensive, high-fidelity tool that is labor intensive and is the most appropriate tool for coastal launch sites, which are often located in heavily populated areas. The Risk Estimator Sub-orbital and Orbital Launch Vehicle and Entry tool, in contrast to the Range Risk Analysis Tool, is a medium-fidelity tool that can be used for low-risk operations that do not need the use of a high-fidelity tool, such as launches from sites located in very sparsely populated areas and reentry operations. According to FAA, this tool significantly reduces the time required to estimate the risk from launch and reentry vehicle operations. In our 2017 report, we also reiterated that there are cases where the third element of FAA's methodology, deriving estimated property damage from estimated casualties, could lead to misleading MPL calculations. Specifically, in March 2017 we reported that estimating losses from property damage as a percentage of losses from casualties could lead to overestimates. For example, FAA's contractor found that, if a launch accident affected a residential area, the agency's practice of estimating property damage based on casualties would likely overstate property damages because residential structures have relatively low values compared to losses from casualties. We also reported in March 2017 that in some accidents the number of casualties may be low but property losses could still be very large, in which case FAA's practice of estimating property losses based on casualties would likely understate potential property damage. For example, a launch vehicle could strike an unoccupied structure that is very expensive, such as a neighboring launch complex. 
Agency officials said that while deriving property losses from casualty losses is a simpler method that may be an effective use of limited FAA resources, it could be inappropriate in scenarios where the number of casualties might be low but property losses could still be very large. In October 2017, agency officials said that FAA had not developed guidance for determining, for a given launch license, which of the available tools would be most appropriate to estimate the number of potential casualties, and whether it would be more appropriate to estimate property losses separately rather than derive them from estimated casualties. While FAA officials said they believe their current decision process is adequate and that they do not need more formal guidance at this time, they also told us that they were in the process of developing internal guidance on the most appropriate tool to use for future launches. The officials said that they did not have a projected completion date for the guidance, primarily because the agency has other priorities and resource limitations. As noted earlier, these priorities include reviewing commercial space launch license applications and managing program safety. Federal internal control standards state that, as part of an entity’s risk assessment component, management should identify, analyze, and respond to risks to achieving objectives. For example, the standards state that management should design control activities in response to the entity’s objectives and risks to achieve an effective internal control system. Without such guidance, FAA could face challenges in ensuring that it is using the most appropriate method to calculate an MPL for a given launch and is making the most efficient use of its resources. Such guidance could become more important as the number of commercial space launches increases, potentially creating greater demands on its resources. We have previously reported that the commercial space launch industry has experienced significant growth in the number and complexity of launches in the past half-decade. FAA has also reported that its licensed launches have increased 60 percent and industry revenue has increased 471 percent since 2012. Conclusions FAA’s MPL methodology is critical in balancing the encouragement of the U.S. commercial space industry with the need to manage the federal government’s risk exposure because it determines how much risk each party will bear for third-party damages resulting from potential space launch accidents. However, despite changes to the methodology, the probability threshold that the agency uses to achieve this balance of risk has been the same since the 1990s, and has not been reviewed for appropriateness. In addition, while FAA evaluated the effect of its MPL methodology on the indirect costs of launch companies and the federal government, it did not similarly evaluate direct costs. Further, although FAA has obtained input from some stakeholders on certain aspects of its MPL methodology, it has not consulted with launch providers and insurance companies to evaluate effects on key potential costs to launch companies and the federal government, as required under CSLCA. FAA officials told us that resource issues and pursuing other priorities have prevented them from taking these actions. However, the longstanding nature of these issues, as well as their importance in determining the federal government’s financial exposure, makes their completion a priority. 
FAA has also begun improving other aspects of its MPL process, but important actions remain incomplete. For example, the cost of a casualty, a key component of the methodology, has not been updated since 1988. While FAA has identified potential steps to update this amount, it has not implemented these steps and our March 2017 recommendation to prioritize the updating of this amount remains open. Further, agency officials said they have begun to develop internal guidance on how to determine which methodological tools should be used for a given launch, but are not sure when this process will be completed. These are important steps to help ensure the validity of the MPL methodology and the results obtained for each launch, which in turn determine the balance between the amount of insurance launch companies are required to purchase and the potential financial exposure for the federal government. Recommendations for Executive Action We are making the following four recommendations to FAA: The FAA Administrator should fulfill the CSLCA mandate to include ensuring a balance of risk between the federal government and launch companies as part of FAA’s MPL methodology evaluation by reexamining the current probability thresholds. (Recommendation 1) The FAA Administrator should fulfill the CSLCA mandate to analyze the cost impact of implementing its revised MPL methodology by evaluating the impact on the direct costs of launch companies and the federal government. (Recommendation 2) The FAA Administrator should fulfill the CSLCA mandate to evaluate its MPL methodology in consultation with the commercial space sector and insurance providers by consulting with those entities on the cost impact of its revised MPL methodology, including an updated cost-of-casualty amount, on the launch industry and the federal government. (Recommendation 3) The FAA Administrator should establish an estimated completion date for developing and implementing a plan to establish guidance on the most appropriate MPL methodologies and tools to use for each launch. (Recommendation 4) Agency Comments We provided a draft of this report to the Department of Transportation for their review and comment. In its comments, reproduced in appendix I, the Department of Transportation concurred with our recommendations. The Department of Transportation also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to interested congressional committees and the Secretary of the Department of Transportation. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions or would like to discuss this work, please contact Alicia Puente Cackley at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Transportation Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Patrick Ward (Assistant Director), Jessica Artis, Isidro Gomez (Analyst in Charge), Courtney La Fountain, Maureen Luna-Long, Jessica Sandler, Jennifer Schwartz, Joseph Silvestri, and Shana Wallace made key contributions to this report.
Why GAO Did This Study The federal government shares liability risks with the commercial space launch industry for accidents that result in damages to third parties or federal property. This arrangement requires space launch companies to have a specific amount of insurance to cover these damages. The government is potentially liable for damages above that amount, up to a cap GAO estimated to be $3.1 billion in 2017, subject to appropriations in advance. CSLCA, enacted in 2015, directed the Department of Transportation, of which FAA is a part, to evaluate its MPL methodology and, if necessary, develop a plan to update that methodology. The act also included a provision requiring GAO to assess FAA's evaluation and any actions needed to update the methodology. This report discusses the extent to which (1) FAA's evaluation report addresses the requirements in CSLCA and (2) FAA has addressed previously identified weaknesses in the MPL methodology. GAO reviewed documents and interviewed FAA on its loss methodology evaluation and actions to address weaknesses. What GAO Found The Federal Aviation Administration's (FAA) report evaluating its maximum probable loss (MPL) methodology did not fully address the evaluation and consultation requirements specified by the U.S. Commercial Space Launch Competitiveness Act (CSLCA). Balance of Risk. CSLCA required FAA to include ensuring that the federal government is not exposed to greater indemnification costs and that launch companies are not required to purchase more insurance coverage than necessary as a result of FAA's MPL methodology. FAA said that it ensured this balance by improving its methodology, but it did not reevaluate its probability thresholds after revising its methodology. These thresholds are used to divide the risk of loss between launch companies and the government. Impact on Costs. The act required FAA to consider the costs to both the industry and the federal government of implementing an updated methodology. FAA's report discussed the impact on indirect costs, such as data collection, but did not discuss direct costs: insurance premiums for launch companies and indemnification liability for the federal government. Consultation. The act also required FAA to consult with the commercial space sector and insurance providers in evaluating its MPL methodology in accordance with the preceding requirements. While the agency consulted with some stakeholders, these consultations were limited in scope. FAA officials said they have not been able to take the actions needed to fully satisfy the mandated elements because of issues such as resource limitations and the lack of available data. However, by not resolving these issues, FAA lacks assurance that launch companies are not purchasing more insurance than needed or that the federal government is not being exposed to greater indemnification costs than expected. FAA has addressed two of three previously identified weaknesses in its MPL methodology but has not yet dealt with the remaining weakness. Specifically, the agency has revised its methodology for estimating the number of potential casualties for a launch and changed the factor it uses to derive estimated property damage from estimated casualties. However, FAA has not updated the amount used for the cost of an individual casualty. GAO recommended in a March 2017 report (GAO-17-366) that FAA update this amount. 
Not doing so could understate the amount of insurance launch companies are required to purchase, exposing the federal government to excess risk. GAO also determined that while FAA has two tools and methods it can use in making its MPL estimates, it does not have guidance on determining which are most appropriate for a given launch scenario. For example, one tool is more comprehensive but also labor intensive to use, while the other is inappropriate for certain launch scenarios and could result in misleading MPL amounts. Officials said they have begun to create such guidance but do not have an estimated completion date. Without such guidance, FAA cannot ensure that the most appropriate MPL methodology is used for each launch. What GAO Recommends FAA should fully address mandated requirements in evaluating its MPL—probability thresholds, direct costs, and stakeholder consultations— and establish an estimated completion date for developing guidance on tools and methods to use for specific launch scenarios. The Department of Transportation concurred with the recommendations, and provided technical comments.
The Immigration Court Backlog Grew and EOIR Has Faced Long-Standing Management Challenges The Immigration Courts’ Caseload and Case Backlog Grew As Immigration Courts Completed Fewer Cases We reported in June 2017 that our analysis of EOIR’s annual immigration court system caseload—the number of open cases before the court during a single fiscal year—showed that it grew 44 percent from fiscal years 2006 through 2015 due to an increase in the case backlog, while case receipts remained steady and the immigration courts completed fewer cases. For the purpose of our analysis, the immigration courts’ annual caseload comprised three parts: (1) the number of new cases filed by DHS; (2) the number of other case receipts resulting from remands from the Board of Immigration Appeals and motions to reopen cases, reconsider prior decisions, or recalendar proceedings; and (3) the case backlog—the number of cases pending from previous years that remain open at the start of a new fiscal year. During this 10-year period, the immigration courts’ overall annual caseload grew from approximately 517,000 cases in fiscal year 2006 to about 747,000 cases in fiscal year 2015, as shown in figure 1. We further reported in June 2017 that, according to our analysis, total case receipts remained about the same in fiscal years 2006 and 2015 but fluctuated over the 10-year period, with new case receipts generally decreasing and other case receipts generally increasing. Over the same period, EOIR’s case backlog more than doubled. Specifically, immigration courts had a backlog of about 212,000 cases pending at the start of fiscal year 2006 and the median pending time for those cases was 198 days. By the beginning of fiscal year 2009, the case backlog declined slightly to 208,000 cases. From fiscal years 2010 through 2015, the case backlog grew an average of 38,000 cases per year. At the start of fiscal year 2015, immigration courts had a backlog of about 437,000 cases pending and the median pending time for those cases was 404 days. The increase in the immigration court case backlog occurred as immigration courts completed fewer cases annually. In particular, the number of immigration court cases completed annually declined by 31 percent from fiscal year 2006 to fiscal year 2015—from about 287,000 cases completed in fiscal year 2006 to about 199,000 completed in 2015. According to our analysis, while the number of cases completed annually declined, the number of immigration judges increased between fiscal year 2006 and fiscal year 2015. This resulted in a lower number of case completions per immigration judge at the end of the 10-year period. Additionally, we reported in June 2017 that initial immigration court case completion time increased more than fivefold between fiscal year 2006 and fiscal year 2015. Overall, the median initial completion time for cases increased from 43 days in fiscal year 2006 to 286 days in fiscal year 2015. However, case completion times varied by case type and detention status. For example, the median number of days to complete a removal case, which comprised 97 percent of EOIR’s caseload for this time period, increased by 700 percent, from 42 days in fiscal year 2006 to 336 days in fiscal year 2015. In contrast, the median time to complete a credible fear case, which comprised less than 1 percent of EOIR’s caseload during this period, was 5 days in both fiscal year 2006 and fiscal year 2015. 
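The caseload components described above follow a simple accounting relationship, sketched below in Python using rounded figures reported in this testimony. Combined receipts are inferred as caseload minus the starting backlog, and the closing comment is an approximation rather than EOIR's exact accounting.

```python
# Annual caseload, as described above, consists of new case receipts, other
# case receipts, and the backlog carried over from prior years. Figures are
# rounded values reported in this testimony; receipts combine new and other.

years = {
    2006: {"caseload": 517_000, "backlog_at_start": 212_000, "completions": 287_000},
    2015: {"caseload": 747_000, "backlog_at_start": 437_000, "completions": 199_000},
}

for fy, data in years.items():
    receipts = data["caseload"] - data["backlog_at_start"]
    print(f"FY{fy}: receipts ~{receipts:,}; completions ~{data['completions']:,}; "
          f"backlog at start ~{data['backlog_at_start']:,}")

# Approximately, next year's backlog = this year's backlog + receipts - completions,
# so roughly steady receipts combined with fewer completions grow the backlog.
```

The inferred receipts are roughly steady at about 305,000 to 310,000 cases while completions fell, which is consistent with the backlog growth discussed above.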
Initial case completion times for both detained and non-detained respondents more than quadrupled from fiscal year 2006 through fiscal year 2015. The median case completion time for non-detained cases, which comprised 79 percent of EOIR’s caseload from fiscal year 2006 to fiscal year 2015, grew more than fivefold from 96 days to 535 days during this period. Similarly, the median number of days to complete a detained case, which judges are to prioritize on their dockets, quadrupled over the 10-year period, increasing from 7 days in fiscal year 2006 to 28 days in fiscal year 2015. EOIR officials, immigration court staff, DHS attorneys, and other experts and stakeholders we interviewed provided various potential reasons why the case backlog may have increased and case completion times slowed in recent years. These reasons included: a lack of court personnel, such as immigration judges, legal clerks, and other support staff; insufficient funding to appropriately staff the immigration courts; a surge in new unaccompanied children cases, beginning in 2014, which may take longer to adjudicate than other types of cases; frequent use of continuances—temporary case adjournments until a different day or time—by immigration judges; and issues with the availability and quality of foreign language translation. EOIR Has Initiated Actions to Improve Its Management of the Immigration Courts, but Has Faced Long-Standing Challenges We also reported in June 2017 that EOIR has faced long-standing management and operational challenges. In particular, we identified challenges related to EOIR’s workforce planning, hiring, and technology utilization, among other things. We recommended actions to improve EOIR’s management in these areas. EOIR generally concurred and has initiated actions to address our recommendations. However, EOIR needs to take additional steps to fully implement our recommendations to help strengthen the agency’s management and reduce the case backlog. Workforce planning. In June 2017, we reported that EOIR estimated staffing needs using an informal approach that did not account for long- term staffing needs, reflect EOIR’s performance goals, or account for differences in the complexity of court cases. For example, in developing its staffing estimate, EOIR did not calculate staffing needs beyond the next fiscal year or take into account resources needed to achieve the agency’s case completion goals, which establish target time frames in which immigration judges are to complete a specific percentage of certain types of cases. Furthermore, we found that, according to EOIR data, approximately 39 percent of all immigration judges were eligible to retire as of June 2017, but EOIR had not systematically accounted for these impending retirements in its staffing estimate. At the time of our review, EOIR had begun to take steps to account for long-term staffing needs, such as by initiating a workforce planning report and a study on the time it takes court staff to complete key activities. However, we found that these efforts did not align with key principles of strategic workforce planning that would help EOIR better address current and future staffing needs. EOIR officials also stated that the agency had begun to develop a strategic plan for fiscal years 2018 through 2023 that could address its human capital needs. We recommended that EOIR develop and implement a strategic workforce plan that addresses key principles of strategic workforce planning. EOIR agreed with our recommendation. 
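A strategic workforce plan of the kind recommended above would ordinarily project staffing needs over several years. The sketch below is purely illustrative; the starting headcount, hiring level, and attrition rate are hypothetical inputs, not EOIR estimates.

```python
# Hypothetical multi-year projection of onboard immigration judges under
# constant hiring and attrition. All inputs are illustrative assumptions.

def project_judges(starting_judges, annual_hires, annual_attrition_rate, years):
    """Yield (year, projected onboard judges) under the stated assumptions."""
    judges = starting_judges
    for year in range(1, years + 1):
        departures = round(judges * annual_attrition_rate)
        judges = judges + annual_hires - departures
        yield year, judges

for year, judges in project_judges(starting_judges=330, annual_hires=50,
                                   annual_attrition_rate=0.07, years=5):
    print(f"Year {year}: projected onboard judges = {judges}")
```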
In February 2018, EOIR officials told us that they had established a committee and working group to examine the agency’s workforce needs and would include workforce planning as a key component in EOIR’s forthcoming strategic plan. Specifically, EOIR officials stated that the agency had established the Immigration Court Staffing Committee in April 2017 to examine how to best leverage its existing judicial and court staff workload model to address its short- and long-term staffing needs, assess the critical skills and competencies needed to achieve future programmatic results, and develop strategies to address human capital gaps, among other things. In February 2018, EOIR officials stated that the agency replaced this committee, which had completed its work, with a smaller working group of human resource employees charged with addressing the agency’s strategic workforce planning. These are positive steps, but to fully address our recommendation, EOIR needs to continue to develop, and then implement a strategic workforce plan that: (1) addresses the agency’s short- and long-term staffing needs; (2) identifies the critical skills and competencies needed to achieve future programmatic results; and (3) includes strategies to address human capital gaps. Once this strategic workforce plan is completed, EOIR needs to monitor and evaluate the agency’s progress toward its human capital goals. Hiring. Additionally, in our June 2017 report, we found that EOIR did not have efficient practices for hiring new immigration judges, which has contributed to immigration judges being staffed below authorized levels and to staffing shortfalls. For example, in fiscal year 2016, EOIR received an appropriation supporting 374 immigration judge positions but had 289 judges on board at the end of the fiscal year. EOIR officials attributed these gaps to delays in the hiring process. Our analysis of EOIR hiring data supported their conclusion. Specifically, we found that from February 2014 through August 2016, EOIR took an average of 647 days to hire an immigration judge—more than 21 months. As a result, we recommended that EOIR (1) assess the immigration judge hiring process to identify opportunities for efficiency; (2) use the assessment results to develop a hiring strategy that targets short- and long-term human capital needs; and (3) implement any corrective actions related to the hiring process resulting from this assessment. In response to our report, EOIR stated that it concurred with our recommendation and was implementing a new hiring plan as announced by the Attorney General in April 2017 intended to streamline hiring. Among other things, EOIR stated that the new hiring plan sets clear deadlines for assessing applicants moving through different stages of the process and for making decisions on advancing applicants to the next stage, and allows for temporary appointments for selected judges pending full background investigations. In February 2018, EOIR indicated to us that it had begun to use the process outlined in its hiring plan to fill judge vacancies. The Attorney General also announced in April 2017 that the agency would commit to hire an additional 50 judges in 2018 and 75 additional judges in 2019. In January 2018, EOIR officials told us that the agency had a total of 330 immigration judges, an increase of 41 judges since September 2016. However, EOIR remained below its fiscal year 2017 authorized level of 384 immigration judges based on funding provided in fiscal years 2016 and 2017. 
Additionally, the Consolidated Appropriations Act, 2018 provided funding for EOIR to hire at least 100 additional immigration judge teams, including judges and supporting staff, with a goal of fielding 484 immigration judge teams nationwide by 2019. In September 2018, EOIR reported it had a total of 351 immigration judges and was continuing to hire additional judges. Hiring additional judges is a positive step; however, EOIR has not assessed its hiring process to identify opportunities for efficiency, and we found in our June 2017 report that EOIR was not aware of the factors most affecting its hiring process. For example, we reported that EOIR officials attributed the length of the hiring process to delays in the Federal Bureau of Investigation background check process, which is largely outside of EOIR’s control. However, our analysis found that while background checks accounted for an average of 41 days from fiscal year 2015 through August 2016, other processes within EOIR’s control accounted for a greater share of the total hiring time. For example, for the same period our analysis found that an average of 135 days elapsed between the date EOIR posted a vacancy announcement and the date EOIR officials began working to fill the vacancy. By assessing its hiring process, EOIR could better ensure that it is accurately and completely identifying opportunities for efficiency. To fully address our recommendation, EOIR will need to continue to improve its hiring process by (1) assessing the prior hiring process to identify opportunities for efficiency; (2) developing a hiring strategy targeting short- and long-term human capital needs; and (3) implementing corrective actions in response to the results of its assessment of the hiring process. Technology utilization. In June 2017 we also reported on EOIR’s technology utilization, including the agency’s oversight of the ongoing development of a comprehensive electronic-filing (e-filing) capability—a means of transmitting documents and other information to immigration courts through an electronic medium, rather than on paper. EOIR identified the implementation of an e-filing system as a goal in 2001, but had not, as of September 2018, fully implemented this system. In 2001, EOIR issued an executive staff briefing for an e-filing system that stated that only through a fully electronic case management and filing system would the agency be able to accomplish its goals. This briefing also cited several benefits of an e-filing system, including, among other things, reducing the data entry, filing, and other administrative tasks associated with processing paper case files; and providing the ability to file court documents from private home and office computers. As we reported in June 2017, EOIR initiated a comprehensive e-filing effort in 2016—the EOIR Court and Appeals System (ECAS)—for which EOIR had documented policies and procedures governing how its primary ECAS oversight body—the ECAS Executive Committee—would oversee ECAS through the development of a proposed ECAS solution. However, we found that EOIR had not yet designated an entity to oversee ECAS after selection of a proposed solution during critical stages of its development and implementation. We recommended that in order to help ensure EOIR meets its cost and schedule expectations for ECAS, the agency identify and establish the appropriate entity to oversee ECAS through full implementation. 
EOIR concurred and stated that it had selected and convened the EOIR Investment Review Board to serve as the ECAS oversight body with the EOIR Office of Information Technology directly responsible for the management of the ECAS program. EOIR officials told us in February 2018 that the board convened in October 2017 and January 2018 to discuss, among other things, the ECAS program. However, as we reported in June 2017, EOIR officials previously told us that the EOIR Investment Review Board was never intended to oversee ECAS implementation due to the detailed nature of this system’s implementation. As of September 2018, EOIR has not demonstrated its selection of, or how the EOIR Investment Review Board is to serve as the oversight body for ECAS. Additionally, we recommended in June 2017 EOIR develop and implement a plan that is consistent with best practices for overseeing ECAS to better position the agency to identify and address any risks and implement ECAS in accordance with its cost, schedule, and operational expectations. As of September 2018, EOIR has not indicated that it has developed such a plan. ATD Participation Increased and Costs Less than Detention; ICE Established Program Performance Measures Participation in the ATD Program Increased and Average Daily Cost of the Program Was Lower than the Average Daily Cost of Detention In November 2014 we reported that the number of foreign nationals who participated in the ATD program increased from 32,065 in fiscal year 2011 to 40,864 in fiscal year 2013 in part because of increases in either enrollments or the average length of time foreign nationals spent in one of the program’s components. For example, during this time period, the number of foreign nationals enrolled in the component of the program that was run by a contractor who maintained in-person contact with the foreign national and monitored the foreign national with either GPS equipment or a telephonic reporting system, increased by 60 percent. In addition, the average length of time foreign nationals spent in the other component of the program, which offered a lower level of supervision at a lower contract cost but still involved ICE monitoring of foreign nationals using either telephonic reporting or GPS equipment provided by a contractor, increased by 80 percent—from about 10 months to about 18 months. ICE officials stated that how long a foreign national is in the ATD program before receiving a final decision on his or her immigration proceedings depends on how quickly EOIR can process immigration cases. We also found in our November 2014 report that the average daily cost of the ATD program was $10.55 in fiscal year 2013, while the average daily cost of detention was $158. While our analyses showed that the average daily cost of the ATD program was significantly less than the average daily cost of detention, the length of immigration proceedings affected the cost-effectiveness of the ATD program to varying extents under different scenarios. As previously discussed, immigration judges are to prioritize detained cases, and our June 2017 report found that EOIR data showed that median case completion times for non-detained cases were greater than for detained cases. Accordingly, the length of immigration proceedings for foreign nationals in detention may be shorter than those in the ATD program. 
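The break-even relationship underlying this comparison can be sketched as follows. The daily costs are the fiscal year 2013 averages cited above; the detention-length input is a parameter, and the function is an illustration rather than the model used in the analyses described next.

```python
import math

ATD_DAILY_COST = 10.55        # dollars per participant per day, FY2013 average
DETENTION_DAILY_COST = 158.0  # dollars per detainee per day, FY2013 average

def atd_breakeven_day(avg_detention_days):
    """First whole day at which cumulative ATD cost exceeds the cost of an average detention stay."""
    detention_total = DETENTION_DAILY_COST * avg_detention_days
    return math.ceil(detention_total / ATD_DAILY_COST)

# With a 29-day average detention stay (the fiscal year 2013 average ICE reported),
# cumulative ATD costs do not exceed the average cost of detention until about day 435.
print(atd_breakeven_day(29))  # 435
```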
Specifically, in our November 2014 report, we conducted two analyses to estimate when the cost of keeping foreign nationals in the ATD program would have surpassed the cost of detaining a foreign national in a facility. Under our first analysis, we considered the average costs of ATD and detention and the average length of time foreign nationals in detention spent awaiting an immigration judge’s final decision. We found that the ATD program would have surpassed the cost of detention after a foreign national was in the program for 1,229 days in fiscal year 2013— significantly longer than the average length of time foreign nationals spent in the ATD program in that year (383 days). In our second analysis, we considered the average costs of ATD and detention and the average length of time foreign nationals spent in detention—regardless of whether they had received a final decision from an immigration judge—since some foreign nationals may not be in immigration proceedings or may not have reached their final hearing before ICE released them from detention. ICE reported that the average length of time that a foreign national was in detention in fiscal year 2013 was 29 days. Using this average, we calculated the average length of time foreign nationals could have stayed in the ATD program before they surpassed the cost of detention would have been 435 days in fiscal year 2013. ICE Established ATD Performance Measures, and Took Actions to Ensure the Measures Monitored All Foreign Nationals Enrolled in the Program We found in our November 2014 report that ICE established two program performance measures to assess the ATD program’s effectiveness in (1) ensuring foreign national compliance with court appearance requirements and (2) ensuring removals from the United States, but limitations in data collection hindered ICE’s ability to assess overall program performance. Compliance with court appearances. For the component of the ATD program managed by the contractor, data collected by the ATD contractor from fiscal years 2011 through 2013 showed that over 99 percent of foreign nationals with a scheduled court hearing appeared at their scheduled court hearings while participating in the ATD program. The court appearance rate dropped slightly to over 95 percent of foreign nationals with a scheduled final hearing appearing at their hearing. However, we reported that ICE did not collect similar court compliance data for foreign nationals in the component of the ATD program that ICE was responsible for managing—which accounted for 39 percent of the overall ATD program in fiscal year 2013. As a result, we recommended that ICE collect and report data on foreign national compliance with court appearance requirements for participants in this component of the ATD program. As of June 2017, ICE reported that the ATD contractor was collecting data on foreign nationals’ court appearance compliance for foreign nationals in both components of the ATD program, and at that time, was collecting data for approximately 88 percent of foreign nationals that were awaiting a hearing. ICE officials stated that they did not expect that 100 percent of foreign nationals in the ATD program would be tracked for court appearance compliance by the contractor because there may be instances where ICE has chosen to monitor a foreign national directly, rather than have the contractor track a foreign national’s compliance with court appearance requirements. 
Officials stated that ICE officers may decide to monitor a foreign national directly because they determined that it is in the government’s best interest, or it was fiscally responsible when a foreign national’s court date was far in the future and court tracking conducted by the contractor would be costly. In July 2017, ICE reported that they assessed whether ICE officers that directly monitor foreign nationals in the ATD program had reliable data to determine court appearance compliance and found no practical or appropriate way to obtain such data without devoting a significant amount of ICE’s limited resources. Although ICE is not collecting court appearance compliance data for all foreign nationals in both components of the ATD program, as of July 2017, it has met the intent of our recommendation by collecting and reporting on all available data on the majority of foreign nationals in both components of the ATD program. Removals from the United States. For this program performance measure, a removal is attributed to the ATD program if the foreign national (1) was enrolled in ATD for at least 1 day, and (2) was removed or had departed voluntarily from the United States in the same fiscal year, regardless of whether the foreign national was enrolled in ATD at the time the foreign national left the country. The ATD program met its goal for removals in fiscal years 2012 and 2013. For example, in fiscal year 2013, ICE reported 2,901 removals of foreign nationals in the ATD program—surpassing its goal of 2,899 removals. ATD program performance measures provide limited information about the foreign nationals who are terminated from the ATD program prior to receiving the final disposition of their immigration proceedings, or who were removed or voluntarily departed from the country. Specifically, ICE counts a foreign national who was terminated from the program and was subsequently removed from the United States toward the ATD removal performance measure as long as the foreign national was in the program during the same fiscal year he or she was removed from the country. However, foreign nationals who were terminated from the program do not count toward court appearance rates if they subsequently do not appear for court. ICE officials reported that it would be challenging to determine a foreign national’s compliance with the terms of his or her release after termination from the ATD program given insufficient resources and the size of the nondetained foreign national population. In accordance with ICE guidance, staff resources are instead directed toward apprehending and removing foreign nationals from the United States who are considered enforcement and removal priorities. Chairman Johnson, Ranking Member McCaskill, and Members of the Committee, this completes my prepared statement. I would be happy to respond to any questions you or the members of the committee may have. GAO Contact and Staff Acknowledgments If you or your staff have any questions about this testimony, please contact Rebecca Gambler at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Taylor Matheson (Assistant Director), Tracey Cross, Ashley Davis, Paul Hobart, Sasan J. “Jon” Najmi, and Michele Fejfar. Key contributors for the previous work on which this testimony is based are listed in each product. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study The Department of Justice's EOIR is responsible for conducting immigration court proceedings, appellate reviews, and administrative hearings to fairly, expeditiously, and uniformly administer and interpret U.S. immigration laws. The Department of Homeland Security's ICE manages the U.S. immigration detention system, which houses foreign nationals, including families, whose immigration cases are pending or who have been ordered removed from the country. ICE implemented the ATD program in 2004 to be a cost-effective alternative to detention that uses case management and electronic monitoring. This statement addresses (1) EOIR's caseload, including the backlog, and how EOIR manages immigration court operations, including hiring, workforce planning, and technology use; and (2) participation in and the cost of the ATD program and the extent to which ICE has measured the performance of the ATD program. This statement is based on two reports and a testimony GAO issued from November 2014 through April 2018, as well as actions agencies have taken, as of September 2018, to address resulting recommendations. For the previous reports and testimony, GAO analyzed EOIR and ICE data, reviewed documentation, and interviewed officials. What GAO Found In June 2017, GAO reported that the Executive Office for Immigration Review's (EOIR) immigration court case backlog—cases pending from previous years still open at the start of a new fiscal year—more than doubled from fiscal years 2006 through 2015 (see figure), primarily due to declining cases completed per year. GAO also reported in June 2017 that EOIR could take several actions to address management challenges related to hiring, workforce planning, and technology utilization, among other things. For example, EOIR did not have efficient practices for hiring immigration judges. EOIR data showed that on average from February 2014 through August 2016, EOIR took more than 21 months to hire a judge. GAO also found that EOIR was not aware of the factors most affecting the length of its hiring process. GAO recommended that EOIR assess its hiring process to identify efficiency opportunities. As of January 2018, EOIR had made progress in increasing its number of judges but remained below its fiscal year 2017 authorized level. To better ensure that it accurately and completely identifies opportunities for efficiency, EOIR needs to assess its hiring process. In November 2014, GAO reported that the number of aliens who participated in U.S. Immigration and Customs Enforcement's (ICE) Alternatives to Detention (ATD) program increased from 32,065 in fiscal year 2011 to 40,864 in fiscal year 2013. GAO also found that the average daily cost of the program—$10.55—was significantly less than the average daily cost of detention—$158—in fiscal year 2013. Additionally, ICE established two performance measures to assess the ATD program's effectiveness, but limitations in data collection hindered ICE's ability to assess program performance. GAO recommended that ICE collect and report on additional court appearance data to improve ATD program performance assessment, and ICE implemented the recommendation. What GAO Recommends GAO previously made recommendations to EOIR to improve its hiring process, among other things, and to ICE to improve ATD performance assessment. EOIR and ICE generally agreed and implemented or reported actions planned to address the recommendations.
Background CBP’s Law Enforcement Positions Within CBP’s three operational components—OFO, Border Patrol, and AMO—there are five categories of law enforcement officer positions, each with different job requirements and responsibilities. First, OFO’s CBP officers conduct immigration and customs inspections at ports of entry to prevent the illicit entry of travelers, cargo, merchandise, and other items. Second, Border Patrol agents are responsible for securing the U.S. border between ports of entry and responding to cross-border threats. Third, AMO has three categories of law enforcement officers—Air Interdiction Agents, Aviation Enforcement Agents, and Marine Interdiction Agents—who interdict and disrupt threats to the United States in the air and maritime environments at and beyond the border. For more information on CBP’s law enforcement officer positions, see figure 1. CBP Staffing Levels for Law Enforcement Officer Positions In recent years, CBP has not been able to attain statutorily-established minimum staffing levels for its Border Patrol agent positions or meet its staffing targets for other law enforcement officer positions. Figure 2 shows the difference between CBP’s onboard staffing levels and its authorized staffing levels from fiscal years 2013 through 2017. CBP’s Hiring Process for Law Enforcement Officer Positions CBP’s law enforcement applicants undergo a lengthy and rigorous hiring process that includes nearly a dozen steps, including a background investigation, medical examination, physical fitness test, and polygraph examination. Several of these steps can be done concurrently—for example, CBP can begin the background investigation while the candidate completes the physical fitness test and medical examination process steps. Figure 3 depicts the hiring process for Border Patrol agent and CBP officer positions. Financial Incentives and Other Human Capital Flexibilities Available to CBP CBP is able to use financial incentives and other compensation-based human capital flexibilities to help recruit and retain qualified law enforcement personnel. According to OPM, federal agencies have broad discretionary authority to provide additional compensation in certain circumstances to support workforce needs and address human capital challenges, including through the use of financial incentives such as recruitment, relocation, and retention incentives. Table 1 below provides an overview of these incentives. In addition to these incentives, CBP can also offer other compensation- based human capital flexibilities to employees. For example, with OPM approval, CBP may establish a special salary rate, or a higher rate of pay for employees, either nationwide or in a specific geographic area where CBP’s recruitment or retention efforts are, or would likely become, significantly handicapped without those higher rates. CBP Has Enhanced Its Recruitment Efforts and Applications for Law Enforcement Officer Positions Have Increased CBP Established a Centralized Recruitment Office to Manage Recruitment Efforts across Components CBP officials stated they established the National Frontline Recruitment Command (NFRC)—a formal task force housed within CBP’s Office of Human Resources Management (HRM)—in February 2016. The NFRC is charged with, among other things, developing recruitment strategies, providing strategic guidance, and managing recruitment efforts across all three operational components. 
CBP officials stated that, prior to the creation of the NFRC, the recruitment and hiring of law enforcement officers was done at the component level and there was no integrated CBP-wide approach to coordinate efforts and address challenges. Based on our literature search, we identified leading practices that may be applicable to federal law enforcement agencies in recruiting, hiring, and retaining law enforcement personnel. Having a centralized entity or office in charge of developing recruitment strategies and overseeing recruitment activities is consistent with leading practices we identified for recruiting for law enforcement positions specifically. All three other selected law enforcement agencies we reviewed also had recruitment strategies that outlined their respective agencies’ recruitment roles and responsibilities, while two had a centralized entity in charge of recruitment. In particular, officials from both ICE and the Secret Service stated they have a central office in charge of recruitment efforts while BOP officials told us that recruitment for Correctional Officers is mainly handled at the local prison level. The NFRC is responsible for setting CBP’s strategic recruitment goals and overseeing agency-wide recruitment initiatives. For example, CBP officials are finalizing the Frontline Hiring and Recruiting FY 18 to FY 24 Strategy & Implementation Plan, which outlines specific initiatives designed to increase the number and quality of applicants applying for law enforcement officer positions. The strategy describes ways CBP can target its recruitment efforts more effectively and develop brand identities for each component to provide the foundation for a comprehensive marketing strategy. In addition to setting strategic initiatives, the NFRC manages the recruitment budget and allocates recruitment funding for CBP and the operational components. For example, NFRC officials stated that the NFRC funds CBP-wide recruitment initiatives such as Special Emphasis Recruitment Teams—teams of specially trained recruiters from each component who attend events specific to different demographic groups such as females or veterans. The NFRC also funds other initiatives such as strategic partnerships with major businesses, which allow CBP to advertise and recruit at their events. For example, CBP previously participated in strategic partnerships with the Big 10 and Big XII athletic conferences, and in 2016 and 2017, the NFRC spent $500,000 for a strategic partnership with the Spartan Race program, which allowed CBP and its components to advertise, set up recruitment booths, and sign up applicants at events. NFRC officials told us they ended their partnership with the Spartan Race in December 2017 and are evaluating options for future strategic partnerships. The NFRC also allocates funding for both joint recruitment events—those attended by two or three components—and single-component events attended by one operational component. For example, NFRC officials stated that career and job fairs provide opportunities for CBP to leverage its resources and attract potential applicants to all three components. At these events, applicants can talk to uniformed recruiters to learn more about their respective career paths. NFRC officials stated that in addition to CBP-wide efforts, the NFRC manages and allocates recruitment funding for each operational component to cover the cost of recruitment events or other initiatives that meet the specific needs of that component. 
For example, AMO officials stated that they use NFRC funding to attend events such as helicopter shows where there is a higher potential to attract qualified pilots. As shown in table 2, CBP’s recruitment budget allocated by the NFRC almost doubled from approximately $6.4 million in fiscal year 2015 to more than $12.7 million in fiscal year 2017. The budget allocated by the NFRC specific to the operational components—while a small percentage of CBP’s overall recruitment budget—increased during this time frame as well. For example, Border Patrol’s recruitment budget increased from approximately $433,000 in fiscal year 2015 to more than $1 million in fiscal year 2017, while OFO’s budget increased from approximately $116,000 to nearly $525,000. In addition to recruitment funding managed by the NFRC, components may use additional funding from their own budgets that is not allocated or managed by the NFRC to fund recruitment initiatives. For example, two of the three components funded their own strategic partnerships. Border Patrol officials stated they spent $1.5 million on a strategic partnership with the Professional Bull Riders Association which allowed them to target specific applicants who fit Border Patrol’s applicant profile. This partnership provided Border Patrol with the opportunity to advertise and recruit at more than 70 events over the course of 18 months. Likewise, OFO officials told us they spent $15,000 to be the sole sponsor of the 2018 National Police Week race in Washington, D.C., which includes a recruitment booth, a logo on the official T-shirt, and a prominent speaker at the start of the race. AMO officials stated that while they generally do not use their own funding to pay for strategic partnerships, they do partner with the University of North Dakota, which has a large flight school, where they give presentations in classrooms and recruit on campus. CBP Has Increased Its Participation in Recruitment Events and Standardized Its Recruiter Training CBP has increased its emphasis on recruitment and increased the number of recruitment events it has participated in since fiscal year 2015. Specifically, CBP more than tripled the total number of recruitment events it participated in, from 905 events in fiscal year 2015 to roughly 3,000 in both fiscal years 2016 and 2017 (see fig. 4). CBP components generally attend two different types of recruitment events—outreach events designed to promote CBP’s brand and events such as job and career fairs designed to actively cultivate potential applicants. For example, AMO officials stated their attendance at the 2018 Border Security Expo technology trade fair was an outreach event designed to promote the component at a high-visibility event despite the low likelihood of directly reaching qualified applicants, such as pilots. AMO officials also stated they participate in the HELISUCCESS career fair at the annual Heli-Expo trade show where individuals from across the helicopter industry gather to attend seminars and interact with recruiters. They noted that this event provides a great opportunity to recruit qualified applicants who have a license to fly helicopters. 
While CBP increased its participation in recruitment events from fiscal years 2015 through 2017, officials across all three components told us the NFRC canceled a number of events during the first half of fiscal year 2018 because of a lack of certainty regarding the agency’s budget while functioning under continuing resolutions, which extended fiscal year 2017 funding until the enactment of the Consolidated Appropriations Act, 2018, in March 2018. Additionally, CBP officials stated that the agency was responsible for providing humanitarian support for multiple hurricanes during this time frame which put a strain on CBP’s resources. Overall, these officials explained that the NFRC canceled 36 percent of all recruitment events during the first half of fiscal year 2018 until the enactment of the Consolidated Appropriations Act, 2018. They stated that during this period, they focused on attending free local events and online events such as webinars, but noted that the lack of consistent year-to- year funding for recruitment activities directly affected their ability to attend recruitment events and thus to recruit qualified personnel. To attend recruitment events and promote their brand, CBP components utilize their own law enforcement personnel to act as recruiters. As shown in table 3, as of March 2018, CBP had 1,663 recruiters across the three components, which included 57 full-time and 1,606 part-time recruiters. CBP officials stated that most recruiters do not conduct recruitment activities on a full-time basis and recruitment is considered a collateral responsibility in addition to regular duties. In addition, officials stated these recruiters must be approved by their component leadership and funding for their positions comes from the components’ budgets. In July 2017, CBP implemented a 5-day standardized training program for all component recruiters focused on effective public speaking and engagement tactics as well as specific, in-depth information on each operational component and the CBP hiring process. CBP officials stated that a goal of this training, among other things, is to ensure that recruiters provide standardized, accurate information to all potential applicants. As of April 2018, 636 recruiters had completed the training, according to CBP officials, and the agency plans to train 1,300 recruiters by the end of fiscal year 2018. CBP Increased Its Use of Recruitment Incentives, Although Use by Components Varied In addition to establishing the NFRC and increasing participation in recruitment events, CBP has increased its use of recruitment incentives from fiscal years 2015 through 2017 to help staff hard-to-fill locations. A recruitment incentive may be paid to a newly-appointed employee if an agency determines that a position is likely to be difficult to fill in the absence of such an incentive. From fiscal years 2015 through 2017, OFO increased the number of recruitment incentives it paid to CBP officers from 9 incentives in 2 locations at a total cost of about $77,600 to 446 incentives across 18 locations at a cost of approximately $4.3 million. AMO and Border Patrol did not use recruitment incentives from fiscal years 2015 through 2017 (see fig. 5). OFO officials told us that recruitment incentives have been effective in filling staffing shortages at hard-to-fill locations. 
For example, they noted that since they began offering recruitment incentives in fiscal year 2015, 14 of the 18 locations where these incentives are used have not experienced a decrease in staffing levels as of February 2018. Additionally, OFO officials told us that in larger ports-of-entry—such as San Ysidro, California, where staffing levels have consistently remained below authorized targets—staffing levels have increased by up to 15 percent. AMO officials stated while they did not use recruitment incentives from fiscal years 2015 through 2017, as of April 2018 they are using them to fill remote locations in the Caribbean. Specifically, AMO paid two recruitment incentives for Air Interdiction Agents and two for Marine Interdiction Agents at locations in Puerto Rico and the U.S. Virgin Islands. AMO officials stated that they began using these incentives to staff hard- to-fill locations because of a nationwide shortage of pilots as well as increased competition with commercial airlines. However, as AMO has only recently started using these incentives, it is too early to gauge whether it will be effective in increasing staffing levels at these hard-to-fill locations. Border Patrol officials stated the main reason they do not use recruitment incentives is that in the past these incentives created resentment among current employees that did not receive extra pay to do the same job in the same location. Additionally, these officials told us that job announcements for Border Patrol agent positions do not specify particular duty locations, but represent a general announcement that can be used to fill numerous duty locations, as necessary. Applications for Law Enforcement Positions Have Tripled since Fiscal Year 2013 As a result of its efforts, CBP has experienced an increase in the number of applications it received for law enforcement officer positions across all three operational components from fiscal years 2013 through 2017. For example, with the exception of fiscal year 2014, applications for Border Patrol agent positions increased every year from roughly 27,000 applications in fiscal year 2013 to more than 91,000 applications in fiscal year 2017. Further, during the same period, applications for CBP officer positions increased from approximately 22,500 to more than 85,000, and applications for AMO’s law enforcement officer positions increased from roughly 2,000 to more than 5,800 (see fig. 6). CBP’s Accenture Contract Is Intended to Further Enhance CBP’s Recruitment Efforts In November 2017, CBP signed a contract with a total potential period of 5 years at a not-to-exceed value of $297 million with Accenture Federal Services, LLC, to help the agency recruit and hire the 5,000 Border Patrol agents called for in Executive Order 13767 as well as an additional 2,000 CBP officers and 500 AMO personnel. Under this performance-based contract, Accenture will be responsible for enhancing CBP’s recruitment efforts and managing the hiring process for those applicants it recruits. The contract includes a base year and four 1-year option periods which CBP may exercise at its discretion for a total potential period of 5 years. The $297 million represents the maximum amount CBP may obligate on the contract during the potential 5-year period. CBP obligated $43 million on the Accenture contract in November 2017 for startup costs, security- related services, and for the hiring of 440 CBP officers, 150 Border Patrol agents, and 23 AMO law enforcement officers. 
Under the terms of the contract, CBP will pay the contractor a set dollar amount for each law enforcement officer hired. For example, in the first year of the contract, CBP has agreed to pay Accenture approximately $40,000 for each Border Patrol agent hired with 80 percent paid when a candidate receives an official job offer and the remaining 20 percent paid upon the candidate’s entry-on-duty date. The Accenture contract is intended to enhance CBP’s recruitment efforts by improving its marketing strategy and utilizing new ways to capture and analyze data to better inform recruitment efforts, according to CBP officials. For example, HRM officials stated that, in February 2018, Accenture began its digital marketing campaign and started posting electronic ads to target potential applicants for CBP’s law enforcement positions. In addition, Accenture is using advertisements, e-mail blasts, and other strategic marketing tools to specifically target various categories of potential applicants, such as women, veterans, minorities, and current law enforcement officers. CBP officials told us that they are not concerned about Accenture’s recruiting efforts encroaching on the agency’s current applicant pool as Accenture’s activities will largely target populations that CBP has not historically pursued. They also stated that for populations that CBP does target (e.g., veterans and women), the agency expects to benefit from Accenture’s recruitment efforts by increasing the number of applicants from these populations to all job announcements for CBP positions. Further, they noted that if Accenture’s tactics are successful, there is nothing prohibiting the agency from replicating such tactics to garner more applicants. CBP officials also stated that Accenture plans to provide opportunities to better enhance the agency’s data analytics on its recruitment efforts. For example, Accenture is using recruitment data and software to identify potential candidates and increase traffic to websites where these individuals can submit applications. CBP officials told us they would benefit from these and other insights that Accenture’s data analytics will provide as CBP can evaluate the contractor’s recruitment efforts and initiatives and, based upon Accenture’s success, incorporate them into CBP’s own efforts. While these efforts seem promising, it is too early to determine whether these initiatives will help increase the number and quality of applicants for CBP’s law enforcement officer positions. CBP Has Taken Steps to Improve Its Hiring Process, but the Process Remains Lengthy CBP Has Improved Its Performance in Two Key Hiring Metrics Since fiscal year 2015, CBP’s performance in two key metrics that it uses to assess the efficiency and effectiveness of its hiring process for law enforcement officer positions has generally improved. Specifically, CBP reduced its time-to-hire and increased its overall applicant pass rates for all three components. Time-to-Hire. CBP’s average time-to-hire metric calculates the average number of calendar days that elapsed between the closing date of a job announcement and an applicant’s entry-on-duty date. CBP’s time-to-hire for all law enforcement officer positions decreased from fiscal years 2015 through 2017. Specifically, during this period, the time-to-hire for CBP officers decreased by 78 days (20 percent) to an average of 318 days for fiscal year 2017. 
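As an illustration of how the time-to-hire metric defined above is computed, the sketch below averages calendar days across a set of hypothetical hiring records; the field names, dates, and values are illustrative, not CBP data.

```python
# Average time-to-hire as defined above: calendar days from the closing date
# of a job announcement to the applicant's entry-on-duty date, averaged over
# applicants who entered on duty. The records below are hypothetical.
from datetime import date

hires = [
    {"announcement_closed": date(2016, 10, 15), "entered_on_duty": date(2017, 8, 20)},
    {"announcement_closed": date(2016, 11, 1),  "entered_on_duty": date(2017, 9, 30)},
    {"announcement_closed": date(2017, 1, 10),  "entered_on_duty": date(2017, 11, 5)},
]

def average_time_to_hire(records):
    """Return the mean number of calendar days between announcement close and entry on duty."""
    days = [(r["entered_on_duty"] - r["announcement_closed"]).days for r in records]
    return sum(days) / len(days)

print(f"Average time-to-hire: {average_time_to_hire(hires):.0f} days")
```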
For AMO Air and Marine Interdiction Agents, CBP’s time-to-hire decreased by 103 days (28 percent) to an average of 262 days for fiscal year 2017. The agency’s time-to-hire for Border Patrol agents was the longest at 628 days in fiscal year 2015. As discussed earlier, Border Patrol officials stated that there were no job announcements for Border Patrol agent positions in fiscal year 2014; therefore, many of the agents hired in fiscal year 2015 had applied in fiscal year 2013, accounting for this protracted time-to-hire. Even so, from fiscal year 2016 to 2017, CBP’s time-to-hire for Border Patrol agents decreased by 32 days (11 percent) to an average of 274 days for fiscal year 2017 (see table 4). We also compared CBP’s time-to-hire with that of the Secret Service because its hiring process for law enforcement officers is the most similar to CBP’s. Specifically, the Secret Service’s hiring process comprises roughly the same number of hiring steps and also includes a polygraph examination—one of the more challenging and time-consuming steps in the process—as well as a written assessment, background investigation, medical examination, and interview. We found that CBP’s time-to-hire for its law enforcement positions was shorter than the Secret Service’s in fiscal years 2016 and 2017. For example, in fiscal year 2017, CBP’s time-to-hire for CBP officers and Border Patrol agents was 73 days and 117 days shorter, respectively, than the Secret Service’s. Further, CBP’s time-to-hire for AMO’s law enforcement positions was shorter than the Secret Service’s in every fiscal year from 2015 through 2017. Overall Applicant Pass Rates. CBP’s overall applicant pass rate metric calculates the estimated percentage of applicants who successfully complete the hiring process and enter on duty. CBP data indicate that overall applicant pass rates more than doubled for CBP officer and Border Patrol agent positions from fiscal years 2016 to 2017 (see table 5). CBP officials told us that higher overall applicant pass rates paired with recent increases in the number of applications received by the agency are starting to result in an increase in the number of law enforcement officers hired as applicants complete CBP’s hiring process and officially enter on duty. As shown in table 6, CBP data indicate that more law enforcement officers entered on duty in the first half of fiscal year 2018 than entered on duty in the first half of fiscal year 2017. Specifically, the total number of CBP officers and Border Patrol agents that entered on duty in the first half of fiscal year 2018 increased by roughly 50 percent and 83 percent, respectively, when compared to the same period of the prior fiscal year. Further, the total number of AMO law enforcement officers that entered on duty in the first half of fiscal year 2018 more than doubled from the same period of fiscal year 2017. CBP officials noted that they hope to consistently maintain this trend of increased hires to offset attrition and attain target staffing levels. For example, although 328 Border Patrol agents entered on duty in the first half of fiscal year 2018, 404 agents departed Border Patrol during this same period, resulting in a net loss of 76 agents. Likewise, in the first half of fiscal year 2018, a total of 449 CBP officers entered on duty while 488 officers departed OFO, resulting in a net loss of 39 officers. 
These data indicate that CBP continues to face challenges in retaining qualified law enforcement personnel and attaining target staffing levels for these positions. We discuss this issue later in this report. CBP Has Taken Steps to Improve Its Hiring Process for Law Enforcement Officers CBP has made efforts to improve its hiring process by revising certain aspects of the process and piloting two key hiring initiatives—Hiring Hub events and the Applicant Care program. According to agency officials, these efforts to streamline and improve CBP’s overall hiring process have collectively resulted in the decreased time-to-hire and increased overall applicant pass rates discussed above. In addition to these efforts, CBP’s contract with Accenture is designed to provide surge hiring capacity to help supplement the agency’s efforts to meet its staffing goals, according to agency officials. Hiring Process Revisions. CBP has implemented changes aimed at streamlining its hiring process for law enforcement officers and made adjustments to specific hiring steps. For example, among other changes, CBP took the following steps: In fiscal year 2015, CBP replaced its paper-based fingerprinting process with an electronic format, reducing the costs and effort required to physically process and mail paper fingerprinting cards. In fiscal year 2016, CBP increased the frequency of its job announcements on USAJOBS.gov to solicit applications on a continuous basis instead of only posting announcements for set periods of time. In addition, DHS was directed by statute to enhance its efforts to recruit members of the Armed Forces to serve as CBP officers by identifying shared activities and opportunities for reciprocity in hiring steps so as to minimize the time required to hire qualified applicants. In March 2017, CBP was granted the authority to waive the polygraph examination for veterans who meet certain criteria, including those who hold a current, active top-secret/sensitive-compartmented-information clearance. In April 2017, CBP received OPM approval to use direct-hire authority for law enforcement positions, which allows CBP to expedite the typical hiring process by eliminating competitive rating and ranking procedures and veterans’ preference. As of March 31, 2018, 77 CBP officers and 107 Border Patrol agents had entered on duty through this authority, but HRM officials told us that more applicants continue to progress through CBP’s hiring pipeline. CBP has also made revisions to specific steps in its hiring process, including the application, entrance examination, physical fitness test, and polygraph examination, among others. For example, in May 2014, CBP incorporated questions into its electronic application that are designed to automatically disqualify applicants who, based on their responses, could not pass CBP’s background investigation. Specifically, applicants who provide a disqualifying response to any of these questions would not be able to submit an application, thereby saving CBP the effort and resources associated with processing nonviable applicants. Further, in fiscal year 2016, CBP reordered its hiring process to place the entrance examination as the first step directly after an applicant submitted an application. Prior to this change, CBP conducted qualification reviews on applicants to ensure they met position requirements before inviting them to take the entrance exam.
According to CBP officials, this updated process provided applicants with the opportunity to obtain a realistic preview of the job they were applying for earlier in the hiring process. These officials explained that this helps to ensure that only those applicants who are committed to completing the hiring process and entering on duty at CBP continue through the hiring pipeline, which may help to address high applicant discontinue rates (e.g., roughly half of all eligible applicants in fiscal year 2015 did not take the exam). According to CBP documentation, this revision also created efficiencies as the agency no longer has to spend time and resources on completing qualification reviews for applicants who either did not show up to take the exam or failed the exam itself. CBP data show recent improvements in both the pass rates for the entrance examination process step as well as its average duration—the average amount of time it took applicants to complete this step. Specifically, from fiscal years 2016 to 2017, pass rates increased by about 40 percent for both CBP officer and Border Patrol agent candidates, and the average duration shortened from 17 days to 13 days for CBP officer candidates and from 19 days to 12 days for Border Patrol agent applicants. CBP officials told us they are also exploring options to allow applicants to complete the entrance examination remotely— eliminating the need for candidates to travel to physical testing sites and potentially further reducing the amount of time spent completing this step. In fiscal year 2016, the physical fitness test process step was amended for all law enforcement officer applicants to provide those who fail another chance to complete this requirement, according to CBP officials. Further, in fiscal year 2017, CBP eliminated the second physical fitness test— which had been the last process step in CBP’s hiring process—for CBP officer, Border Patrol agent, and AMO applicants. In addition to shortening the overall process, officials told us this change provided the small percentage of applicants that passed every other hiring process step with an opportunity to demonstrate they meet CBP’s physical ability standards during basic training. CBP has also made several changes to its polygraph examination process step, which has consistently had the lowest pass rate of any step in its hiring process. For example, among other things, CBP has increased the number of polygraph examiners available to administer the test, according to agency officials, and is piloting a new type of polygraph exam—the Test for Espionage, Sabotage, and Corruption. According to CBP officials, the new examination focuses on identifying serious crimes and is sufficiently rigorous to ensure that only qualified applicants are able to pass. Preliminary data from CBP’s pilot show that this new exam has demonstrated higher pass rates when compared with CBP’s traditional polygraph exam while also taking less time, on average, per test to complete. In addition, in response to recommendations made by the DHS OIG in August 2017, CBP implemented a policy requiring polygraph examiners to take steps to terminate an ongoing examination if disqualifying information is obtained from an applicant during the exam. Further, CBP officials told us they are continuing to work on developing and deploying a presecurity interview to identify unsuitable applicants prior to spending resources on conducting the polygraph examination. 
While it remains too early to tell if these efforts will result in improvements to the polygraph examination step, available CBP data indicate mixed results. Specifically, while the average duration to complete this step decreased for all law enforcement officer positions from fiscal years 2015 through 2017, pass rates also declined slightly over this same period (see table 7). Hiring Hub Events. In August 2015, CBP piloted its first Hiring Hub event where applicants could complete the structured interview and polygraph examination in one location over the course of several days. In fiscal year 2016, CBP expanded its use of these events, holding additional Hiring Hubs in New York, New York; San Diego, California; and Laredo, Texas; among other locations. The use of consolidated hiring events is consistent with a leading practice we identified in hiring for law enforcement officer positions, and officials at both ICE and the Secret Service stated their agencies are using similar events to process applicants. Although CBP could not provide specific data on its Hiring Hub events, CBP officials stated that the use of these events reduced the agency’s time-to-hire by consolidating hiring process steps that traditionally took applicants weeks to complete into just a few days— effectively enhancing the applicant experience and helping to reduce the number of individuals that drop out of the hiring process. Despite attributing a reduction in the agency’s time-to-hire to the Hiring Hubs, CBP discontinued their use in fiscal year 2017 because of their high costs, according to CBP officials. Specifically, CBP officials told us the agency spent $878,000 and $426,000 in fiscal years 2016 and 2017, respectively, which included renting physical space for the Hiring Hub events and funding the travel expenses of CBP employees sent to staff them. However, CBP officials told us that the best practices and process improvements CBP learned from these events have been incorporated into the agency’s new expedited hiring model, which has been used to process all CBP law enforcement applicants since April 2017. According to CBP officials, this model utilizes existing CBP facilities where applicants can complete the structured interview and polygraph examination near where they live while also providing CBP with cost savings by avoiding the need to rent physical office space. Applicant Care. In fiscal year 2017, CBP supplemented its traditional applicant outreach efforts by piloting the Applicant Care program across all three components. This program is intended to pair viable applicants with a trained recruiter who can answer questions and provide individuals with guidance and support throughout the lengthy hiring process. Formally pairing trained recruiters with applicants is a leading practice we identified in hiring for law enforcement positions, and of the three other selected agencies we reviewed, the Secret Service also had a similar program, according to Secret Service officials. According to CBP data, 806 applicants across all three operational components have participated in the Applicant Care pilot program and, as of May 2018, 28 of these have entered on duty at CBP. CBP officials in OFO, AMO, and HRM told us that the Applicant Care program had been useful in providing an effective way to communicate with applicants. According to a senior AMO official, AMO has fully incorporated the program into its hiring efforts and has paired every applicant since June 2017 with an AMO recruiter. 
Specifically, this official told us the program has been beneficial by keeping candidates engaged and steadily progressing through the process. HRM officials concurred, stating that the Applicant Care program has been successful in reducing the number of individuals that fail to complete CBP’s lengthy hiring process. According to CBP officials, the Applicant Care program also helps to reduce CBP’s time-to-hire since recruiters can actively encourage candidates to promptly progress through aspects of the hiring process that applicants are responsible for completing, such as the submission of OPM’s Standard Form 86 (SF-86). CBP officials told us that the agency is collecting data to evaluate the effectiveness of the Applicant Care pilot, including the average time-to-hire and overall pass rates of participating applicants. However, since the pilot is ongoing and some applicants continue to progress through CBP’s hiring pipeline, information on the program’s effectiveness remains preliminary. CBP officials also told us that scaling the Applicant Care initiative to include all applicants may present a challenge, especially given the recent increase in the number of law enforcement applications CBP has received. For example, a senior AMO official noted that, as of January 2018, 10 AMO recruiters were managing a total of about 200 applicants as part of the program, and that more recruiters would be needed to reduce employee workload to a more manageable level. Further, Border Patrol officials said that scaling the initiative to include the tens of thousands of individuals that annually apply for Border Patrol agent positions will be challenging as recruiters do not have the capacity to directly communicate with each one. Accenture Contract. According to CBP officials, the Accenture contract is intended to enhance the agency’s ability to achieve its primary goal—hiring law enforcement officers to meet target staffing levels—by augmenting CBP’s current hiring infrastructure and pursuing new and innovative hiring initiatives. HRM officials told us that Accenture will establish its own hiring infrastructure where Accenture personnel will administer most of the hiring process steps to those applicants it recruits. Specifically, the contractor is responsible for implementing the same hiring process steps and maintaining CBP’s standards to ensure that all applicants recruited by Accenture meet those standards. According to HRM officials, Accenture’s efforts are expected to provide CBP with surge hiring capacity without affecting CBP’s current hiring infrastructure, which will continue to function throughout the contract’s duration. According to CBP officials, Accenture began processing an initial trial group of random applicants in May 2018 to ensure that the contractor is able to process candidates through its hiring pipeline as required by the contract. CBP officials also told us that Accenture has the flexibility to pursue novel hiring tactics and pilot initiatives that CBP may not have considered or been able to undertake. For example, Accenture plans to pilot innovative ways to reduce the time-to-hire, including by streamlining steps in the hiring process, which could help to improve CBP’s overall process and generate increased hires for law enforcement positions.
Further, because the contractor will only be paid for individuals that receive final job offers and enter on duty—and not for implementing these new methods and initiatives—CBP does not bear the financial risk if such initiatives prove not to be cost-effective. On the other hand, if hiring methods piloted by Accenture are successful in reducing CBP’s time-to-hire and generating increased law enforcement officer hires, CBP can incorporate these methods into its own process. As of March 2018, some key issues were still being negotiated between CBP and the contractor. For example, while HRM officials told us that the main metric used to assess Accenture’s effectiveness will be the total number of hires the contractor produces, they were still working to finalize other key metrics for evaluating the contractor’s effectiveness as well as an oversight plan to ensure the contractor operates according to agency requirements. In addition, a senior HRM official told us that the costs associated with hiring a law enforcement officer are generally the same regardless of whether an applicant is processed by Accenture or CBP. Specifically, CBP officials explained that the requirements to hire a law enforcement officer are rigorous and include administering entrance examinations, background investigations, physical fitness and medical tests, and polygraph examinations, among other process steps. CBP officials stated that the costs associated with conducting these process steps for all applicants—and not just the small percentage who successfully complete the hiring process and enter on duty at CBP—are incurred whether the process is administered by Accenture or CBP. As a result, these officials explained that CBP is most focused on processing as many qualified candidates as possible to increase law enforcement officer staffing levels. As Accenture’s hiring infrastructure will not become fully operational until June 2018, it is too early to evaluate whether the contractor will be able to efficiently and effectively provide the surge hiring capacity CBP needs to achieve its staffing goals. Certain Factors Affect CBP’s Hiring Process for Law Enforcement Positions While CBP has reduced its time-to-hire and made efforts to improve its hiring process for law enforcement officers, CBP officials have noted that the hiring process remains lengthy, which they said directly affected the agency’s ability to recruit and hire for law enforcement positions. CBP officials also stated that their ability to further improve CBP’s time-to-hire and increase law enforcement hires is affected by hiring process steps that can be challenging and time-consuming for applicants to complete as well as CBP’s reliance on applicants to promptly complete certain aspects of the process. As noted above, in fiscal year 2017, it took an average of 274 days for Border Patrol agent applicants and more than 300 days for CBP officer applicants to complete all hiring steps and enter on duty. According to a leading practice we identified in hiring for such positions, agencies should ensure that the hiring process is not protracted or onerous for applicants. While OPM’s time-to-hire target for federal agencies is 80 days, officials at CBP, ICE, and the Secret Service told us that such a target is not feasible for law enforcement positions given the rigor and complexity of the hiring process. 
Further, according to CBP officials, the agency’s multistep hiring process for its law enforcement officer positions is intentionally rigorous and involves extensive applicant screening to ensure that only qualified candidates meet the technical, physical, and suitability requirements for employment at CBP. Even so, CBP officials across several components told us that the agency’s time-to-hire was too long and directly affected their components’ ability to recruit and hire for law enforcement positions. For example, OFO officials told us that the longer the hiring process takes to complete, the more likely it is that an applicant will drop out. Further, qualified applicants may also decide to apply for employment at a competing law enforcement agency such as ICE that may have a less rigorous process than CBP’s, according to CBP officials. One factor that affects CBP’s ability to efficiently process and onboard law enforcement officers is that specific hiring process steps are time-consuming and challenging for candidates to complete. For example, CBP officials across all three operational components and HRM cited the polygraph examination as a significant bottleneck within CBP’s hiring process. In addition to having the lowest pass rate of any step in CBP’s process, as noted above, the polygraph examination also took CBP officer and Border Patrol agent applicants, on average, the longest amount of time to complete in fiscal year 2017—74 days and 94 days, respectively. Further, Border Patrol and HRM officials both told us that these already lengthy time frames may increase further because of the growing number of applicants for CBP’s law enforcement positions. In addition, CBP’s background investigation and medical examination process steps as well as the SF-86 submission and preemployment complete hiring phases had the five longest average durations for law enforcement applicants in fiscal year 2017. For example, on average, it took CBP law enforcement officer applicants across all three components 55 days or more to complete the medical examination and more than 60 days to complete the background investigation. For more information on the average durations of these selected aspects of CBP’s hiring process, see table 8. Another factor that affects CBP’s ability to reduce its time-to-hire is CBP’s reliance on applicants to complete certain aspects of the hiring process in a timely manner. While the agency has taken steps to mitigate this issue—most notably through its Applicant Care program and the Accenture contract—its ability to ensure that applicants quickly complete those aspects of the hiring process they are responsible for remains limited. For example, as discussed above, applicants are responsible for completing their own SF-86, and CBP officials noted that applicants often take weeks to accurately complete and submit this form. Further, one senior HRM official told us that each time a mistake is identified in this paperwork, applicants receive an additional 5 days to fix the error, which adds up over time. CBP data indicate that while the average duration for this process step has decreased since fiscal year 2015, it continues to take more than 45 days for the average applicant to complete, as noted in table 8 above. As this completed paperwork is required to begin the background investigation and, according to CBP officials, schedule a structured interview, this inherently affects CBP’s ability to reduce its time-to-hire.
Further, for the medical examination process step, applicants are responsible for, among other things, scheduling the examination itself and providing pertinent documentation, such as any medical waivers required to pass the exam. According to a senior HRM official, as of February 2018, CBP had to conduct follow-up outreach to roughly 65 percent of applicants during this process step to obtain the information required to complete this step. CBP Has Enhanced Its Retention Efforts, but Does Not Systematically Collect and Analyze Data on Departing Law Enforcement Personnel CBP’s Retention of Law Enforcement Officers Varies by Position From fiscal years 2013 through 2017, CBP’s annual rates of attrition varied across its five law enforcement officer positions. Specifically, OFO’s annual attrition rates for the CBP officer position were consistent at roughly 3 percent, while rates for Border Patrol agent and AMO’s Marine Interdiction Agent positions were below 5 percent in 4 out of the 5 fiscal years we reviewed. When we compared CBP’s annual attrition rates for these positions to those of the other selected law enforcement agencies, we found that CBP’s attrition rates were similar to ICE’s annual attrition rates for its law enforcement positions and generally lower than those of the Secret Service and BOP. Annual attrition rates for AMO’s aviation positions were higher, ranging from 5.0 percent to 9.2 percent for the Air Interdiction Agent position and 7.8 percent to 11.1 percent for the Aviation Enforcement Agent position. Even so, in the last 3 fiscal years, attrition rates for these positions have generally remained lower than those of the Secret Service and BOP (see table 9). In addition, from fiscal years 2013 through 2017, CBP’s ability to hire more law enforcement officers than it lost varied across positions. Specifically, CBP consistently hired more CBP officers and Aviation Enforcement Agents than it lost. Further, while CBP generally maintained its staffing levels for Marine Interdiction Agents, the agency consistently lost more Border Patrol agents and Air Interdiction Agents than it hired. Even so, onboard staffing levels for all five of CBP’s law enforcement officer positions have consistently remained below authorized staffing levels. OFO. With the exception of fiscal year 2016, CBP hired more CBP officers than it lost each fiscal year. Specifically, from fiscal years 2013 through 2017, CBP hired an average of 978 CBP officers and lost an average of 719 officers each year, resulting in an average annual gain of 258 CBP officers and an increase in its overall staffing level of nearly 1,300 officers over this 5-year period. However, as OFO’s staffing targets for CBP officers also increased each year during this period, OFO remained below its authorized levels from fiscal years 2014 through 2017. In fact, OFO ended fiscal year 2017 more than 1,100 CBP officers below its annual staffing target (see fig. 7). Border Patrol. From fiscal years 2013 through 2017, CBP hired an average of 522 Border Patrol agents and lost an average of 890 agents each year, resulting in an average annual loss of 368 Border Patrol agents over this 5-year period. Therefore, despite having an annual attrition rate that mostly remained below 5 percent, Border Patrol was not able to replace departing Border Patrol agents with new hires from fiscal years 2014 through 2017. 
As a result, staffing levels for Border Patrol agents decreased by 1,838 total agents over our review period and the gap between Border Patrol’s onboard staffing levels and its congressionally-mandated minimum staffing floor has expanded each year from fiscal years 2014 through 2017. Border Patrol ended fiscal year 2017 with 19,437 agents—nearly 2,000 agents below its fiscal year 2016 statutorily-established minimum and 7,000 below the staffing target established in response to Executive Order 13767 (see fig. 8). AMO. From fiscal years 2013 through 2017, CBP (1) gained Aviation Enforcement Agent staff, (2) generally maintained staffing levels for its Marine Interdiction Agent position, and (3) consistently lost Air Interdiction Agent staff. First, despite the Aviation Enforcement Agent position generally having CBP’s highest annual attrition rates, CBP hired more Aviation Enforcement Agents than it lost each fiscal year and increased its overall staffing level by 79 positions during our review period. Even so, AMO staffing levels for these positions remained below its authorized targets in 4 out of the 5 fiscal years we reviewed. Second, AMO staffing levels for the Marine Interdiction Agent position remained level as AMO lost a net total of 3 Marine Interdiction Agents from fiscal years 2013 through 2017. Nevertheless, onboard staffing levels for these positions remained below the annual authorized levels in 4 of the 5 fiscal years we reviewed. Third, on average, CBP hired 25 Air Interdiction Agents and lost 52 agents each fiscal year, resulting in an average annual loss of 27 agents and a net decrease of 136 positions between fiscal years 2013 and 2017. Further, even though the authorized staffing targets for these positions decreased every year since fiscal year 2013, AMO’s onboard Air Interdiction Agent staffing levels remained below authorized levels in 4 of the 5 fiscal years we reviewed (see fig. 9). Retaining Law Enforcement Officers in Hard-to-Fill Locations Has Been Challenging for CBP CBP has acknowledged that improving its retention of qualified law enforcement personnel is critical in addressing staffing shortfalls, but officials identified difficulties in retaining key law enforcement staff as a result of geographically remote and hard-to-fill duty locations. CBP officials across all three operational components and HRM cited location—and specifically employees’ inability to relocate to posts in more desirable locations—as a primary challenge facing the agency in retaining qualified personnel. Border Patrol officials explained that duty stations in certain remote locations present retention challenges due to quality-of-life factors—for example, agents may not want to live with their families in an area without a hospital, with low-performing schools, or with relatively long commutes from their homes to their duty station. Border Patrol’s difficulty in retaining law enforcement staff in such locations is exacerbated by competition with other federal, state, and local law enforcement organizations for qualified personnel. According to Border Patrol officials, other agencies are often able to offer more desirable duty locations—such as major cities—and, in some cases, higher compensation. CBP data indicate that Border Patrol agents consistently leave the component for employment with other law enforcement agencies, including OFO as well as other DHS components such as ICE. 
For example, while retirements accounted for more than half of annual CBP officer losses from fiscal years 2013 through 2017, they accounted for less than a quarter of annual Border Patrol agent losses, indicating that the majority of these agents are not retiring but are generally leaving to pursue other employment. Further, according to CBP data, the number of Border Patrol agents departing for employment at other federal agencies increased steadily from 75 agents in fiscal year 2013 to 348 agents in fiscal year 2017—or nearly 40 percent of all Border Patrol agent losses in that fiscal year (see fig. 10). Further, of the 113 Border Patrol agents who departed CBP for other federal agencies during the first half of fiscal year 2018, 72 agents (64 percent) went to ICE. Border Patrol officials told us that working a standard day shift at ICE in a controlled indoor environment located in a major metropolitan area for similar or even lower salaries presents an attractive career alternative for Border Patrol agents who often work night shifts in extreme weather in geographically remote locations. The President of the National Border Patrol Council also cited this challenge, stating that unless Border Patrol agents have a strong incentive to remain in remote, undesirable locations—such as higher compensation when compared with other law enforcement agencies—they are likely to leave the agency for similar positions located in more desirable locations. While OFO officials told us the component did not face an across-the- board challenge in retaining CBP officers, they have had difficulty retaining officers in certain hard-to-fill locations that may be geographically remote or unattractive for families, such as Nogales, Arizona, and San Ysidro, California. As a result, CBP officer staffing levels in these locations have consistently remained below authorized targets. For example, OFO ended fiscal year 2017 approximately 300 positions below its authorized staffing level in both its Tucson, Arizona, field office, which includes the port of Nogales, and its San Diego, California, field office, which includes the port of San Ysidro. See figure 11 for more information on the OFO field offices with the four largest gaps between onboard and authorized staffing levels for CBP officer positions from fiscal years 2015 through 2017. OFO officials stated that CBP officers regularly leave posts in remote or hard-to-fill locations to transfer to similar positions in more desirable locations, both internally within OFO as well as at other law enforcement agencies. In addition, officials from the National Treasury Employees Union, which represents CBP officers, told us that excessive overtime and stressful employment conditions—including forced temporary duty travel—also contributed to CBP officers leaving the agency for positions at other law enforcement entities. CBP data indicate that the number of CBP officers who left CBP for employment at other federal agencies increased from 33 in fiscal year 2013 to 108 in fiscal year 2017—or 15 percent of all CBP officer losses in that fiscal year. Likewise, of the 66 CBP officers who departed CBP for other federal agencies during the first half of fiscal year 2018, 34 officers (52 percent) went to ICE. AMO has also had difficulty retaining its law enforcement personnel—and particularly its Air Interdiction Agent staff—in hard-to-fill locations, such as Aguadilla, Puerto Rico, and Laredo, Texas. 
However, given the unique qualifications and competencies required for the Air Interdiction Agent position, AMO does not compete with other law enforcement organizations. Instead, AMO officials told us they compete with the commercial airline industry for qualified pilots. Specifically, they stated that this competition is exacerbated by a nationwide shortage of pilots. In addition, AMO officials explained that there is a perception among applicants that commercial airlines are able to offer pilots more desirable locations and higher compensation. However, they told us that AMO generally provided pilots with higher starting salaries than many regional airlines as well as most career options available to helicopter pilots. CBP Has Taken Steps to Address Retention Challenges All three CBP operational components have taken steps to retain qualified law enforcement personnel by offering opportunities for employees to relocate to more desirable locations and pursuing the use of financial incentives, special salary rates, and other payments and allowances. Relocation Opportunities. Border Patrol, OFO, and AMO have formal programs providing law enforcement officers with opportunities to relocate. For example, in fiscal year 2017, Border Patrol implemented its Operational Mobility Program and received initial funding to relocate about 500 Border Patrol agents to new locations based on the component’s staffing needs. According to Border Patrol officials, retaining current employees is a top focus for leadership at the component and this program provides Border Patrol agents with opportunities for a paid relocation to a more desirable location at a lower cost to CBP than an official permanent change of station transfer. As of April 2018, Border Patrol officials told us that 322 Border Patrol agents had accepted reassignment opportunities through the program so far and the component hopes to continue receiving funding to provide these opportunities. Likewise, OFO’s National Reassignment Opportunity Bulletin provides CBP officers with opportunities to voluntarily relocate to new ports of entry at their own expense. CBP officers are able to submit reassignment requests multiple times throughout the year and selections are made based on OFO’s staffing needs as well as employees’ seniority and other eligibility requirements. According to OFO officials, the program has been in place since February 2012, and OFO data indicate a recent increase in reassignments from 122 participating CBP officers in calendar year 2016 to 202 officers in 2017. Further, these officials noted that CBP officers are also able to relocate to new duty stations through partner swaps—when two employees assigned to different duty locations agree to switch—and hardship reassignments—for example, when a CBP officer must relocate because a spouse has been transferred to a new location for work. Also, AMO personnel who are non-bargaining unit employees and have served for at least 3 years in their current location are eligible for noncompetitive paid relocations. AMO officials told us that opportunities for relocations are posted every few months in which eligible personnel can apply for transfers to specific duty locations based on the needs of the operational component. Financial Incentives and Other Payments and Allowances. 
CBP’s three operational components have also recently taken steps to supplement employees’ salaries through the use of human capital flexibilities—such as retention and relocation incentives and special salary rates—as well as other payments and allowances. CBP’s goal in pursuing these human capital flexibilities is to retain current employees— especially in remote or hard-to-fill locations—who are likely to internally relocate within CBP to more desirable duty locations or depart the agency for similar positions at other law enforcement organizations or commercial airlines. Supplementing the salaries of its employees is consistent with a leading practice we identified in retaining qualified law enforcement personnel—specifically, agencies should ensure they are offering pay and compensation comparable with other law enforcement agencies. Further, two of the three other selected law enforcement agencies we reviewed regularly used retention incentives and other human capital flexibilities to help retain qualified law enforcement personnel in cases where filling the position would be difficult or recruitment costs would be high. However, we found that from fiscal years 2013 through 2017, CBP’s use of such financial incentives and other payments was limited as the agency paid a total of 4 retention incentives and 13 relocation incentives, and implemented 1 special salary rate for all positions during this 5-year period. From fiscal year 2013 through 2017, Border Patrol did not offer retention incentives to agents and paid 2 relocation incentives to transfer Border Patrol agents to Artesia, New Mexico, and Washington, D.C., at a cost of roughly $78,000. However, in fiscal year 2018, Border Patrol increased its use of relocation incentives to facilitate the transfer of agents to duty stations along the southwest border that are less desirable due to the remoteness of the location and lack of basic amenities and infrastructure. Specifically, as of April 2018, 67 Border Patrol agents had received such incentives to relocate to duty stations in Ajo, Arizona; Calexico, California; and the Big Bend region in Texas; among others. While Border Patrol did not offer retention incentives during our review period, it submitted a formal request to CBP leadership in February 2018 for a 10 percent across-the-board retention incentive for all Border Patrol agents at the GS-13 level and below, which represents the majority of the component’s frontline workforce. According to Border Patrol documentation, these incentives, if implemented, could help reduce Border Patrol’s attrition rate—which has consistently outpaced its hiring rate—by helping retain agents who may have otherwise left Border Patrol for similar positions in OFO, ICE, or other law enforcement agencies. According to HRM officials, as of April 2018, CBP leadership was evaluating Border Patrol’s group retention incentive request, including the costs associated with implementing this 10 percent across-the-board incentive. In addition, as the incentive would benefit Border Patrol agents in all of the component’s duty locations, the extent to which this effort would be effective in targeting agent attrition in the remote locations that represent CBP’s largest staffing challenges remains to be seen. In addition, as of May 2018, CBP was planning to submit a request to OPM for a $10 per day remote duty location allowance for Border Patrol agents staffed to 17 geographically remote stations. 
These stations meet OPM’s definition of “remote worksites” and have quality-of-life conditions that are substantially below the standard at most other CBP duty locations. According to the agency, this allowance could help to address the attrition of Border Patrol agents at these duty stations. However, like its group retention incentive request, it is not yet known whether this proposal will be approved. From fiscal years 2013 through 2017, OFO paid a total of 4 retention incentives at a cost of $149,000 to retain CBP officers in Tucson, Arizona; Detroit, Michigan; Carbury, North Dakota; and Laredo, Texas. Further, OFO paid 7 relocation incentives at a cost of approximately $160,000 to relocate personnel to the hard-to-fill ports of Alcan and Nome, Alaska; Coburn Gore, Maine; and Detroit, Michigan. One OFO official told us OFO did not regularly use retention incentives because its relatively low annual attrition rates make it difficult to propose a persuasive business case to CBP leadership that such incentives are necessary. Further, another OFO official explained that OFO’s strategy is focused on using recruitment incentives to staff hard-to-fill locations with new employees. As discussed above, OFO officials told us this strategy has been effective in retaining CBP officers in most of the hard-to-fill locations where recruitment incentives have been used since fiscal year 2015. In addition to relocation and retention incentives, OFO received OPM approval in fiscal year 2017 to implement a special salary rate for CBP officers staffed to the hard-to-fill location of Portal, North Dakota—a port that consistently experienced CBP officer losses of more than 10 percent each year. Specifically, this special salary rate supplements CBP officers’ base salaries up to 40 percent and, according to OFO officials as of February 2018, there had not been any CBP officer departures from the port since this rate was implemented in June 2017. OFO officials stated that while recruitment incentives can bring applicants to hard-to-fill locations, special salary rates may be able to retain them for longer periods. However, while OFO officials have cited the effectiveness of this special salary rate in retaining personnel, this rate only applies to one hard-to-fill location and does not address OFO’s ongoing staffing challenges in other chronically understaffed locations. According to OFO officials, the component is considering requesting additional special salary rates for such locations where attaining authorized staffing levels has proved difficult, but these officials noted that such discussions are in the preliminary stage due to the extensive effort and amount of time required to pursue this option. Specifically, these officials told us that requesting OPM approval for a special salary rate in Portal, North Dakota, was an onerous and extensive process that took CBP and OPM more than 2 years to complete from start to finish. From fiscal years 2013 through 2017, AMO did not offer retention incentives to law enforcement personnel and paid a total of 4 relocation incentives to transfer three Air Interdiction Agents and one Marine Interdiction Agent to Puerto Rico at a cost of approximately $84,000. However, AMO has taken steps to pursue additional human capital flexibilities to address its difficulty in retaining Air Interdiction Agents, including a group retention incentive and a special salary rate. 
Specifically, in September 2017, AMO submitted an official request to HRM for a 10 percent group retention incentive for Air Interdiction Agents staffed to duty locations in Yuma and Sierra Vista, Arizona; Grand Forks, North Dakota; Laredo, Alpine, and McAllen, Texas; and Aguadilla, Puerto Rico. According to the request, the incentive is intended to help AMO retain qualified pilots in these hard-to-fill locations by raising their salaries to be more competitive with commercial airlines. HRM officials told us in March 2018 they were working with AMO and CBP’s Office of Finance to assess the proposal’s cost. In addition, as of April 2018, AMO was in the process of drafting a special salary rate request for all Air Interdiction Agents from GS-11 through GS-13 at all AMO locations. HRM officials confirmed they were working with AMO officials on this request, including evaluating whether AMO meets OPM’s criteria. HRM officials told us that OPM’s criteria for approving the use of special salary rates represent a high bar and AMO will have to present a strong business case that demonstrates a regular pattern of component-wide Air Interdiction Agent losses. CBP Does Not Have a Systematic Process to Capture and Analyze Data on Departing Law Enforcement Officers CBP does not have a systematic process for capturing and analyzing information on law enforcement officers who are leaving, such as an exit interview or survey. As a result, the agency does not have important information it could use to help inform future retention efforts. CBP officials across all three components confirmed that they do not systematically conduct formal exit interviews to collect data on departing employees. Officials from OFO and AMO told us that departing law enforcement officers receive the DHS exit survey and therefore have the option to provide these data. However, while CBP officials explained that DHS provides the survey response data to CBP on a quarterly basis, AMO officials told us that this information was of limited value due to low response rates. Further, when we requested these data, CBP was unable to provide the survey response data—or the percentage of departing employees who had completed the survey—citing a technical reporting error in DHS’s system. In addition, according to CBP officials, in August 2017, DHS communicated that it no longer required CBP (or DHS’s other components) to use the DHS exit survey. In the third quarter of fiscal year 2017, Border Patrol implemented its own exit survey, which includes questions gauging departing employees’ reasons for leaving, length of service, and, if applicable, what organization they are departing for, among other questions. While such questions should provide CBP with useful data on the factors affecting Border Patrol agent departures, Border Patrol officials told us that the response rate was 9 percent as of January 2018. When we asked these officials about the steps they were taking to improve this response rate, they replied that individual Border Patrol sectors were responsible for disseminating these surveys and that headquarters officials were unsure of the extent to which sector-level officials were sending the surveys to departing employees. To ensure the surveys were being sent, a senior Border Patrol headquarters official explained that sector-level officials have been told to copy him on all e-mails disseminating the survey. According to CBP officials, in April 2018, the agency launched an initiative to develop a CBP-wide exit survey.
The agency plans to develop customized questions for the survey, conduct a pilot of the survey in July 2018, and integrate the survey into CBP’s off-boarding process by the beginning of fiscal year 2019. While CBP provided us with these project milestone dates, the agency did not provide any documentation describing key aspects of the initiative, such as whether CBP will develop a strategy focused on encouraging departing employees to complete the survey to foster higher response rates. Further, CBP did not provide any information on how the agency planned to analyze and use data collected by the exit survey to inform its efforts to retain qualified law enforcement personnel. Two of the other selected law enforcement agencies we reviewed—BOP and the Secret Service—use exit surveys to collect a wide range of information on departing employees, while ICE is currently developing its own survey. For example, similar to Border Patrol’s exit survey, BOP’s uses a mix of multiple-choice and open-ended questions to assess reasons for departures as well as employee attitudes toward compensation, work-life balance and other working conditions, and supervisors. Further, both BOP’s and the Secret Service’s surveys inquire about actions the agencies could have taken that would have prevented the employee’s departure. CBP officials said that management is generally aware of the factors that influence law enforcement officer departures, including the main reason— they want to relocate to more desirable locations. Specifically, Border Patrol officials stated that managers have anecdotal knowledge through informal conversations or meetings at the local level with departing Border Patrol agents, and OFO officials stated that when a CBP officer leaves, there is a general understanding among their colleagues as to the reasons for their departure. In contrast to OFO and Border Patrol officials, AMO officials stated that because of the low participation rates on the DHS survey, the component does not have enough data to understand and address the factors that influence employees’ decisions to leave. Standards for Internal Control in the Federal Government state that management should obtain relevant data from reliable sources and process these data into quality information to make informed decisions in achieving key objectives. Taking steps to ensure that the agency’s operational components are systematically collecting and analyzing complete and accurate information on all departing law enforcement officers—including the factors that influenced their decision to separate— would better position CBP to understand its retention challenges and take appropriate action to address them. Conclusions CBP has made progress in improving its recruitment, hiring, and retention of law enforcement officers, including increasing the total number of applications it receives for these positions and reducing the amount of time it takes to hire applicants. Further, CBP has taken steps to address its primary challenge in retaining qualified law enforcement officers by offering opportunities for these personnel to relocate and pursuing the use of financial incentives and other payments to supplement employee salaries. Even so, retaining law enforcement officers in hard-to-fill locations continues to be challenging for CBP. 
Although CBP management may be aware of the primary reason law enforcement personnel leave the agency, CBP does not have a systematic process in place across its three operational components to capture and analyze information on these departures, such as an exit interview or survey. Taking steps to ensure that the agency’s operational components are systematically capturing and analyzing a wide range of information on all departing law enforcement officers and the factors that influenced their decisions to leave would better position CBP to understand its retention challenges and take appropriate action to address them. Recommendation for Executive Action The Commissioner of CBP should ensure that its operational components systematically collect and analyze data on departing law enforcement officers and use this information to inform retention efforts. (Recommendation 1) Agency Comments and Our Evaluation We provided a draft of this product to DHS for review and comment. DHS provided written comments, which are noted below and reproduced in full in appendix I, and technical comments, which we incorporated as appropriate. We also provided the draft report to the Federal Bureau of Prisons for review and comment, which indicated via e-mail that it did not have any comments on the draft report. DHS concurred with our recommendation and described the actions it plans to take in response. Specifically, DHS stated that CBP is taking steps to develop an agency-wide exit survey to collect information on departing law enforcement officers for implementation in fiscal year 2019. DHS also stated that CBP is working to develop a mass communications plan to facilitate the completion of the survey by exiting employees to ensure an effective response rate. Systematically capturing and analyzing quality information on departing law enforcement officers will help CBP to understand its retention challenges. To fully address the intent of our recommendation, CBP will also need to use this information to address its retention challenges and inform its overall retention efforts. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Homeland Security Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgements In addition to the contact named above, Adam Hoffman (Assistant Director), Bryan Bourgault, Eric Hauswirth, Tyler Kent, Amanda Miller, Sasan J. “Jon” Najmi, Leslie Sarapu, Michael Steinberg, and Adam Vogt made key contributions to this report.
Why GAO Did This Study CBP is responsible for securing U.S. borders and employs nearly 45,000 law enforcement officers across its three operational components at and between U.S. ports of entry, in the air and maritime environment, and at certain overseas locations. In recent years, CBP has not attained target staffing levels for its law enforcement positions, citing high attrition rates in some locations, a protracted hiring process, and competition from other law enforcement agencies. GAO was asked to review CBP's efforts to recruit, hire, and retain law enforcement personnel. This report examines CBP's efforts to (1) recruit qualified law enforcement officers, (2) more efficiently hire law enforcement applicants, and (3) retain law enforcement officers. GAO analyzed CBP data on recruitment, hiring, and retention from FY 2013 through 2017, as well as selected data for the first two quarters of FY 2018. GAO also reviewed CBP strategies and the recent contract it awarded to augment its recruiting and hiring activities and interviewed officials from CBP and three other selected law enforcement agencies. What GAO Found U.S. Customs and Border Protection (CBP) increased its emphasis on recruitment by establishing a central recruitment office and increasing its participation in recruitment events, among other things. As a result, the number of applications it received for law enforcement positions across its operational components—the Office of Field Operations, U.S. Border Patrol, and Air and Marine Operations—from fiscal years (FY) 2013 through 2017 more than tripled. Also, in November 2017, CBP hired a contractor to more effectively target potential applicants and better utilize data to enhance CBP's recruitment efforts. However, it is too early to gauge whether the contractor will be effective in helping CBP to achieve its goal to recruit and hire more law enforcement officers. CBP improved its hiring process as demonstrated by two key metrics—reducing its time-to-hire and increasing the percentage of applicants that are hired. As shown in the table, CBP's time-to-hire has decreased since FY 2015. CBP officials stated these improvements, paired with increases in applications, have resulted in more hires. For example, the number of Border Patrol agents hired in the first half of FY 2018 increased by about 83 percent when compared to the same period for FY 2017. However, the hiring process remains lengthy—for example, in FY 2017 it took more than 300 days, on average, for CBP officer applicants to complete the process. Certain factors contribute to the lengthy time-to-hire, including process steps that can be challenging and time-consuming for applicants to complete—such as the polygraph exam—as well as CBP's reliance on applicants to promptly complete certain aspects of the process—such as submitting their background investigation form. CBP enhanced its efforts to address retention challenges. However, staffing levels for law enforcement positions consistently remained below target levels. For example, CBP ended FY 2017 more than 1,100 CBP officers below its target staffing level. Officials cited employees' inability to relocate to more desirable locations as a key retention challenge. CBP has offered some relocation opportunities to law enforcement personnel and has recently pursued the use of financial incentives and other payments to supplement salaries, especially for those staffed to remote or hard-to-fill locations. 
However, CBP does not have a formal process for capturing information on all departing employees, such as an exit survey. Ensuring that operational components are systematically collecting and analyzing such information would better position CBP to understand its retention challenges and take appropriate action to address them. What GAO Recommends GAO recommends that CBP systematically collect and analyze data on departing law enforcement officers and use this information to inform retention efforts. DHS concurred with this recommendation.
Background Direct Loan Program Education administers federal student financial aid programs, including the William D. Ford Federal Direct Loan (Direct Loan) program, through the Office of Federal Student Aid. Education issues several types of loans under the Direct Loan program, including subsidized and unsubsidized loans. Prospective borrowers apply and are approved for loans through Education, which then disburses the loan through the borrowers’ school. Upon disbursement of funds, Education assigns each loan to a contracted loan servicer responsible for communicating information to borrowers while they are in school and when they enter repayment. Borrowers receive additional information about their loans and related rights and responsibilities through their loan’s promissory note, Education’s website, and mandatory entrance and exit counseling provided by their school. When borrowers enter repayment, generally 6 months after leaving school, they make payments directly to the assigned servicer. Federal Student Loan Repayment and Postponement Options Education offers a variety of repayment plan options that can help Direct Loan borrowers avoid delinquency and default, including Standard, Graduated, Extended, and Income-Driven. Income-Driven Repayment plans can ease repayment by setting loan payment amounts as a percentage of a borrower’s income and extending the repayment period up to 25 years. Unlike Standard, Graduated, and Extended repayment plans, Income-Driven Repayment plans offer loan forgiveness at the end of the repayment term and monthly payments may be as low as $0 for some borrowers. Extending the repayment period may also result in some borrowers paying more interest over the life of the loan than they would under 10-year Standard repayment. In addition to making monthly payments more manageable and offering the potential for loan forgiveness, Income-Driven Repayment plans may also reduce the risk of default. For example, in 2015, we reported that borrowers in two such plans had substantially lower default rates than borrowers in the Standard repayment plan. Eligible borrowers may also temporarily postpone loan payments through deferment or forbearance. Several different types of deferment are currently available to borrowers, each with their own eligibility criteria. Under deferment, the interest generally does not accrue on subsidized loans, but it continues to accrue on unsubsidized loans. Eligible borrowers can also postpone or reduce loan payments through either a general or mandatory forbearance; however, interest on the loan continues to accrue in each type (see table 1). Most borrowers choose general forbearance, which, unlike most types of mandatory forbearance and deferment, can be issued over the phone with no supporting documentation. As of September 2017, $69.9 billion in outstanding Direct Loans was in general forbearance compared to $6.3 billion in mandatory forbearance, according to Education data. Cohort Default Rate Calculation Education computes CDRs each year for all schools that enroll students who receive funds through the Direct Loan program. To compute a school’s CDR, Education divides the number of student loan borrowers in a CDR cohort—those entering repayment in the same fiscal year—who have defaulted on their loans in the initial 3 years of repayment by the total number of a school’s student loan borrowers in that CDR cohort (see fig. 1). The CDR does not hold schools accountable for borrowers who default after the 3-year period. 
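As a rough illustration of that ratio, the sketch below computes a 3-year CDR for a hypothetical school; the cohort year, window, and borrower records are invented, and the code simply divides the number of borrowers who defaulted within the 3-year window by the number of borrowers who entered repayment in the cohort fiscal year, as described above.

```python
# Hypothetical borrower records for one school's fiscal year 2014 cohort:
# each entry is the fiscal year in which the borrower defaulted, or None.
fy_of_default = [None, 2015, None, None, 2016, None, None, 2017, None, None]

cohort_year = 2014            # fiscal year borrowers entered repayment
window_end = cohort_year + 2  # defaults counted through the end of FY 2016

cohort_size = len(fy_of_default)
defaulted = sum(1 for fy in fy_of_default if fy is not None and fy <= window_end)

cdr = defaulted / cohort_size
print(f"3-year cohort default rate: {cdr:.1%}")  # 2 of 10 borrowers -> 20.0%
```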
Borrowers in deferment and forbearance are considered to be “in repayment” and current on their loans for the purpose of calculating a school’s CDR, even though borrowers in these loan statuses are not expected to make any monthly payments. For the 2014 CDR cohort, the national 3-year CDR was 11.5 percent, meaning 11.5 percent of borrowers who first entered repayment in fiscal year 2014 had defaulted on one or more loans by the end of fiscal year 2016. The national CDR has changed over time, peaking at 22.4 percent for the 1990 CDR cohort and declining to a historic low of 4.5 percent for the 2003 CDR cohort (see fig. 2). Beginning with the 2009 CDR cohort, Education switched from a 2-year measurement to a 3-year measurement as required by the Higher Education Opportunity Act. According to Education officials, there are several possible explanations for the general decrease in the national CDR from the 1990 cohort to the 2003 cohort. They include: 1) Education’s efforts to provide schools with default prevention training; 2) the loss of eligibility to participate in federal student aid programs and subsequent closure of many schools with chronically high CDRs in the early 1990s; 3) enactment of legislation in 1998 that increased the length of time a loan can go unpaid before being considered in default, which decreased the likelihood that a borrower would default within the CDR period; and 4) an increase in borrowers consolidating their loans while in school, an option that was eliminated in 2006. Use of the Cohort Default Rate to Hold Schools Accountable Schools with high CDRs may lose eligibility to participate in federal student aid programs. Specifically, Education generally excludes schools from participation in the Direct Loan program if their CDR is above 40 percent for a single year and from participation in the Direct Loan and Federal Pell Grant programs if their CDRs are 30 percent or greater for 3 consecutive years. Schools potentially subject to these sanctions can pursue an appeal. The CDR is the only borrower outcome measure used to determine eligibility for participation in federal student aid programs for all schools. Schools with high CDRs that do not cross these thresholds may also be subject to additional oversight. For example, schools are certified for up to 6 years to maintain eligibility to participate in federal student aid, but schools with high CDRs may only be granted certification for 2 years, according to Education policy. Education policy also prioritizes selection of schools with high CDRs for program review. Further, schools whose CDRs are equal to or exceed 30 percent for any cohort are required to create a Default Prevention Taskforce that develops and submits a default prevention plan to Education to reduce defaults, among other things. Consequences of Student Loan Defaults When borrowers do not make payments on their federal student loans, and the loans are in default, the federal government and taxpayers are left with the costs. Borrowers also face severe financial burdens when their federal student loans go into default. For example, upon default the entire unpaid balance of the loan and any accrued interest is immediately due. The amount owed may increase due to late fees, additional interest, and costs associated with the collection process, including court costs, collection fees, and attorney’s fees. The federal government also has tools to collect on defaulted student loans. 
For example, under the Treasury Offset Program, the federal government can withhold certain federal or state payments to borrowers, including federal or state income tax refunds and some Social Security benefits, to collect on defaulted student loans. The federal government can generally also garnish up to 15 percent of a defaulted borrower’s disposable pay and apply those funds toward the defaulted loan. There is no limit on how long the government can attempt to collect on defaulted student loans, and student loans are more difficult to eliminate in bankruptcy proceedings than other types of consumer debt. Default Management Consultants Some schools hire default management consultants to help them reduce their CDRs. Education classifies default management consultants as “third-party servicers” and generally has the authority to oversee the services they provide to schools and their students. Schools are required to notify Education when they enter into, modify, or terminate a contract with a third-party servicer. Based on concerns that a significant number of schools had not reported information on the third-party servicers they use as required, Education issued guidance to remind schools of the requirement in January 2015. In addition, Education requires third-party servicers to submit information about the services they provide to schools. As of June 2017, Education reported that it had information on 187 third-party servicers, including 48 that reported providing default management services. Schools must ensure that their third-party servicers, including default management consultants, comply with relevant federal regulations and program requirements. Education also requires third-party servicers to submit an annual compliance audit report that covers the administration of the federal student aid related services they perform to determine compliance with applicable statutes, regulations, and policies. Some Schools’ Consultants Encourage Borrowers to Postpone Loan Payments, Which Can Lower Cohort Default Rates and Increase Borrowers’ Loan Costs Some Schools’ Consultants Encourage Borrowers to Postpone Loan Payments When Better Borrower Options May be Available To help manage their default rates, some schools hired default management consultants that encouraged borrowers with past-due student loans to postpone loan payments through forbearance, even when better borrower options may be available. The nine default management consultants we selected, which served over 1,300 schools, used various methods to contact borrowers and attempted to connect them with their loan servicer for assistance (see fig. 3). Seven of the nine participated in three-way conference calls with the borrower and the loan servicer. Further, one consultant visited past-due borrowers at their home to provide in-person loan counseling and connect them to their loan servicer. Income-Driven Repayment Plans May Be Better Options for Some Struggling Borrowers According to Education, postponing payments through forbearance may be appropriate for some borrowers who face temporary hardships. On the other hand, Income-Driven Repayment plans may be a better option for borrowers who are having difficulty repaying their loans for an extended period of time. These plans base monthly payments on income and family size, and payments may be as low as $0 for those who qualify. Income- Driven Repayment plans also feature the potential for forgiveness of remaining loan balances after 20 or 25 years of repayment. 
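As a rough illustration of why payments under these plans can fall to $0, the sketch below applies a 10-percent-of-discretionary-income formula of the kind several of these plans use. The poverty-guideline figures and the 10 percent share are assumptions for illustration; actual payment amounts depend on the specific plan, year, and borrower circumstances.

```python
def idr_monthly_payment(agi, family_size,
                        poverty_base=12_060, per_person=4_180, share=0.10):
    """Rough sketch: several Income-Driven Repayment plans set payments at roughly
    10 percent of discretionary income (AGI above 150 percent of the poverty
    guideline), spread over 12 months, with a floor of $0."""
    poverty_guideline = poverty_base + per_person * (family_size - 1)
    discretionary = max(0.0, agi - 1.5 * poverty_guideline)
    return round(share * discretionary / 12, 2)

print(idr_monthly_payment(agi=25_000, family_size=1))  # about 57.58 per month
print(idr_monthly_payment(agi=16_000, family_size=2))  # 0.0 -- qualifying payment can be $0
```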
Interest generally continues to accrue on loans in both forbearance and Income-Driven Repayment. Under forbearance, accumulated interest that is not paid during the forbearance period will generally be added to the loan balance, resulting in higher monthly payments when forbearance ends. In contrast, the federal government does not charge the unpaid interest for up to 3 years for some borrowers repaying their loans on Income-Driven Repayment plans, and struggling borrowers on these plans are not generally expected to make higher monthly payments until their financial situation improves. In addition, GAO's past work found that borrowers in Income-Driven Repayment had substantially lower rates of default than those in Standard repayment. GAO previously found that it is difficult for Education to estimate which borrowers have incomes low enough to benefit from or be eligible for Income-Driven Repayment plans because only borrowers who apply for these plans are required to submit income information to Education. Four consultants sent borrowers who were past due on their loans unsolicited emails and letters that included only a forbearance application and instructed borrowers to return the application to them instead of their loan servicer. Representatives of one consultant said that this practice was to ensure that borrowers completed the forms accurately. According to Education, the application provides an opportunity for borrowers to learn about other repayment and postponement options and the potential costs of forbearance. The application includes a statement informing borrowers about the option to request a deferment or Income-Driven Repayment plan and examples of the additional costs borrowers may incur as a result of interest that continues to accrue during forbearance. While this is correct, the application does not include details about these options; instead, it directs borrowers to Education's website for more information. Borrowers who only receive a forbearance application may inaccurately assume that forbearance is their only or preferred option. Moreover, borrowers may miss the opportunity to learn about other, potentially more favorable repayment and postponement options from Education's loan servicers, who are responsible for counseling borrowers and approving forbearance requests. One consultant included an inaccurate statement in letters it sent to borrowers who were past due on their loans. This consultant sent past-due borrowers forbearance applications with letters that inaccurately stated that the federal government can take away Supplemental Nutrition Assistance Program and Supplemental Security Income benefits when borrowers default on a federal student loan. Inaccurate information about the consequences of default could cause a borrower who depends on these benefits to feel undue pressure to choose forbearance, even when eligible for more favorable repayment and postponement options. Further, this consultant's script for its representatives to use when calling borrowers who are past due on their loans referred exclusively to postponing loan payments. The script instructed representatives to tell borrowers "I am now going to conference you in with your loan servicer and they will process your forbearance over the phone." Borrowers who hear such statements may feel undue pressure to choose forbearance. The script also instructed representatives to tell the loan servicer that the borrower they were about to speak with was requesting a forbearance.
Further, representatives from this consultant were also instructed to tell borrowers to "stick to their guns" on the option they have selected before connecting the borrower with their loan servicer on a three-way call. One consultant previously offered borrowers gift cards as an incentive to put their loans in forbearance. Education has also previously identified the use of gift cards to steer borrowers toward forbearance over other available options. An internal review that Education conducted in 2012 and 2013 found that a chain of schools used gift cards to promote forbearance for purposes of lowering its CDR. According to Education's findings, a borrower who had attended one of the schools stated that she was current in her payments but was offered a $25 gift card to apply for forbearance. Multiple borrowers included in Education's review expressed the view that they were pressured or forced to apply for forbearance and were not made aware of other options, such as deferment or Income-Driven Repayment plans. Indeed, offering gift cards may steer borrowers toward forbearance over other available options. While the consultant that offered gift cards to borrowers to lower schools' CDRs has discontinued this practice, and the school Education reviewed has since closed, these practices may have affected reported CDRs and could be used by other consultants and schools. Schools have a financial interest in preventing borrowers from defaulting within the first 3 years of repayment to ensure that their CDRs remain low enough to meet Education's requirements for participating in federal student aid programs. Consultants also have a financial interest in preventing borrowers from defaulting during the 3-year CDR period. Eight of the nine consultants we selected did not have any school clients that paid them to contact borrowers who were past due on their loans outside the 3-year CDR period. In addition, four of the nine selected consultants were paid by their client schools based on the number of past-due borrowers they brought current on their loans during the CDR period, and representatives' salaries or incentives at two of these consultants were calculated based on this as well. Some consultants have an incentive to encourage forbearance in particular as a strategy to prevent borrowers from defaulting within the 3-year CDR period in an effort to lower their client schools' CDRs. This is because forbearance applications can be processed more quickly than other repayment or postponement options. Loan servicers can grant general forbearance based on a request from borrowers over the phone because there are no documentation requirements, whereas borrowers seeking deferment or an Income-Driven Repayment plan generally must submit a written application. According to Education officials, loan servicers are required to process Income-Driven Repayment plan applications within 15 business days. One consultant sent borrowers a letter that stated it could process a verbal forbearance in 5 minutes. The president of one school that contracted with a consultant that is paid based on the number of borrowers brought current told us that he did not care whether the consultant encouraged the use of forbearance as long as borrowers did not default within the 3-year CDR period and the consultant followed federal regulations. According to Education data, nearly 90 percent of the school's borrowers were in forbearance during the 2013 CDR period.
Consultant payment structures, as well as the difference in processing requirements between forbearance, deferment, and Income-Driven Repayment plans may create incentives for consultants to encourage forbearance over other repayment and postponement options. Postponing Loan Payments Can Increase Borrowers’ Loan Costs and Reduce the Usefulness of the Cohort Default Rate to Hold Schools Accountable While forbearance can be a useful tool for helping borrowers avoid defaulting on their loans in the short term, it increases their costs over time and reduces the usefulness of the CDR to hold schools accountable. To understand the potential financial impact of forbearance during the first 3 years of repayment (the CDR period), we calculated the cost for a borrower with $30,000 in loan debt over 10 years in the Standard repayment plan with varying lengths of time in forbearance (see fig. 4). A borrower on the 10-year Standard repayment plan who did not spend any time in forbearance would pay $39,427 over the life of the loan. Spending all 3 years of the CDR period in forbearance would cost that borrower an additional $6,742, a 17 percent increase over spending no time in forbearance. One borrower we spoke with who took out $34,700 in loans and opted for forbearance accrued about $10,000 in interest in just over 3 years, an amount that the borrower said she would be paying off “for the rest of my days.” Further, the unpaid interest that accrues while a borrower’s loans are in forbearance may result in higher future monthly payments when the forbearance period ends. Borrowers who cannot make these higher monthly payments may eventually default. If schools’ consultants continue to encourage forbearance over other options that may be more beneficial, such as Income-Driven Repayment plans, some borrowers will continue to be at risk of incurring additional costs without any long-term benefits. Education officials and student loan experts we spoke with said that forbearance is intended to be a short-term option for borrowers facing financial difficulties lasting a few months to a year, such as unexpected medical expenses. Longer periods of forbearance, while not typically advantageous to borrowers, can be an effective strategy for schools to manage their CDRs. Specifically, spending 18 months or more—at least half of the CDR period—in forbearance reduces the potential for borrowers to default within the 3-year period (see fig. 5). This is because forbearance keeps borrowers current on their loans, and borrowers would not go into default until they had made no payments for an additional 360 days after the forbearance period ended. Indeed, according to our analysis of Education’s data for the 2013 CDR period, only 1.7 percent of borrowers who were in forbearance for 18 months or more defaulted within the 3-year CDR period, compared to 8.7 percent of borrowers who were in forbearance up to 18 months during this period, and 20.3 percent of borrowers who were not in forbearance during this period. Borrowers who default outside the 3-year CDR period will not negatively affect a school’s CDR. In an online presentation, representatives from one consultant highlighted that forbearance can be a tool for reducing a school’s CDR and stated that borrowers who postponed payments defaulted less often during the CDR period than other past-due borrowers based on a case study they conducted. 
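The $30,000 example above can be approximated with a standard amortization calculation. The sketch below is a rough reconstruction under assumed terms (an annual interest rate of about 5.7 percent, simple interest accruing during an up-front forbearance and capitalizing when it ends, and a fresh 10-year amortization of the capitalized balance afterward); these assumptions land close to the figures cited above, but GAO's exact modeling assumptions may differ.

```python
def monthly_payment(principal, annual_rate, n_months):
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -n_months)

def total_paid(principal, annual_rate, forbearance_months, term_months=120):
    """Total repaid when the first `forbearance_months` are spent in forbearance:
    no payments are made, simple interest accrues and is capitalized at the end of
    forbearance, and the new balance is then amortized over `term_months`."""
    r = annual_rate / 12
    balance = principal * (1 + r * forbearance_months)
    return monthly_payment(balance, annual_rate, term_months) * term_months

base = total_paid(30_000, 0.057, forbearance_months=0)    # about 39,400 over the life of the loan
forb = total_paid(30_000, 0.057, forbearance_months=36)   # about 46,200 after 3 years of forbearance
print(round(base), round(forb), round(forb - base))        # difference of roughly 6,700, about 17 percent
```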
According to our analysis of Education's data, the percentage of borrowers whose loans were in forbearance for 18 months or more during the 3-year CDR period increased each year during the 5 cohorts we reviewed, doubling from 10 percent in the 2009 CDR cohort to 20 percent in the 2013 CDR cohort. During the same time period, the percentage of borrowers whose loans were in forbearance for any amount of time increased from 39 percent to 68 percent (see fig. 6). Further, borrowers in forbearance for 18 months or more defaulted in the year after the 3-year CDR period more often than they did during the CDR period. Specifically, 9.4 percent of these borrowers in the 2013 CDR period defaulted in the year following the CDR period, while only 1.7 percent defaulted in the first 3 years of repayment, suggesting that long-term forbearance may have delayed, not prevented, default for these borrowers. Reducing the number of borrowers in long-term forbearance and directing them toward other repayment or postponement options could help reduce the number of borrowers that later default and save the government money. For example, Education estimates that it will not recover a certain percentage of defaulted Direct Loan dollars even if repayment resumes. Specifically, for Direct Loans issued in fiscal year 2018, Education estimates that it will not recover over 20 percent of defaulted loans. These unrecovered defaulted loan amounts total an estimated $4 billion, according to our analysis of Education's budget data. In addition to cost savings to the government, borrowers who avoid default would not have to face severe consequences, such as damaged credit ratings that may make it difficult to obtain credit, employment, or housing. In addition, the percentage of borrowers who made progress in paying down their loans during each CDR cohort—the repayment rate—decreased from 66 percent for the 2009 cohort to 46 percent for the 2013 cohort (see sidebar). We analyzed these data for a subset of schools with the largest CDR decreases from the 2009 to 2013 cohorts and found that as these schools' CDRs improved, other borrower outcomes worsened (see app. II for more information about these schools). Specifically, for this subset of schools, the percentage of borrowers in long-term forbearance doubled, and the percentage of borrowers who made progress in paying down their loans during the CDR period decreased by half, suggesting that these schools may be encouraging forbearance as a default management strategy (see fig. 7). Education has acknowledged that when schools encourage borrowers to postpone loan repayment until the 3-year CDR period ends, it can have a distorting effect on the CDR. Borrowers who have postponed their payments through forbearance or deferment are considered to be "in repayment" for the purpose of calculating the CDR, even though they are not expected to make any payments on their loans while in these statuses. As a result, an increased use of forbearance, particularly long-term forbearance, could result in lower CDRs, and therefore fewer schools being sanctioned due to high CDRs. In July 1999, we reported that the CDR understates the actual number of borrowers who default. We suggested that Congress may wish to consider amending the Higher Education Act of 1965 to exclude borrowers with loans in deferment or forbearance at the end of the CDR period from schools' CDR calculation and include these borrowers in a future CDR cohort after they have resumed making payments on their loans.
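A minimal sketch of an adjustment along these lines is shown below: borrowers who spent 18 or more months in forbearance during the 3-year window and did not default are excluded from the cohort (in practice, they might instead be counted in a later cohort once they resume paying). The record layout is illustrative, not Education's.

```python
from dataclasses import dataclass

@dataclass
class CohortRecord:
    months_in_forbearance: int   # months in forbearance during the 3-year window
    defaulted_in_window: bool    # defaulted during the 3-year window

def alternative_cdr(records, long_forbearance_months=18):
    """CDR with long-term-forbearance non-defaulters excluded from the cohort."""
    kept = [r for r in records
            if not (r.months_in_forbearance >= long_forbearance_months
                    and not r.defaulted_in_window)]
    if not kept:
        return None
    return sum(r.defaulted_in_window for r in kept) / len(kept)

# 100 borrowers: 5 defaulted, 40 spent 18+ months in forbearance without defaulting.
records = ([CohortRecord(0, True)] * 5
           + [CohortRecord(24, False)] * 40
           + [CohortRecord(0, False)] * 55)
print(f"{sum(r.defaulted_in_window for r in records) / len(records):.1%}")  # 5.0% official-style rate
print(f"{alternative_cdr(records):.1%}")                                    # 8.3% after the exclusion
```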
Education’s Office of Inspector General made a recommendation to the agency to support similar amendments to the law in December 2003. For this report, we examined the impact that removing borrowers in long- term forbearance from the CDR calculation would have on schools’ reported CDRs. For the 2013 cohort, 35 schools from our population had CDRs of 30 percent or higher. When we excluded from our population borrowers who spent 18 months or more in forbearance and did not default within the 2013 CDR period, we found 265 additional schools that would potentially have had a CDR of 30 percent or higher (see app. II for more information about these schools). Schools with CDRs at this level for 3 consecutive years may lose eligibility to offer their students Direct Loans and Pell Grants. Further, 21 of the 265 schools would potentially have had a CDR greater than 40 percent, making them potentially subject to immediately losing eligibility to offer Direct Loans. Of the 265 schools that would have potentially been subject to sanctions based on our alternative calculation, 261 received a combined $2.7 billion in Direct Loans and Pell Grants in academic year 2016-2017. The CDR is a key tool for holding schools accountable for borrower outcomes and protecting borrowers and the federal government from the costs associated with default. The substantial growth in the percentage of borrowers spending at least half of the CDR period in forbearance reduces the CDR’s usefulness to hold schools accountable. This presents risks to the federal government and taxpayers, who are responsible for the costs associated with high rates of default, and to borrowers who may benefit from other repayment or postponement options. Since the way the CDR is calculated is specified in federal law, any changes to its calculation would require legislation to be enacted amending the law. Strengthening the usefulness of the CDR in holding schools accountable, such as by revising the CDR calculation or using other accountability measures to complement or replace the CDR, could help further protect both borrowers and the billions of dollars of federal student aid funds the government distributes each year. Actions Needed to Improve Education’s Oversight of Default Management Strategies and Public Reporting of CDR Sanctions Requirements Needed to Oversee How Schools and their Consultants Communicate Loan Options to Borrowers in Repayment Education’s ability to oversee the strategies that schools and their consultants use to manage CDRs is limited because there are no requirements governing the interactions that schools and their consultants have with borrowers once they leave school. Education requires that schools provide certain information to borrowers about their student loans when they begin and finish school but does not oversee schools’ or their consultants’ communications with borrowers after they leave school. According to Education, the Higher Education Act does not contain explicit provisions that would allow it to impose requirements governing communications that schools and their consultants may have with borrowers who have left school. As noted earlier, we found that some default management consultants, in seeking to help schools lower their CDRs, provided borrowers inaccurate or incomplete information or offered gift cards to encourage forbearance over other repayment or postponement options that may be more beneficial to the borrower. 
According to Education officials, borrowers are protected from such practices because loan servicers are required to inform borrowers of all available repayment options upon processing a forbearance. Education officials also said that performance-based contracts provide loan servicers an incentive to keep borrowers in repayment. However, a Consumer Financial Protection Bureau report found that borrowers may not be informed about the availability of other repayment plans and instead may be encouraged by their loan servicers to postpone payments through forbearance, which may not be in borrowers’ best interests. Further, some consultant practices we identified, such as instructing borrowers to return the forbearance application to the consultant and remaining on three-way calls with the loan servicer and the borrower, may undermine the role of the loan servicer. Education officials also said that borrowers should be aware of their repayment options because schools are required to inform borrowers of these options through exit counseling when they leave school. However, in 2015 we found gaps in borrowers’ awareness of repayment options. Education’s Office of Federal Student Aid has a strategic goal to help protect borrowers and families from unfair, deceptive, or fraudulent practices in the student loan marketplace. Without clear requirements regarding the information that schools and their consultants provide to borrowers after leaving school, Education cannot effectively oversee schools’ default management strategies. Further, without such requirements, Education cannot ensure that schools and consultants are providing borrowers with the information they need to make informed decisions to manage their loan costs and avoid future default. Education’s Public Reporting of Cohort Default Rate Sanctions Lacks Transparency The limited information Education reports annually to the public about schools that face sanctions for high CDRs overstates the extent to which schools are held accountable for their default rates. Specifically, Education does not report the number of schools that successfully appealed CDR sanctions or the number of schools ultimately sanctioned. For example, with the release of the 2013 CDRs in 2016, Education publicly reported that 10 schools were subject to sanctions, but did not publicly report that 9 schools appealed their sanctions and 8 were successful in their appeals and were thereby not sanctioned (see fig. 8). Office of Management and Budget guidelines call for federal agencies to ensure and maximize the usefulness of information they disseminate to the public. Federal internal control standards call for effective communication with external stakeholders. The number of schools subject to sanction has declined over time—from a high of 1,028 schools in fiscal year 1994 to 10 schools in fiscal year 2017 (see app. III). In addition, unpublished sanction data reveal that a small fraction of borrowers who defaulted on student loans attended schools that have been sanctioned. For example, two schools were ultimately sanctioned in 2016 and accounted for 67 of the nearly 590,000 borrowers whose defaulted loans were included in schools’ 2013 CDRs. By reporting only the number of schools subject to sanction and not those actually sanctioned, Education’s data make it difficult for Congress and the public to assess the CDR’s usefulness in holding schools accountable. 
Conclusions Preventing student loan defaults is an important goal, given the serious financial risks default poses to borrowers, taxpayers, and the federal government. The CDR, which is specified in federal law, is intended to hold schools accountable when significant numbers of their borrowers default on their student loans during the first 3 years of repayment. However, the metric in its current form creates incentives for schools that may result in unintended consequences for some borrowers. Schools have an interest in preventing their students from defaulting during the CDR period to ensure that they can continue to participate in federal student aid programs, and some schools contract with private consultants to work with borrowers who have fallen behind on their loan payments. Although some of these consultants have recently changed their communications to borrowers, others continue to provide inaccurate or incomplete information to encourage past-due borrowers to choose forbearance over other repayment options. While postponing payments through forbearance may help struggling borrowers avoid default in the near term, it increases borrowers’ ultimate repayment costs and does not necessarily put borrowers on a path to repaying their loans. Moreover, including borrowers who spend 18 months or more in forbearance in the CDR calculation reduces the CDR’s ability to hold schools accountable for high default rates since long periods of forbearance appear to delay—not prevent—default for some borrowers. Absent a statutory change, schools and their consultants seeking to keep CDRs below allowable thresholds will continue to have an incentive to promote forbearance over other solutions that could be more beneficial to borrowers and less costly to the federal government and its taxpayers. Education plays an important role in overseeing schools and their default management consultants to ensure that they are held accountable and student loan borrowers are protected. However, because Education asserts that it lacks explicit statutory authority to establish requirements regarding the information that schools and consultants provide to borrowers after they leave school, Education does not hold them accountable for providing accurate and complete information about repayment and postponement options. In addition, public information on CDR sanctions is important for assessing the usefulness of the CDR to hold schools accountable. Yet, Education’s practice of reporting the number of schools potentially subject to sanction without reporting the number of schools ultimately sanctioned following the appeals process limits transparency about the CDR’s usefulness for Congress and the public. Matters for Congressional Consideration We are making the following two matters for congressional consideration: Congress should consider strengthening schools’ accountability for student loan defaults, for example, by 1) revising the cohort default rate (CDR) calculation to account for the effect of borrowers spending long periods of time in forbearance during the 3-year CDR period, 2) specifying additional accountability measures to complement the CDR, for example, a repayment rate, or 3) replacing the CDR with a different accountability measure. (Matter 1) Congress should consider requiring that schools and default management consultants that choose to contact borrowers about their federal student loan repayment and postponement options after they leave school present them with accurate and complete information. 
(Matter 2) Recommendation for Executive Action The Chief Operating Officer of the Office of Federal Student Aid should increase the transparency of the data Education publicly reports on school sanctions by adding information on the number of schools that are annually sanctioned and the frequency and success rate of appeals. (Recommendation 1) Agency Comments and Our Evaluation We provided a draft of this product to the Department of Education for review and comment. Education’s comments are reproduced in appendix IV. We also provided relevant report sections to the Consumer Financial Protection Bureau and the nine default management consultants for technical comment. The Consumer Financial Protection Bureau provided technical comments, which we incorporated as appropriate. Education agreed with our recommendation to increase transparency of school sanction data. In its response, Education stated that it makes a significant amount of CDR data publicly available on its website. For example, Education posts CDRs and underlying data for each school for which the rates are calculated and lists schools subject to sanctions as a result of their CDRs. Education also stated that beginning with the release of fiscal year 2015 CDRs, it would provide additional information on its website indicating whether schools subject to sanctions have submitted appeals and the disposition of such appeals. As we recommended in our draft report, Education should also publicly report the number of schools ultimately sanctioned each year. Our draft report included a recommendation for Education to seek legislation to strengthen schools’ accountability for student loan defaults. Education disagreed with this recommendation, asserting that from a separation of powers perspective, it has a responsibility to implement, and not draft, statutes. Education stated that if GAO believes such legislation is needed, it would be best addressed as a matter to Congress. We agree that, as an executive agency, Education is responsible for implementing laws as enacted. However, it is important to note that the President has the “undisputed authority” to recommend legislation to the Congress and the Office of Management and Budget within the Executive Office of the President has outlined procedures for executive branch agencies to submit proposed legislation. Indeed, in making this recommendation, we intended that Education seek legislation through any of the practices used by executive branch agencies in communicating with Congress. In a recent example, both the President’s Budget Request and Education’s Congressional Budget Justification for Fiscal Year 2019 seek a change in the statutory allocation formula for the Federal Work-Study program to focus funds on institutions enrolling high numbers of Pell Grant recipients. Nevertheless, in light of Education’s disagreement with our draft recommendation, and the importance of strengthening schools’ accountability for student loan defaults, we have converted the recommendation into a Matter for Congressional Consideration. Our draft report also included a recommendation for Education to require that schools and default management consultants that contact borrowers about repayment and postponement options after they leave school present accurate and complete information. Education agreed that institutions should provide accurate and complete information about all repayment options. 
It also stated that institutions should allow the borrower’s stated preference for any given repayment option to guide the ultimate direction of the conversation, and that the information provided should be free from financial incentive. However, Education asserted that it “cannot impose requirements on schools and their consultants without further authority.” Education clarified in a follow-up communication that the Higher Education Act does not contain “explicit provisions” under which it could require schools (and their consultants) to include specific content in the information that they choose to provide to borrowers after the borrowers leave school, but did not address whether there was any other authority under which it could take action in this area. Instead, Education noted that it could provide information to schools and their consultants on best practices in this area. We continue to believe that schools and their consultants should be required to ensure that any information they present to borrowers about repayment and postponement options after they leave school is accurate and complete. As we stated in our draft report, without clear requirements in this area, Education cannot ensure that schools and consultants provide borrowers with the information they need to make informed decisions to manage their loan costs and avoid future default. In light of this, and Education’s response to our draft recommendation, we have converted our recommendation into a Matter for Congressional Consideration. In its comments, Education inaccurately asserted that our findings should be viewed in light of a limited scope. As stated in the draft report, we analyzed trends in forbearance, repayment, and default using national data from Education for the five most recent CDR cohorts for a population of over 4,000 schools. To determine how schools work with borrowers to manage their CDRs, we reviewed the practices of a nongeneralizable sample of nine default management consultants that served over 1,300 schools. These schools accounted for over 1.5 million borrowers in the 2013 CDR cohort. The five consultants that provided inaccurate or incomplete information about forbearance or offered gift cards served about 800 schools, which accounted for over 875,000 borrowers in the 2013 CDR cohort. For each of the consultants, as stated in our draft report, we reviewed documentation including training materials, internal policies and procedures, and examples of correspondence they send to borrowers. Finally, Education inaccurately asserted that we based our findings on a small sample of interviews with 11 borrowers and officials from 3 schools and 4 consultants. We conducted these interviews to better understand borrowers’ loan experiences and the strategies that schools and their consultants use to manage the CDR, and the illustrative interview examples we include in our report do not form the basis of any of our findings or recommendations. In addition, Education commented that the report did not consider the extent to which borrowers enter Income-Driven Repayment plans during the 3-year CDR period or the substantial growth in borrowers participating in these plans over the past several years. Education suggested that such data would be important to consider in determining whether there has been an overreliance on forbearance in the past, and if so, whether any problems in this area are being remedied by the availability of Income- Driven Repayment plans. 
We have incorporated additional information regarding the increase in borrowers participating in Income-Driven Repayment plans in response. As Education noted in its comments, our draft report acknowledged that increased participation in these plans may have been a factor in the observed increase in overall rates of forbearance since it is common for loan servicers to place borrowers in administrative forbearance while processing applications for Income- Driven Repayment plans. However, as explained in our draft report, since administrative forbearance for this purpose should be for 60 days or less it would not explain the twofold increase in the percentage of borrowers in forbearance for 18 months or longer from CDR cohort years 2009 to 2013. Education also stated that while our report included an example of the additional interest cost incurred by a borrower using forbearance, it did not discuss the potential additional interest costs associated with other repayment options, such as Income-Driven Repayment plans. Education noted that these options could be more costly than forbearance in some instances and all options have consequences for borrowers. We acknowledged in our draft report that interest continues to accrue on loans in Income-Driven Repayment and that the monthly payments of some borrowers on these plans may not be high enough to pay down any principal during the first 3 years of repayment. However, as stated in our draft report, Income-Driven Repayment plans, unlike forbearance, offer borrowers the potential for loan forgiveness after 20 or 25 years of repayment. We have incorporated additional details about the potential costs of these and other repayment plans based on Education’s comments. The potential consequences that Education highlighted in its comments further illustrate the importance of ensuring that borrowers receive accurate and complete information to help them make informed decisions to manage their loan costs and avoid default. In response to our findings regarding communication practices of some default management consultants, Education stated that the draft report did not acknowledge that the forbearance application that selected consultants send to borrowers provides an opportunity for borrowers to learn about other repayment options and the potential costs of forbearance. We have incorporated additional information regarding the information included on the application. Although the form mentions deferment and Income-Driven Repayment, it does not describe these options; instead, it directs borrowers to Education’s website for more information. Therefore, we maintain that borrowers who only receive a forbearance application may inaccurately assume that forbearance is the only or preferred option. Further, Education commented that the draft report did not examine what effect, if any, consultants may have had in encouraging borrowers to seek consecutive forbearances since borrowers can remain in forbearance for no longer than 12 months before they have to reapply. Education also suggested that comparing the use of forbearance at schools that hired consultants that encouraged borrowers to postpone payments with those that did not would have provided a better understanding of the potential impact of such practices. While these topics were beyond the scope of our objectives for this report, Education may wish to explore them in support of its goals to protect borrowers and mitigate risks in the federal student aid programs. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Education, the Director of the Consumer Financial Protection Bureau, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or emreyarrasm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology This appendix discusses in detail our methodology for addressing (1) how schools work with borrowers to manage schools’ cohort default rates (CDR), and how these strategies affect borrowers and schools’ accountability for defaults; and (2) the extent to which the Department of Education (Education) oversees the strategies schools and their default management consultants use to manage schools’ CDRs and informs the public about its efforts to hold schools accountable. We conducted this performance audit from May 2016 to April 2018, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Default Management Consultants – Interviews and Document Requests To determine how schools work with borrowers to manage their cohort default rates, we examined the practices of companies that schools contract with to help them lower their CDRs. Specifically, we selected a nongeneralizable sample of 9 of the 48 default management consultants on file with Education as of December 2016. To select the 9 consultants, we obtained lists of client schools from Education and reviewed websites for each of the 48 consultants to determine the services each company offered. Some companies offered an array of services to schools, while others focused exclusively on default management. We selected our nongeneralizable sample of 9 consultants by prioritizing those with large numbers of client schools, those with a specific focus on default management, or those with unique default management practices based on our review of their websites. These 9 companies served over 1,300 schools. These schools accounted for over 1.5 million borrowers in the 2013 CDR cohort. We reviewed documentation from the 9 consultants on the strategies they use to reduce borrower defaults during the CDR period; their organizational structure; products and services offered; current client schools; internal training materials; contracts and agreements with schools; methods of compensation for employees responsible for outreach to student loan borrowers; internal policies and procedures; and examples of correspondence (e.g., emails, letters, and repayment applications) with borrowers. Based on the information received from these consultants, we cannot determine how many borrowers were contacted or received correspondence from these companies. However, the consultants we spoke to generally indicated that the materials they provided to us were used for all or most of their school clients. 
To learn more about the strategies schools and default management consultants use to help schools manage their CDRs, we conducted interviews with managers at 4 of the 9 consultants. We also interviewed employees responsible for working with student loan borrowers to discuss the procedures they use to contact or counsel borrowers on loan repayment options. We selected these 4 consultants by prioritizing those that provided default management services to large numbers of client schools, or had unique default management practices based on website reviews. Schools and Borrowers – Interviews and Document Requests To determine how schools work with borrowers to manage schools’ CDRs we selected a nongeneralizable sample of 12 schools for review based on data from Education that suggested that they had successfully lowered their CDRs from the 2009 through 2013 cohorts through forbearance. This sample informed our selection of borrowers. We emailed borrowers who attended these 12 schools and requested interviews with them, and selected 3 of the 12 schools for interviews with school officials and document requests. To select the 12 schools, we analyzed CDRs for the 2009-2013 cohorts from Education’s Cohort Default Rate Database; 3-year forbearance rates for fiscal years 2009-2012 from Education’s Annual Risk Assessment data; and 3-year repayment rates for fiscal years 2009-2014 from Education’s College Scorecard data. We selected the 12 schools from the population that had a CDR calculated for 2013. We excluded schools whose 2013 CDR was calculated using a different formula that Education uses for schools with fewer than 30 borrowers entering repayment in a particular cohort. To be considered for selection, schools had to have had CDRs of 25 percent or above for cohort years 2009-2013 and also be in the following: 1) top 20 percent of year-to-year decreases in CDR; 2) top 20 percent of year-to-year increases in 3-year forbearance rates; or 3) top 20 percent of 3-year forbearances that resulted in default after the 3- year CDR period ended. This analysis resulted in a list of 312 schools, which we randomized within strata based on combinations of institutional control (public, nonprofit, and for-profit), maximum length of degree programs offered (less than 4-year or 4-year and above), and school size (fewer than 1,000 borrowers entering repayment in a given fiscal year and 1,000 or more borrowers entering repayment in a given fiscal year). We removed schools that had fewer than 1,000 borrowers entering repayment in a given fiscal year to mitigate the wide variations in forbearance rates and CDRs that may occur at smaller schools. Finally, we judgmentally selected a total of 12 schools from across the remaining strata, choosing the schools from each stratum in the randomized order. We conducted interviews with officials at 3 of these schools (public, nonprofit, and for-profit) and reviewed documentation on the strategies they use to reduce borrower defaults during the CDR period. To examine how default management strategies may affect borrowers, we obtained record-level data from Education’s National Student Loan Data System (NSLDS) related to the 12 schools we focused on in our review, including data on all loans that entered repayment from fiscal years 2011-2014 and contact information for the borrowers that took out these loans. We weighted the sample toward borrowers whose loans were in deferment, forbearance, or were consolidated during the CDR period or defaulted after the CDR period. 
We then randomly selected about 6,500 of these borrowers and emailed them a request to discuss their student loan repayment experience with us. We received replies from 49 borrowers and interviewed 11 of them that we thought may have been contacted by their school or a default management consultant. We generally selected borrowers for interviews in the order they replied to us. We also prioritized borrowers whose email responses included student loan experiences that were relevant to our objectives, such as receiving communication from their school about student loan repayment and postponement options. We were not able to interview borrowers who did not provide phone numbers or who provided phone numbers but did not respond to our calls. Data Analysis To determine how schools’ default management strategies affect borrowers and the CDR, we analyzed school-level data from Education on borrowers with loans that were included in schools’ official CDR calculations for the 2009 through 2013 cohorts. We selected the 2009 cohort because it was the first cohort held accountable for the 3-year CDR. The 2013 cohort was the most recent CDR available at the time of our analysis. We identified the year borrowers entered repayment using the same logic that Education does for calculating the CDR. A borrower with multiple loans from the same school whose loans enter repayment during the same cohort fiscal year was included in the formula only once for that cohort fiscal year. We excluded schools whose CDR was calculated using a different formula that Education uses for schools with fewer than 30 borrowers entering repayment in a particular cohort. For the population of 4,138 schools that had a CDR calculated for 2013 and a subset of 364 schools that had CDR decreases of 10 or more percentage points from the 2009 to 2013 cohorts, we analyzed cohort default rates (cohorts 2009-2013); the percentage of borrowers who were in forbearance for any length of time during their first 3 years in repayment (cohorts 2009-2013); the percentage of borrowers who were in forbearance for 18 or more months during their first 3 years in repayment (cohorts 2009-2013); the percentage of borrowers who paid down at least $1 of the principal loan amount during the first 3 years of repayment (cohorts 2009-2013); and the percentage of borrowers who were in forbearance for varying lengths of time during their first 3 years in repayment and then defaulted in the year following the CDR period (2013 cohort). We also calculated an alternative CDR for each of these 4,138 schools, in which we excluded borrowers who spent 18 or more months in forbearance during the 2013 cohort and did not default during the CDR period from their school’s CDR calculation. We analyzed how many schools would have potentially exceeded the 30 percent and 40 percent CDR thresholds for the 2013 cohort and calculated the total amount of Direct Loans and Pell Grants that these schools received in academic year 2016-2017. We did not estimate the number of schools that could become ineligible to participate in federal loan programs under this alternative methodology because such schools would be entitled to an appeal and sanctionable thresholds may change with the advent of new methodologies of calculating the CDR. Further, schools may change their default management strategies in response to an alternative CDR. 
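The school-level measures described in this analysis can be expressed as a short aggregation. The sketch below uses an illustrative record layout rather than the actual NSLDS extract and simplifies the repayment-rate measure to a flag for whether the borrower paid down at least $1 of principal during the window.

```python
import pandas as pd

# Illustrative layout: one row per borrower in a school's CDR cohort.
df = pd.DataFrame({
    "school_id":        ["A", "A", "A", "A", "B", "B", "B"],
    "cohort_fy":        [2013] * 7,
    "forb_months":      [0, 20, 6, 36, 0, 19, 2],   # forbearance months during the 3-year window
    "defaulted":        [True, False, False, False, True, False, False],
    "paid_down_dollar": [False, False, True, False, False, False, True],
})

metrics = df.groupby("school_id").agg(
    cdr=("defaulted", "mean"),
    pct_any_forbearance=("forb_months", lambda m: (m > 0).mean()),
    pct_forbearance_18_plus=("forb_months", lambda m: (m >= 18).mean()),
    repayment_rate=("paid_down_dollar", "mean"),
)
print(metrics.round(2))
```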
In addition, we assessed the CDR against government standards for internal control for identifying and responding to risks and goals and objectives in the Office of Federal Student Aid’s Fiscal Year 2015-2019 Strategic Plan. Additionally, we analyzed data from Education’s Integrated Postsecondary Education Data System on sector and program length for these 4,138 schools, as well as for certain subsets of these schools (for more information, see app. II). To assess the reliability of the data elements we analyzed for our study, we (1) performed electronic testing of required data elements; (2) reviewed existing information about the data and the systems that produced them; and (3) interviewed agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. Review of Education Documents and Relevant Federal Laws and Regulations To determine the extent to which Education oversees the strategies schools and their default management consultants use to manage schools’ CDRs and informs the public about its efforts to hold schools accountable, we reviewed relevant federal laws, regulations, guidance, and internal documentation from Education on how it oversees schools and default management consultants practices as they relate to the CDR and how it implements and reports CDR sanctions. To better understand how CDRs are used in Education’s oversight of schools, we reviewed relevant regulations and interviewed Education officials responsible for administering program review, recertification for eligibility for federal student aid, and oversight of the CDR including default prevention. We assessed Education’s oversight activities against goals and objectives in the Office of Federal Student Aid’s Fiscal Year 2015-2019 Strategic Plan, government standards for internal control for communicating with stakeholders, and Office of Management and Budget guidelines for disseminating public information. Interviews with Experts and Consumer Advocates To help us understand how the default management strategies used by schools and default management consultants affect borrowers and reported CDRs, we interviewed individuals with expertise on federal student loans. Specifically, we interviewed experts from federal agencies including the Consumer Financial Protection Bureau and Education’s Office of Inspector General. We also interviewed experts from the Association of Community College Trustees, the Career Education Colleges and Universities, the Center for American Progress, The Institute for College Access & Success, Harvard’s Project on Predatory Student Lending, the Illinois Attorney General Office, and Young Invincibles. Appendix II: Sector and Program Length of Schools with Selected Characteristics Appendix II: Sector and Program Length of Schools with Selected Characteristics Schools whose cohort default rates (CDR) were calculated using a different formula that Education uses for schools with fewer than 30 borrowers entering repayment in a particular cohort were excluded from this analysis. Schools were included in this analysis if their CDR decreased by 10 percentage points or more from the 2009 to 2013 CDR cohorts. Foreign schools include schools that are eligible to participate in the Direct Loan program and are located outside the United States. Appendix III: Number of Schools Subject to Department of Education Cohort Default Rate Sanctions, 1991-2017 Appendix IV: Comments from the U.S. 
Department of Education Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Kris Nguyen and Debra Prescott (Assistant Directors), Brian Schwartz (Analyst-in-Charge), Alex Galuten, Raheem Hanifa, John Karikari, Kirsten Lauber, Jeffrey G. Miller, John Mingus, Jeff Tessin, Khristi Wilkins, and Stephen Yoder made key contributions to this report. Additional assistance was provided by Susan Aschoff, Rachel Beers, James Bennett, Deborah Bland, Jason Bromberg, Alicia Cackley, Marcia Carlsen, David Chrisinger, William Colvin, Sheila McCoy, Arthur Merriam, Jessica Orr, Ellen Phelps Ranen, Phillip Reiff, Barbara Steel-Lowney, and Christopher Zbrozek.
Why GAO Did This Study As of September 2017, $149 billion of nearly $1.4 trillion in outstanding federal student loan debt was in default. GAO was asked to examine schools' strategies to prevent students from defaulting and Education's oversight of these efforts. This report examines (1) how schools work with borrowers to manage default rates and how these strategies affect borrowers and schools' accountability for defaults; and (2) the extent to which Education oversees the strategies schools and their default management consultants use to manage schools' default rates. GAO analyzed Education data on student loans that entered repayment from fiscal years 2009–2013, the most recent data at the time of this analysis; reviewed documentation from Education and a nongeneralizable sample of nine default management consultants selected based on the number of schools served (about 1,300 schools as of March 2017); reviewed relevant federal laws and regulations; and interviewed Education officials. What GAO Found According to federal law, schools may lose their ability to participate in federal student aid programs if a significant percentage of their borrowers default on their student loans within the first 3 years of repayment. To manage these 3-year default rates, some schools hired consultants that encouraged borrowers with past-due payments to put their loans in forbearance, an option that allows borrowers to temporarily postpone payments. While forbearance can help borrowers avoid default in the short-term, it increases their costs over time and reduces the usefulness of the 3-year default rate as a tool to hold schools accountable. At five of the nine selected default management consultants (that served about 800 of 1,300 schools), GAO identified examples when forbearance was encouraged over other potentially more beneficial options for helping borrowers avoid default, such as repayment plans that base monthly payments on income. Based on a review of consultants' communications, GAO found four of these consultants provided inaccurate or incomplete information to borrowers about their repayment options in some instances. A typical borrower with $30,000 in loans who spends the first 3 years of repayment in forbearance would pay an additional $6,742 in interest, a 17 percent increase. GAO's analysis of Department of Education (Education) data found that 68 percent of borrowers who began repaying their loans in 2013 had loans in forbearance for some portion of the first 3 years, including 20 percent that had loans in forbearance for 18 months or more (see figure). Borrowers in long-term forbearance defaulted more often in the fourth year of repayment, when schools are not accountable for defaults, suggesting it may have delayed—not prevented—default. Statutory changes to strengthen schools' accountability for defaults could help further protect borrowers and taxpayers. Education's ability to oversee the strategies that schools and their consultants use to manage their default rates is limited. Education's strategic plan calls for protecting borrowers from unfair and deceptive practices; however, Education states it does not have explicit statutory authority to require that the information schools or their consultants provide to borrowers after they leave school regarding loan repayment and postponement be accurate and complete. As a result, schools and consultants may not always provide accurate and complete information to borrowers. 
Further, Education does not report the number of schools sanctioned for high default rates, which limits transparency about the 3-year default rate's usefulness for Congress and the public. What GAO Recommends Congress should consider strengthening schools' accountability for student loan defaults and requiring that the information schools and consultants provide to borrowers about loan repayment and postponement options be accurate and complete. GAO recommends that Education increase transparency of reporting on default rate sanctions. Education agreed with our recommendation.
Background FEHBP was established primarily to help the government compete with private-sector employers in attracting and retaining talented and qualified workers. As indicated by the legislative history of the original FEHBP statute, lawmakers wanted enrollees to exercise choice among various plan types and, by using their own judgment, select health plans that best meet their specific needs. While participation in FEHBP is voluntary, in 2015, 85 percent of federal workers and 90 percent of federal retirees were enrolled in the program. Each FEHBP carrier offers one or more plans, and these plans can have up to three options, or levels of benefits, depending on which type of plan is being offered. Although they may differ in the specific benefits they provide, all FEHBP plans cover basic hospital, surgical, physician, emergency, and mental health care, as well as childhood immunizations and certain prescription drugs. However, FEHBP plans offer different levels of benefits, with many plans offering a choice between a more expensive plan option, which offers a higher level of coverage, and a less expensive plan option, which offers a lower level of coverage. FEHBP enrollees can purchase individual or family coverage. Beginning in 2016, enrollees could also purchase coverage for themselves and one eligible family member, referred to as “self plus one” coverage. FEHBP enrollees can change health care plans during an annual open enrollment period or at other times if they experience a qualifying life event, such as a change in family status. OPM data indicates that between 2005 and 2015, the annual percentage of FEHBP enrollees who changed their plan enrollment by choice—rather than because of mergers or plan terminations—ranged from 5 to 7 percent. The FEHBP statute limits the program to four specific plan types: (1) one service benefit plan—a government-wide plan with two levels of benefits; (2) one government-wide indemnity benefit plan; (3) employee organization plans; and (4) comprehensive medical plans—also known as HMO plans. OPM generally refers to these plan types as either FFS plans (the service benefit plan and the employee organization plans) or HMO plans (comprehensive medical plans). Within the categories of FFS and HMO plans, there can be significant variation in the plan designs and enrollee cost sharing. Most FFS plans have PPO arrangements, which usually have lower out-of-pocket expenses (i.e., a smaller copayment and/or a reduced or waived deductible) when enrollees use providers within the plan’s preferred network. Compared with HMOs, PPOs typically offer their enrollees a greater choice of providers and have less plan management of the care that enrollees receive. HMOs provide or arrange for comprehensive health care services on a prepaid basis through designated plan physicians, hospitals, and other providers in particular locations. Each HMO sets a geographic area for which health care services will be available. Some HMOs offer a point of service product that offers FEHBP enrollees the choice of using a designated network of providers or using non-network providers at an additional cost. Additionally, in 2003 and 2005 respectively, FEHBP also began offering consumer-driven health plan (CDHP) and high-deductible health plan (HDHP) designs that are coupled with a tax-advantaged account to help enrollees pay for qualified medical expenses.
Any of the FEHBP plan types may be offered with a CDHP or HDHP design, and therefore CDHPs and HDHPs can be either FFS or HMO plans. Enrollees in typical CDHPs have responsibility for certain up-front medical costs, an employer-funded account that enrollees may use to pay these up-front costs, and catastrophic coverage with a high deductible. CDHP enrollees receive full coverage of in-network preventive care. HDHPs offer low premiums but higher deductibles and annual out-of-pocket limits combined with a tax-advantaged account. HDHPs can have first dollar coverage (no deductible) for preventive care and higher out-of-pocket copayments and coinsurance for services received from non-network providers. OPM is responsible for negotiating health benefits and premiums with FFS and HMO plans. Each year, OPM sends a letter to all approved and participating FFS and HMO plans—its annual “call letter”—to solicit proposed benefit and premium changes for the next calendar year, which are due by the end of May. The descriptions of both covered and excluded benefits are incorporated into the final contracts. Each plan subsequently prints brochures describing the benefits and costs according to a standard format, as specified by OPM. The brochures are binding statements of benefits and exclusions that plans must follow as parties to FEHBP contracts. Those plans meeting the minimum requirements specified in the statute and regulations may participate in the program and their contracts may be automatically renewed each year. The federal government and FEHBP enrollees generally each bear a portion of the cost of FEHBP plan premiums. By statute, the government generally pays 72 percent of the weighted average premium of all health benefit plans participating in FEHBP, but no more than 75 percent of any particular plan’s premium, while enrollees pay the balance. Premium prices vary across plans and within plans and depend on whether an enrollee is enrolled in self-only, family, or self plus one coverage. The premiums are intended to cover enrollees’ health care costs, plans’ expenses, reserves, and OPM’s administrative costs. Although there has been some minor fluctuation in the number of FEHBP enrollees over time, total program enrollment has remained around 8 million enrollees since 2000. As the Congressional Research Service has reported previously, FEHBP enrollment is concentrated among a small number of carriers and BCBSA has the largest share of total program enrollment by far. See figure 1 for the total FEHBP enrollment and enrollment market share of the top five carriers in the program from 2000 through 2015. The Number of Available FEHBP Plan Offerings Increased Since 2007 and Enrollment Was Increasingly Concentrated Available FEHBP Plan Offerings Generally Increased in Recent Years, although Variation Existed among Counties The number of plan offerings available to FEHBP enrollees generally increased from 2007 through 2015. In 99 percent of counties nationwide, enrollees had more plan offerings in 2015 than they had in 2007. The median number of plan offerings available in a county increased from 19 in 2007 to 24 in 2015. Most of these offerings were the nationwide FFS plans that are available in all counties. There were 17 such plan offerings in 2007 and 19 in 2015. The remaining plan offerings were HMOs that were available in more limited areas.
While the total number of HMO plans that participated in FEHBP decreased from 2007 through 2015, the median number of HMO plan offerings in a county increased. This suggests that those HMO plans in FEHBP in 2015 generally participated in more counties than was the case in 2007. (See table 1 for a comparison of plan offerings in 2007 and 2015.) Despite the increase in the median number of HMO plan offerings available in a county, there was wide variation in the number of HMO offerings available to enrollees in a given county. For example, while FFS plan offerings were available nationwide, in some counties enrollees had no HMO plan offerings. Since 2007, however, the percentage of counties without any HMO plan offerings available declined from 18 percent to less than 2 percent in 2015. Most counties had a couple of HMO plan offerings, and some counties had at least 10 HMO offerings. For example, in 2015, enrollees in one county in New York had 15 HMO plan offerings, giving enrollees a total of 34 offerings from which to select coverage. (See fig. 2 for the range of available HMO plan offerings among counties across all years.) Regarding reasons for the variation in available FFS and HMO plan offerings, OPM officials told us that plans participating in FEHBP enter and withdraw based on internal business decisions and often in response to changing economic conditions. For example, according to OPM officials, some plans may enter the program with the expectation of gaining a target market share. OPM officials also noted that decreases in plan participation in the past may have been a response to premium increases that impacted plans’ ability to effectively compete. In addition, a 2012 OPM report noted that many prominent HMO plan carriers have reduced the number of states in which they participated since 1985. Market Share Held by the Largest FEHBP Carrier in Each County, Generally BCBSA, Increased from 2000 through 2015 FEHBP enrollment within counties generally became more concentrated from 2000 through 2015, although most of that growth occurred prior to 2007. The share of the market held by the largest carrier increased from a county median of 58 percent in 2000 to 70 percent in 2007, to 72 percent in 2015. Similarly, the combined median county market share of the three largest carriers increased from 86 to 90 percent over the same time period. However, we observed that the median market share held by the second and third largest carriers generally decreased over time. This suggests that the increases in combined market share held by the three largest carriers were generally due to increases observed in the single largest carrier. Although there was little change in the median county market share of the top five carriers, these carriers accounted for nearly all enrollments in a county in each of the years we examined. (See fig. 3 for a comparison of the market share held by the three largest carriers over time.) We found that these increases in concentration were widespread. Overall, from 2000 through 2015, almost 90 percent of counties experienced an increase in the market share held by the largest carrier. Over this period, the percentage of counties in which the largest carrier held at least half of the market also increased—from 70 percent in 2000 to 93 percent in 2015. Additionally, the proportion of counties where at least 80 percent of the market share was held by the top three carriers increased from about 76 percent of counties in 2000 to 94 percent of counties in 2015.
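The county-level concentration measures described above can be illustrated with a short computational sketch. The enrollment records and column names below are hypothetical; GAO's analysis used OPM county-level enrollment data.

import pandas as pd

# Hypothetical county-level enrollment records (column names are assumptions).
enrollment = pd.DataFrame({
    "county":    ["A", "A", "A", "B", "B"],
    "carrier":   ["BCBSA", "GEHA", "Kaiser", "BCBSA", "GEHA"],
    "enrollees": [700, 200, 100, 450, 50],
})

# Each carrier's share of total enrollment within its county.
county_totals = enrollment.groupby("county")["enrollees"].transform("sum")
enrollment["share"] = enrollment["enrollees"] / county_totals

# Market share held by the largest carrier in each county, then the
# median of that measure across counties.
largest_share = enrollment.groupby("county")["share"].max()
print(largest_share)
print(f"Median largest-carrier share across counties: {largest_share.median():.0%}")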
(See fig. 4 for maps showing the market share of the largest carrier in each county in 2000 and 2015.) Similar to the combined median county market share of the top five carriers, nationwide FFS plans’ combined median county market share accounted for almost all FEHBP enrollment and showed a slight increase from 97 percent in 2000 to 99 percent in 2015, although variation existed in some counties. Comparatively, the combined median county market share held by HMO plans decreased from 6 percent to 2 percent. In addition, in each year since 2000, 16 to 30 percent of counties had all of their FEHBP enrollment in FFS plans, and, in years for which we had HMO plan availability data, almost all of these counties offered at least one HMO plan offering. At the same time, we observed a small number of counties each year where HMO plans’ combined market share was at least 50 percent. BCBSA was the largest carrier in almost all counties nationwide and the share of these markets held by its two nationwide FFS plan options increased from 2000 through 2015. While BCBSA was already the largest carrier in 93 percent of counties in 2000, by 2015 it was the largest in 98 percent of counties. Over this same time period, the median county market share held by BCBSA also increased—from 58 percent in 2000 to 72 percent in 2015. Most of BCBSA’s 14 percent market share increase occurred between 2000 and 2008. Other carriers had significantly smaller median county market shares, but they had the highest share in a certain limited number of counties. The Government Employees Health Association, Inc. (GEHA), another carrier offering nationwide FFS plans, had the second highest program-wide market share in 2015, and an 8 percent median county market share. GEHA held the second or third largest market share in 77 percent of counties in 2015, reaching as high as 65 percent of the county market share, for example, in a county in Texas, but was the largest carrier in less than 1 percent of counties. Kaiser Permanente—which offers HMO plans—was the third largest carrier program-wide in 2015 and held the largest market share among HMOs (6 percent), though its market share decreased slightly over time. In counties where a Kaiser Permanente plan was available in 2015 (fewer than 200 out of more than 3,000 counties nationwide), those plans had a median county market share of 8 percent; however, in some counties Kaiser Permanente plans held a larger market share, for example, reaching as high as 64 percent in one county in California. In counties where Kaiser Permanente plans were available in 2015, it was the largest carrier 8 percent of the time and the second or third largest carrier in a majority of cases. (See table 2 for a description of market share and position for the three carriers with the largest program-wide market share within FEHBP.) BCBSA’s increased FEHBP market share may be due to a number of factors. For example, officials from several FEHBP carriers told us that BCBSA’s market share performance was tied to several factors, including brand recognition, comparably favorable plan premiums, and enrollee population characteristics. According to an OPM report, another factor contributing to BCBSA’s increased market share was the introduction of the Basic option to the Service Benefit Plan in 2002. 
Compared to its Standard option, this nationwide FFS plan option restricts enrollees to a more narrowly defined provider network (with some limited exceptions) and offers lower premiums, thereby broadening BCBSA’s ability to compete with other lower cost plans. As shown in table 3, while program-wide enrollments in BCBSA’s nationwide FFS plan options have increased by 32 percent following the introduction of the Basic option, enrollments in the Standard option decreased, suggesting that enrollees are shifting to the Basic option or plans offered by other carriers. In addition, a study published in 2012 noted that BCBSA market concentration was the possible outcome of the carrier’s established provider network and lower relative administrative costs. For examples of BCBSA’s and other carriers’ premiums, plan offerings, and market shares in select counties in 2015, see appendix I. FEHBP Market Share Concentration among the Largest Carriers Was Generally Similar to the Large Group Market and More Concentrated than Medicare Advantage The combined market share for the three largest FEHBP carriers in a state was generally similar to the large group market and higher than Medicare Advantage. As shown in figure 5, in 2014, the median state market share for FEHBP was 89 percent compared to 90 percent in the large group market and 74 percent for Medicare Advantage. In addition, the range of state market shares held by the three largest carriers in Medicare Advantage and the large group market (69 and 62 percentage points, respectively) was wider than in FEHBP (23 percentage points). However, programmatic differences between the three selected markets, such as varying enrollee demographics, market sizes, and program designs, make it difficult to draw conclusions about these contrasting market trends. For each market and each year, the 50 states and the District of Columbia were ranked from highest to lowest market share for the combined three largest carriers in each state and then divided into four groups based on those rankings. FEHBP enrollment data could not be separated from the overall large group market data used to calculate state-level market share in prior GAO reports. In 2014, we estimated that FEHBP plans accounted for about 20 percent of the 44 million total enrollments in the large group market nationally. Compared to Medicare Advantage and the large group market, the largest carrier in a state generally held a larger share of the market in FEHBP. For example, in 2014, the median market share held by the largest carrier in a state was higher in FEHBP (75 percent) than in both Medicare Advantage (35 percent) and the large group market (59 percent). Stakeholder Opinions and Cost Estimates Do Not Offer Clear Consensus about the Potential Effects of Expanding OPM’s Contracting Authority Stakeholders Generally Supported Expanding OPM’s Authority, but Said Using That Authority to Add Regional PPO Plans Could Have Negative Effects Seven of the 10 stakeholders we interviewed who commented on OPM’s contracting authority generally supported expanding that authority to allow OPM to contract with a greater variety of health plan types than are currently offered in FEHBP. Stakeholders we interviewed that offer HMO plans generally supported this expansion. However, the 2 stakeholders that offer nationwide FFS plans and 1 stakeholder that represents federal employees opposed it.
Most of the concerns expressed by these 3 stakeholders were related specifically to the potential effects of OPM adding regional PPO plans to FEHBP. Five of the seven stakeholders we interviewed who supported expanding OPM’s contracting authority said that adding additional plan types could result in both positive and negative effects. In terms of positive effects, one stakeholder said the authority could potentially allow OPM to offer different types of plans—such as value-based plan designs and accountable care organizations—that could lead to improved benefit options and health outcomes for enrollees. One stakeholder also told us that OPM’s expanded authority would enable the agency to improve transparency by allowing plans to contract with OPM as the type of plan they actually are, rather than fitting into outdated statutorily established categories, which the stakeholder characterized as an “antiquated labeling system.” Another stakeholder said that participation by new plans in FEHBP would foster competition and help keep health plan costs down. One stakeholder also noted that if plan expansion would only be undertaken when it is in the best interests of FEHBP and its enrollees— as OPM has indicated would be the case—there was little or no downside to such expanded authority. Additionally, in April 2013, three FEHBP carriers that offer HMO plans sent a letter to Congress in favor of expanding OPM’s authority, citing that it would “ensure OPM has the tools it needs to lower costs and provide federal workers access to innovation, choice, and value” and would allow more competition in the program. Some stakeholders we interviewed, however, suggested that any positive effects of expanding OPM’s authority and adding new plan types could be limited due to other aspects of FEHBP that affect competition and discourage participation by carriers. In particular, these stakeholders cited concerns related to costs associated with FEHBP enrollees who are Medicare-eligible but who do not enroll in Medicare, and the formula that determines the government’s contributions to enrollee premiums. According to these stakeholders, this creates unfair competitive advantages for the nationwide plans and BCBSA in particular. They also cited FEHBP’s system for assessing the performance of participating carriers, which they said discourages competition and participation by carriers in FEHBP, particularly for certain HMO plans. OPM reported that it was open to considering some program changes related to these concerns; however, some proposed changes could require changes to the FEHBP statute. For more information about stakeholder comments regarding these other aspects of FEHBP, see appendix II. Some of the 10 stakeholders we interviewed and who commented on OPM’s contracting authority also identified other potential negative effects that could occur with expanding OPM’s contracting authority. For example, 1 stakeholder said that an increase in plan types offered could lead to a subsequent increase in OPM’s administrative costs. In addition, several of these stakeholders said that adding more plans to FEHBP would exacerbate an existing problem of choice overload for enrollees. One of the stakeholders said that FEHBP enrollees are already confused by the number of available plan offerings, and that the current information provided to enrollees does not allow for easy comparison of their choices. They noted that additional expansion of offerings will only complicate enrollees’ plan analysis. 
Consistent with these concerns, studies that we reviewed related to consumer choice and decision-making processes in health insurance markets suggest that adding additional plans may not always yield positive effects or improve competition. For example, a 2016 report by the RAND Corporation found that health insurance consumers are unlikely to change plans, even as better choices become available. Additionally, a 2009 study examining the Swiss health insurance market similarly found that as the number of choices offered to individuals grows their willingness to switch plans declines. The study found persistently low rates of plan switching despite high variation in premiums between plans, and found that more choice inhibited plan switching. It concluded that having a large number of plans to choose from likely reduces the effectiveness of consumer decision making, and that simplifying health plan decision making by reducing the number of choices might result in more price competition among insurers, and benefit consumers. Additionally, 6 of the 10 stakeholders we interviewed and who commented on OPM’s contracting authority said that there would potentially be negative effects if OPM were to use the expanded authority to add regional PPO plans to FEHBP. For example, 5 of these 6 stakeholders said there could be instability and higher premiums in FEHBP if new regional PPO plans were able to “cherry pick” low cost areas in which to participate. This was of particular concern to 1 of the 2 stakeholders we spoke to who offer nationwide plans. Because they offer the same premiums nationally, they said the lower-cost areas of the country help subsidize the premiums of the higher-cost areas. If these nationwide plans lost customers in lower-cost areas to regional PPO plans, then their premiums would likely rise. These 2 stakeholders and a third said, therefore, that adding regional PPO plans could result in nationwide carriers discontinuing their coverage due to their inability to compete with regional plans. According to 1 stakeholder that offers a nationwide FFS plan, if the nationwide carriers dropped out of the program, plan offerings would be significantly reduced in certain areas of the country and some areas could potentially be left with no offerings at all. Additionally, in 2014 and 2015, six nationwide FEHBP carriers, including the two we interviewed, sent letters to Congress expressing their opposition to legislation that would add new plan types in FEHBP. In the letters, they cited negative effects such as program destabilization, increased premiums, and fewer consumer choices—all of which were specifically tied to the proposal to add regional PPO plans to FEHBP. Two of the 10 stakeholders we interviewed and who commented on OPM’s contracting authority, however, said that adding regional PPO plans to FEHBP would have positive effects. For example, 1 of these stakeholders that offers HMO plans and referred to FEHBP’s plan type labels as antiquated noted that this would enable them to promote their existing FEHBP products—currently categorized as HMO plans—more appropriately as regional PPO plans. This stakeholder said the current categorization causes enrollees to erroneously believe their plans are more restrictive than the plans listed as nationwide FFS plans. 
When we shared these stakeholder concerns about expanding OPM’s contracting authority with OPM officials, they told us that the agency has existing strategies and is working towards implementing additional ones, which officials said should allow it to address many of these concerns. For example, OPM officials said in January 2017 that the agency was in the process of building models that would allow it to simulate the impact that adding new plan types would have on FEHBP, but that the agency is still years away from being able to make such assessments. The officials said that the agency would only seek to introduce new plan types that it determines to be in the best interests of FEHBP enrollees and the federal government. With regards to enrollee confusion over the number of plan choices, the OPM officials said that the agency is improving the tools enrollees can use to learn about the available plans. For open season in 2016, the agency released what it considers to be a new and improved Plan Comparison Tool on its website that enables enrollees to gain more knowledge about their health plan options before making a selection. According to the officials, some of the improved functions of the tool include more details about the plan benefits and services, clearer definitions of the health insurance terms, and easier ability to compare the plans. Officials also told us that they expect to make more improvements to the tool in future years based on feedback from the FEHBP enrollees who use it. OPM officials said the agency would continue existing plan negotiation strategies that, among other things, would prevent plans from “cherry picking”—that is, offering products in only the most profitable service areas—by ensuring that new carriers provide services in contiguous regions that include both low- and high-cost areas. Additionally, related to the concern that nationwide plans might withdraw from the program if regional PPO plans were introduced, OPM officials noted that if, for example, BCBSA were to cancel its nationwide plan options, another carrier might step up to gain the service benefit plan designation and provide nationwide service. Estimates of the Financial Effects of Expanding OPM’s Contracting Authority Differed on Whether Costs Will Increase or Decrease We identified three significantly differing estimates of the financial effects on the federal budget that expanding OPM’s FEHBP contracting authority would have. However, these estimates are based on different assumptions about a variety of factors such as premium changes, administrative costs, and enrollment, and only limited information was available about the methodologies used for each set of estimates. It is also important to note that the assumptions used in developing these estimates are subject to professional judgment and have inherent uncertainty regarding whether the assumed scenarios will be realized. The three estimates include: The President’s Budget for fiscal year 2017 estimated that expanding FEHBP to a greater variety of plan types would save $88 million from 2017 through 2026. According to information provided by OPM, the estimate considered the effect of a broad expansion of OPM’s authority to add new plan types, and OPM did not indicate whether the agency specifically considered the effect of adding regional PPOs to FEHBP when developing this estimate. 
OPM officials told us that these savings were based on a number of assumptions, including an estimate of the number of enrollees that will migrate to new plan types based on previous FEHBP experience and projecting a medical loss ratio of 90 percent for the new plan types added to FEHBP. However, in follow-up with the agency, OPM officials were not able to provide us with more detailed information about how these savings were calculated. The Congressional Budget Office, in its analysis of the budget proposal, estimated a range from $50 million in savings to $50 million in costs over the 10-year period. A 2014 study from the Center for Health and Economy that examined the effects of introducing regional PPOs to FEHBP across three scenarios estimated cost savings ranging from $1.2 to $2.1 billion over 7 years (2015 to 2021). The study provided limited information about the data, assumptions, and methodology the center used to develop its estimates. The study did explain that the center modeled the projected impact on enrollment, average premiums, and the federal budget of adding regional PPOs to FEHBP using three different sets of assumptions about how expensive the newly introduced regional PPO plans would be. Under each scenario, the center estimated shifts over time in enrollment from existing FEHBP plan designs (FFS, HMO, CDHP, and HDHP) to the new PPO plans— and assumed that these new plans would achieve 10 percent of the market share throughout the analysis period. The study also projected decreases in average FEHBP premiums and a corresponding reduction in total government contributions in each scenario. A December 2013 study conducted by Avalere Health at BCBSA’s request specifically examined the effect of adding regional PPOs into FEHBP and estimated an increase in spending of $7.8 billion over 10 years (2014 to 2023). In developing its estimates, the study noted that it assumed that the BCBSA national plans dissolve and would break into regional plans in response to new regional plan competition. The study stated that the $7.8 billion in increased costs was based on an assumption that both regional PPOs and BCBSA regional plans would have higher administrative costs as compared to BCBSA’s national plans. The study estimated that these costs would be offset slightly by an initial anticipated decrease in premiums resulting from new plans introducing competition into these regions. Agency Comments We provided a draft of this product to OPM for comment. The agency did not provide any comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to OPM and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Federal Employees Health Benefits Program (FEHBP) Plan Attributes for Selected Counties in 2015 In table 4, we present information about a selection of counties that reflect a range of FEHBP attributes, but which are not intended to be a representative sample of all counties. 
We chose counties with a range of total enrollments, market shares held by different plan offerings (with different enrollee premiums), and number of health maintenance organization (HMO) plan offerings. Appendix II: Stakeholder Opinions about Other Federal Employees Health Benefits Program Aspects That Affect Competition Some of the stakeholders we interviewed suggested that any positive effects of expanding the Office of Personnel Management’s (OPM) contracting authority and adding additional plan types to the Federal Employees Health Benefits Program (FEHBP) could be limited because of other aspects of the program that affect competition and discourage carrier participation. In particular, stakeholders cited concerns related to: Medicare-eligible enrollees, the government contribution formula for FEHBP premiums, and FEHBP’s plan performance assessment system. Medicare-eligible enrollees. Six of the 11 stakeholders we interviewed suggested that problems associated with Medicare-eligible enrollees negatively affect FEHBP premiums, and 5 of the 6 noted these problems create an unfair competitive advantage for the nationwide FEHBP plans. These stakeholders suggested that because certain older, Medicare-eligible FEHBP enrollees tend to incur higher health care costs, they drive up premiums. Some stakeholders noted that plans—in particular, health maintenance organizations (HMOs) that offer service in areas with higher concentrations of older enrollees—experience challenges keeping premium rates competitive with the nationwide plans like those offered by the Blue Cross Blue Shield Association (BCBSA). Additionally, 3 stakeholders we interviewed that offer HMO plans pointed specifically to costly retirees who opt not to enroll in Medicare coverage of outpatient services, known as Part B, making it difficult for them to compete. FEHBP retirees eligible to enroll in Medicare are not required to do so, and some maintain only their FEHBP coverage instead. While there is no penalty for choosing not to enroll in Medicare, retirees who later decide to enroll in Part B must pay a late enrollment penalty. For retirees in FEHBP who choose not to enroll in Medicare, their FEHBP plan remains the primary payer and they continue to receive the same level of coverage through that plan as they did prior to becoming eligible for Medicare. Two stakeholders said that charging the same rates to the retiree population without Part B and the active employee population—a scenario that occurs in FEHBP—is not typical of the private, commercial insurance market. In a recent publication, one of the stakeholders we interviewed reported that these types of issues have been a problem for FEHBP since its inception, and that it is therefore in the interest of every enrollee to join plans with the lowest proportion of high-cost retirees. The stakeholder noted that this distorts plan selection and alters results; for example, while the Kaiser plans on the West Coast do an outstanding job of keeping costs low for enrollees, they have a disproportionate number of retirees who correctly understand that they do not need to enroll in Medicare. According to the stakeholder, this puts Kaiser at a disadvantage since it has to cover the age-related costs of these enrollees. Stakeholders we interviewed offered a number of potential solutions for OPM to address these challenges.
For example, two stakeholders suggested that OPM could introduce some form of risk adjustment into FEHBP to assist plans that have a disproportionate number of Medicare-eligible enrollees. Risk adjustment provides a way to correct for imbalances that occur when some carriers attract a larger share of enrollees at low risk for expensive claims and other carriers attract a larger share of enrollees at high risk for expensive claims. One of the two stakeholders suggested that FEHBP could introduce a budget-neutral risk adjustment program that adjusts the amount of a plan’s premium that is paid by the government based on a plan’s ratio of age-65 retirees with Medicare (Parts A, B, or both) to those without. The stakeholder said that this would greatly improve plan competition over time. OPM officials agreed with stakeholders that providing nationwide service is an advantage for carriers like BCBSA in high cost areas, but noted that it is a disadvantage in low cost areas, and said that, similarly, a lack of risk adjustment in the program works both in favor of and against BCBSA and HMOs. OPM officials said, for example, that the BCBSA Standard option would likely benefit from risk adjustment, while the BCBSA Basic option would likely be negatively impacted. OPM officials also said that risk adjustment could be a way for the agency to compensate plans that have enrollees with higher than average risk and to improve competition by discouraging plans from avoiding those higher risk enrollees. However, officials noted that risk adjustment would require the agency to have reliable claims-level data from each of the plans, which the agency does not have. In January 2017, OPM officials said that the agency is in the process of collecting claims data from FEHBP carriers and expects to have a sufficiently reliable data set by July 2018. OPM officials also noted that before implementing any form of risk adjustment in FEHBP they would have to use that data to determine the effects on the program, and they would also need to determine whether doing so would require any legislative changes. Some stakeholders we interviewed also suggested retirees could be incentivized to enroll in Medicare Part B (for example, by waiving the Medicare Part B late enrollment fee for FEHBP retirees, or by having FEHBP plans subsidize Part B premiums), and two stakeholders went so far as to suggest that Medicare enrollment should be required for those eligible. OPM officials said that they already encourage enrollment in Medicare Part B; in particular, they noted that in their annual call letters they have encouraged carriers to offer benefits in their plans that incentivize Medicare enrollment for eligible FEHBP enrollees. However, OPM officials said that they are open to pursuing additional approaches that would encourage FEHBP retirees to fully participate in Medicare coverage. The government contribution formula for FEHBP premiums. Five of the 11 stakeholders we interviewed suggested that the government contribution formula for FEHBP premiums negatively impacts program competition. The FEHBP statute establishes the amount the government contributes towards the costs of FEHBP plan premiums. By statute, the government pays an amount equal to 72 percent of the weighted average premium across all FEHBP plans, but no more than 75 percent of any particular plan’s premium. Enrollees generally pay the remaining premium.
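The statutory formula described above can be illustrated with a brief sketch; the premium amounts are hypothetical and are chosen only to show how the 75 percent cap produces a 25 percent enrollee share for lower-premium plans and a share above 28 percent for plans priced above the weighted average.

def government_contribution(plan_premium, weighted_avg_premium):
    # 72 percent of the program-wide weighted average premium,
    # capped at 75 percent of the particular plan's premium.
    return min(0.72 * weighted_avg_premium, 0.75 * plan_premium)

WEIGHTED_AVG = 10_000  # hypothetical program-wide weighted average annual premium
for plan_premium in (8_000, 10_000, 13_000):  # hypothetical low-, average-, and high-cost plans
    gov = government_contribution(plan_premium, WEIGHTED_AVG)
    enrollee = plan_premium - gov
    print(f"Plan premium ${plan_premium:,}: enrollee pays ${enrollee:,.0f} "
          f"({enrollee / plan_premium:.0%} of the premium)")
# Under these assumptions: 25% for the $8,000 plan, 28% for the $10,000 plan,
# and about 45% for the $13,000 plan.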
As such, enrollee contributions will generally be 25 percent for lower-premium plans and can be higher than 28 percent if their plan’s premiums are significantly higher than the weighted average FEHBP plan. Some stakeholders we interviewed noted that BCBSA has an advantage under the contribution formula, and that the existing formula does not incentivize enrollees to choose low cost plans. Two stakeholders noted that BCBSA’s large program market share—66 percent of total program enrollment in 2015—allows it significant influence over the government contribution amount. Therefore, several stakeholders suggested that BCBSA’s enrollees end up paying closer to the minimum of 25 to 28 percent of their premium’s costs. Conversely, other plans—particularly HMOs operating in high cost areas—may have premiums that are higher than BCBSA’s and the weighted program average, resulting in their enrollees having to pay considerably more than 28 percent of their total premium’s costs. Two stakeholders said that, as a result, carriers may exit the program once their premiums exceed the weighted program average. Additionally, two stakeholders we interviewed suggested that the formula does not incentivize enrollees to choose lower cost plans because the maximum government contribution amount is 75 percent— regardless of whether the plan’s premiums are less than the weighted FEHBP average. See table 5 for examples of how the government contribution formula affects the share of premiums that enrollees pay. Some stakeholders we interviewed proposed solutions to the concerns they identified with the government contribution formula. For example, two stakeholders suggested that the formula be changed so that plans that cost less than 72 percent of the weighted average would be covered either in full or to a greater extent by the government. They noted that this would incentivize enrollees to choose lower cost plan options and would strengthen the competitiveness of lower-cost plans—particularly as compared to the BCBSA options. One stakeholder also suggested that the government contribution formula could be varied by metropolitan regions (i.e., vary government and enrollee premium contributions based on regional health care costs), which they suggested would lead to more carriers introducing more plan offerings overall. While the government contribution formula is set in statute, OPM officials said that they are open to pursuing changes that would encourage FEHBP enrollees to select the health plans that meet their current and expected health care needs at affordable costs. FEHBP plan performance assessment system. Five of the 11 stakeholders we interviewed cited concerns with OPM’s system for assessing FEHBP plan performance, with 2 noting that it discourages competition and participation in FEHBP. OPM announced a new methodology for assessing plan performance in a letter to carriers in 2015, noting that the agency would use a discrete set of quantifiable measures to examine aspects of contract performance and link this performance assessment to the profit plans receive. OPM reported in the letter that it implemented performance assessment to move away from paying for procedures or services and towards paying for value and prevention of disease. It also noted that the system was intended to create a more objective performance standard and provide more transparency for enrollees. 
Stakeholders we interviewed, however, were particularly critical of the way in which community-rated plans are assessed in this new system, noting that plans are penalized rather than rewarded. Regulations specify a process by which OPM may withhold a portion of payments to a community-rated carrier based on plan performance thereby reducing the carrier’s profits. Two of these stakeholders said that the only way for a plan to not receive a financial penalty is to get a perfect score on the assessment and said that it is impossible to receive such a score. Therefore, one stakeholder noted that the system is extremely discouraging to carriers, particularly to new carriers considering joining FEHBP. Additionally, two stakeholders said that the measures used in the assessment—Healthcare Effectiveness Data and Information Set (HEDIS) and Consumer Assessment of Healthcare Providers and Systems (CAHPS) measures—favor certain types of HMOs. For example, one stakeholder noted that some carriers can have problems meeting the HEDIS measure for breast cancer screening rates, because they have to get patients to go to a separate mammography center while carriers that are part of more integrated health systems can offer mammograms in-house. With regard to how the plan performance assessment system could be improved, stakeholders we interviewed suggested that OPM should switch to a reward or incentive-based system for community-rated carriers. Several stakeholders suggested that OPM could implement a system similar to the Medicare Advantage star ratings system. In December 2016, OPM officials told us that they were listening to community-rated plans’ concerns regarding the performance assessment penalty and would consider adjustments to address these concerns. Then in March 2017, in response to some of these concerns, OPM issued a letter to FEHBP carriers proposing an update to the assessment of community-rated plans that would allow carriers with high-performing plans to avoid any financial penalties. Regarding the concern about the use of HEDIS and CAHPS measures, OPM officials said that these measures are well-established and commonly required by other commercial and government payers, such as Medicare Advantage. Nonetheless, OPM officials said that the plan performance system will continuously be improved through the introduction of new measures and the retirement of measures on which all FEHBP plans have achieved satisfactory performance. Appendix III: GAO Contact and Staff Acknowledgments GAO Contact: Staff Acknowledgments In addition to the contact named above, William Hadley (Assistant Director), Kristi Peterson (Assistant Director), Christina Ritchie (Analyst in Charge), Leonard Brown, William Garrard, Daniel Ries, and Said Sariolghalam made key contributions to this report. Also contributing were Sandra George, Emei Li, Yesook Merrill, Laurie Pachter, Vikki Porter, Jennifer Rudisill, Frank Todisco, and Merrile Sing. Related GAO Products Private Health Insurance: In Most States and New Exchanges, Enrollees Continued to be Concentrated among Few Insurers in 2014. GAO-16-724. Washington, D.C.: September 6, 2016. Private Health Insurance: The Range of Premiums and Plan Availability for Individuals in 2014 and 2015. GAO-15-687. Washington, D.C.: August 10, 2015. Private Health Insurance: Concentration of Enrollees among Individual, Small Group, and Large Group Insurers from 2010 through 2013. GAO-15-101R. Washington, D.C.: December 1, 2014. 
Federal Employees Health Benefits Program: Oversight of Carriers’ Fraud and Abuse Programs. GAO-14-39. Washington, D.C.: November 14, 2013. U.S. Postal Service: Proposed Health Plan Could Improve Financial Condition, but Impact on Medicare and Other Issues Should Be Weighed before Approval. GAO-13-658. Washington, D.C.: July 18, 2013. Federal Employees Health Benefits Program: Premium Growth Has Recently Slowed, and Varies among Participating Plans. GAO-07-141. Washington, D.C.: December 22, 2006. Federal Employees Health Benefits Program: First-Year Experience with High-Deductible Health Plans and Health Savings Accounts. GAO-06-271. Washington, D.C.: January 31, 2006. Federal Employees Health Benefits Program: Early Experience with a Consumer-Directed Health Plan. GAO-06-143. Washington, D.C.: November 21, 2005. Federal Employees Health Benefits Program: Competition and Other Factors Linked to Wide Variation in Health Care Prices. GAO-05-856. Washington, D.C.: August 15, 2005. Federal Employees’ Health Plans: Premium Growth and OPM’s Role in Negotiating Benefits. GAO-03-236. Washington, D.C.: December 31, 2002. Federal Employees’ Health Program: Reasons Why HMOs Withdrew in 1999 and 2000. GAO/GGD-00-100. Washington, D.C.: May 2, 2000.
Why GAO Did This Study FEHBP provides health care coverage to about 8 million federal employees, retirees, and their dependents through carriers that contract with OPM. The Federal Employees Health Benefits Act of 1959 limited the types of plans OPM could offer. OPM has reported that the program needs more competition between plans and more diverse health plan choices and has proposed that its contracting authority be expanded to allow a greater variety of types of health plans to participate in FEHBP than are currently allowed. GAO was asked to examine FEHBP plan participation and the potential impact of OPM adding new plan types to the program. This report describes, among other things: (1) how the number of plans and market shares of carriers participating in FEHBP changed in recent years, and (2) what is known about the potential effects of allowing OPM to contract with a greater variety of types of health plans than are currently offered. GAO requested OPM plan availability and enrollment data by county for 2000 through 2015, but county-level availability data were only available for 2007 and 2009 through 2015. Therefore, plan availability and market share analysis timeframes differ. GAO also interviewed OPM officials, 11 FEHBP stakeholders, such as carriers and federal employee and retiree organizations, and reviewed relevant documentation and research, such as cost estimates of the potential effects of expanding OPM's authority. GAO provided a draft of this product to OPM for comment. The agency did not provide any comments. What GAO Found Federal Employees Health Benefits Program (FEHBP) enrollees can choose from a number of health plan offerings depending on where they live. From 2007 to 2015, the median number of plan offerings available in a county increased from 19 to 24. Of the 24 plan offerings in 2015, 19 were available nationwide and 5 were health maintenance organization plans offered in specific geographic areas. Yet despite more available plan offerings in recent years, enrollment has become more concentrated within the largest health insurance carrier in a county. Specifically, the median share of enrollment held by the largest carrier in a county increased from 58 percent in 2000 to 72 percent in 2015. Further, one carrier—the Blue Cross Blue Shield Association—was the largest carrier in 93 percent of counties in 2000 and 98 percent of counties in 2015. The stakeholders GAO interviewed and the cost estimates GAO reviewed about the potential effects of expanding the Office of Personnel Management's (OPM) authority to contract with more plan types than currently offered in FEHBP did not offer clear consensus about the effects. Most stakeholders supported expanding OPM's authority; those opposed were primarily concerned about OPM adding regional preferred provider organization plans, saying this could cause program instability and higher premiums. Estimates by OPM and others differed significantly on whether the expansion would increase or decrease costs. This is because they used differing assumptions about premiums, enrollment, and other factors, and it is unclear whether the assumptions used in these estimates will be realized.
Background Fintech—originally short for financial technology—refers to the use of technology and innovation to provide financial products and services. For purposes of this report, fintech firms are nontraditional technology-enabled providers, such as start-ups or more established technology firms, such as Apple or Google, that are offering traditional financial products or services to consumers. Fintech products or services are typically provided—sometimes exclusively—through the Internet or via mobile devices, such as smartphones, rather than being provided through face-to-face visits to financial institution branches. The products and services that fintech firms offer include: payments between individuals, and between individuals and businesses; loans to consumers and businesses; advice on wealth management or general financial activities; and distributed ledger technology used to make payments, record and track asset ownership, and for other purposes. Fintech Payments Various fintech firms offer ways for individuals to make payments and transfer value, including for purchasing goods or services or for transferring money to individuals domestically or internationally. The payments offered by these providers are often conducted using applications (apps) on smartphones or other mobile devices. Often these fintech payments involve the use of accounts linked to existing debit or credit cards and are processed through the existing networks and channels for these types of payments. In some cases, fintech providers may also route their payments through the Automated Clearing House networks, which have traditionally been used to facilitate automatic bill paying to utilities or other merchants or funds transfers between banks. Fintech payments can also be made by charging a consumer’s phone bill. For example, consumers can send charity contributions via text or charge in-app purchases to their mobile phone bill. One common fintech payment method involves mobile wallets, or electronic versions of consumers’ wallets, which offer consumers the convenience of conducting transactions without having to enter credit or debit card information for each transaction. Using a mobile wallet, consumers can store payment card information and other information on their mobile devices that is often needed to complete a purchase. Generally, mobile wallets replace sensitive information with random values—a process called tokenization—to provide greater security when making a payment, and transmit this information using existing credit and debit card networks. A variety of fintech firms provide mobile wallets, including Apple, Google, and Samsung. Consumers may use mobile wallets to make payments to other consumers or to businesses; in mobile applications; through mobile browsers; or in person at a store’s point-of-sale terminal. Some providers, such as Paypal and Venmo, allow individuals to create accounts on mobile devices to make payments funded by debit or credit cards, as well as receive and store funds sent to the account owner that can be used to make payments to others or buy goods from merchants. Figure 1 illustrates how a mobile wallet enables the payment information to be transferred by allowing compatible devices to exchange data when placed in very close proximity to each other using various technologies, such as wireless communication.
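A minimal sketch of the tokenization concept described above follows. In practice, mobile-wallet tokens are issued and managed by payment networks and token service providers under industry standards; the simple in-memory vault and function names here are illustrative assumptions.

import secrets

# Simplified token vault mapping random surrogate values (tokens) to card numbers.
# Real token service providers add format rules, domain restrictions, cryptograms,
# and strict access controls; this only shows the basic idea.
_vault = {}

def tokenize(card_number: str) -> str:
    # The token is a random value with no mathematical relationship to the card number.
    token = secrets.token_hex(8)
    _vault[token] = card_number
    return token

def detokenize(token: str) -> str:
    # Only the vault holder can map a token back to the underlying card number.
    return _vault[token]

token = tokenize("4111111111111111")  # a standard test card number
print("Merchant and device handle only:", token)
print("Vault resolves the token to:", detokenize(token))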
Regarding the total volume of payments by fintech providers, the association representing state banking supervisors estimated that fintech payment firms were likely used to facilitate payments or currency exchanges of up to $189 billion in the first 2 quarters of 2017. In a 2016 report on consumers’ use of mobile financial services, the Federal Reserve’s survey of more than 2,220 respondents found that over 30 percent of consumers aged 18-44 had made payments using mobile phones sometime during 2015. According to a report by the Smart Payment Association, 200,000 locations accepted Apple Pay when it was launched in September 2014, but by February 2016, this number had reached 2 million. According to Paypal, it had 218 million active customer accounts at the end of the third quarter of 2017 and processed over 6 billion payments valued at more than $354 billion in 2016. Fintech Lending Fintech lenders—often referred to as marketplace lenders, which operate almost exclusively online—offer a variety of loan types and may use different sources of funds than traditional lenders. The types of loans offered by fintech providers include consumer and small business loans. While these lenders may use traditional means of assessing borrowers’ creditworthiness, such as credit scores, they also may analyze large amounts of additional or alternative sources of data on other aspects of borrower characteristics, such as information from bank accounts, to determine creditworthiness. Fintech lenders can follow various models. For example, some conduct person-to-person lending in which loans are financed by individual investors. In other cases, the funds for these loans can come from institutional investors, such as hedge funds and financial institutions, or from notes sold to individual investors. In some cases, funding for loans is obtained by securitizing previously made loans and selling securities backed by the cash flows from the underlying loans. The fintech lenders that use external capital are referred to as direct lenders and include such firms as SoFi and Earnest. Figure 2 below shows the flow of funds for typical direct lenders. Other fintech lenders include lenders that partner with depository institutions—including banks or credit unions—to originate loans that are then purchased by the lender or by another investor. Examples of lenders partnered with depository institutions include LendingClub Corporation, Prosper, and Upstart. Figure 3 shows the flow of funds for such lenders. Some lenders, such as OnDeck, have now developed hybrid models, selling some whole loans to institutional investors while retaining servicing responsibilities. One firm that tracks fintech activities reported that the volume of lending by 13 of the most significant lenders had reached about $61 billion as of the end of September 2016, and other market monitors estimate that fintech lending volumes could grow to as much as $90 billion to $122 billion by 2020. Fintech Wealth Management and Financial Advice Fintech firms are also offering wealth management or other financial advice, some with minimal or no human interaction. For example, new firms called robo-advisers are using algorithms based on investors’ data and risk preferences to provide advice on recommended asset holdings and allocations. Fintech firms offering these advice services include Betterment, Personal Capital, and Wealthfront. Figure 4 illustrates a typical case of a consumer using a fintech wealth management adviser.
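As a rough illustration of the kind of rule-based advice described above, the sketch below maps an investor's age and stated risk tolerance to a stock/bond split. It is a toy rule for illustration only and does not represent any particular firm's algorithm.

def recommend_allocation(age: int, risk_tolerance: str) -> dict:
    # Toy rule: start from an age-based stock/bond split, then tilt it
    # by the investor's stated risk tolerance.
    stock_share = max(0.2, min(0.9, (110 - age) / 100))
    tilt = {"conservative": -0.10, "moderate": 0.0, "aggressive": 0.10}[risk_tolerance]
    stock_share = max(0.0, min(1.0, stock_share + tilt))
    return {"stocks": round(stock_share, 2), "bonds": round(1 - stock_share, 2)}

print(recommend_allocation(age=35, risk_tolerance="moderate"))      # {'stocks': 0.75, 'bonds': 0.25}
print(recommend_allocation(age=60, risk_tolerance="conservative"))  # {'stocks': 0.4, 'bonds': 0.6}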
One research firm estimated in July 2017 that robo-adviser firms would have as much as $1 trillion in assets under management by 2020 and as much as $4 trillion by 2022. In addition, some fintech firms—referred to as financial account aggregators—allow consumers to aggregate the information from their various financial accounts, including their assets in bank accounts and brokerage accounts, to enable them to better see their financial health and receive advice on alternative ways to save money or manage their finances. Consumers can access this combined information either online or on mobile devices. Account aggregator firms offering this type of advice on savings and other activities include Mint and HelloWallet. Distributed Ledger Technologies Distributed ledger technology (DLT) is a secure way of conducting transfers of digital assets on a near real-time basis, potentially without the need for a central authority. DLT involves a distributed database maintained over a network of connected computers that allows network participants to share and retain identical cryptographically secured records. Such networks can consist of individuals, financial entities, or other businesses. Blockchain is one type of DLT. A blockchain is a shared digital ledger that records transactions in a public or private network. Distributed to all members in the network, the ledger permanently records, in a sequential chain of cryptographically secured blocks, the history of transactions that take place among the participants in the network. DLT products can have different types of access control. For example, some may be “unpermissioned” (public) ledgers that are open to everyone to contribute data to the ledger and have no central control, while others may be “permissioned” (private) ledgers that allow only certain participants to add records and verify the contents of the ledger. The financial services industry has identified various potential uses for DLT. These include tracking international money transfers or tracking changes of ownership of various financial assets, such as securities like bonds or stocks, or derivatives like swaps contracts. In addition, DLT is being used to track ownership of bitcoin, a virtual currency, specifically using a blockchain. Some companies are using DLT to raise funds. According to a recent bulletin by U.S. securities regulators, virtual coins or tokens are being created and then disseminated using DLT as part of offerings known as token sales or initial coin offerings. As part of these token sales, purchasers may use fiat currency (e.g., U.S. dollars) or virtual currencies to buy these virtual coins or tokens. Currently, the capital raised from the sales may be used to fund development of a digital platform, software, or other project, or the virtual tokens or coins may be used to access the platform, use the software, or otherwise participate in the project. After they are issued, in some cases the virtual coins or tokens may be resold to others in a secondary market on virtual currency exchanges or other platforms. Various Regulators May Oversee Fintech Activities A variety of federal and state regulatory bodies may oversee fintech firms or their activities to the extent these firms provide a regulated payment, lending, wealth management, or distributed ledger technology service or activity. Table 1 explains the basic functions of the relevant federal regulators.
In addition to the federal regulators above, various state entities also conduct regulatory activities over fintech firms operating within their jurisdictions. According to the association representing state regulators, state financial services regulators license and supervise activities, such as money transmission, consumer lending, and debt collection, irrespective of technology deployed. Nonbank financial service providers that offer services directly to consumers are likely subject to state oversight. In addition to state financial services regulators, state securities regulators, state entities that oversee corporate activities, and state attorneys general have jurisdiction over certain fintech firms. In general, these entities may have authority to license or register firms, conduct exams, and take enforcement actions for violations of state laws or regulatory requirements. Fintech Activities Can Provide Benefits and Pose Risks to Consumers and the Broader Financial System Fintech products in payments; lending; wealth management; and distributed ledger technology can provide consumers and the broader financial system with various benefits but may also pose risks similar to those of traditional products. While existing laws apply to fintech products and services in most cases, some products pose additional risks that may not be sufficiently covered by existing laws. Fintech Products Can Provide Various Consumer Benefits According to our prior work, literature we reviewed, and stakeholders we interviewed, consumer benefits of fintech products include greater convenience; lower cost; increased financial inclusion; faster services; and improved security. Greater convenience: Consumers can use fintech products and services on their mobile device to make payments; transfer money; easily obtain payment for shared expenses; obtain loans; or to receive investment advice without the time and expense of visiting a financial service provider’s physical location. They can also access these services outside of standard business hours. In addition, the ability to see information from all of their financial accounts together in a single dashboard provided by an account aggregator is more convenient than reviewing information from each account on separate statements. Lower cost: Innovations in payments, including the use of DLT, could reduce the cost of payments for consumers. For example, one fintech firm uses DLT to reduce the operational and liquidity costs traditionally incurred with some international payments. Some fintech providers do not charge fees for payments, so consumers save by avoiding paying for checks or incurring automated teller machine fees. In addition, because fintech providers often do not have overhead costs associated with physical locations and use automation instead of relying on large staffs to provide services, they may be able to pass these cost savings on to consumers. For example, according to a Treasury report, automated loan processing, underwriting, and servicing may allow fintech lenders to offer lower rates or fees on their loans because they have to hire fewer loan officers. Similarly, automation in robo-advising could allow consumers to obtain investment advice at a lower cost than if they obtained services from a firm that relied more heavily upon human advisers. Increased financial inclusion: Using alternative data may allow fintech lenders to offer loans to consumers whose traditional credit history may have been insufficient for banks to extend them credit. 
CFPB officials stated that using alternative data—including bill payment history as a proxy for debt repayment—could expand responsible access to credit, particularly to some consumers who are among the estimated 45 million people who lack traditional credit scores. Similarly, a study by FDIC staff noted that fintech accounts may also enable consumers whose traditional accounts are closed due to lack of profitability for the provider or other reasons to continue to have access to financial services. Also, robo-advising services can make investment advice more accessible to consumers who cannot meet account minimums at traditional advisers by offering lower account minimums. Faster services: Automation may reduce transaction times for services like loan approval or investment advice. Stored payment data in fintech providers' mobile wallets may reduce transaction time for online purchases because consumers do not need to reenter billing information. Further, such data may reduce transaction time for in-store purchases because transactions using contactless payments are faster than transactions using card readers and cash. Peer-to-peer payments made via mobile wallets may transfer money faster than checks. Also, using DLT may greatly reduce settlement times for currency, derivatives, and securities transactions by improving processes or reducing the number of entities involved in a transaction. For example, one firm is using DLT to reduce settlement times for securities from 2 days to the same day. Improved security: While credit and debit transactions have traditionally transmitted sensitive information that can be hacked and used to make fraudulent transfers, fintech providers' mobile wallets generally replace this sensitive information with randomly generated numbers that mitigate the risk that transaction information can be used fraudulently (tokenization), according to the Federal Reserve's Mobile Payments Industry Workgroup. Similarly, while lost or stolen credit and debit cards can be used to make fraudulent payments, a lost or stolen mobile device can have security features that protect a mobile wallet from unauthorized use. For example, according to FTC, mobile device features such as device passwords, fingerprint readers, and face recognition software can help protect consumer accounts from unauthorized access. Additionally, FCC notes in a consumer guide that consumers' ability to disable their mobile devices remotely can help prevent fraudulent use of a consumer's fintech provider accounts if their mobile devices have been lost or stolen. Further, mobile device Global Positioning System (GPS) data can help identify suspicious activity in consumer accounts or ensure that a mobile phone being used at a particular merchant is actually at that location, according to the Federal Reserve's Mobile Payments Industry Workgroup and others.

Fintech Products Generally Pose Consumer Risks Similar to Those of Traditional Products

The literature we reviewed and stakeholders we interviewed also identified potential risks fintech products pose to consumers, including fraud, discrimination, and unsuitable advice. In general, these risks are similar to those posed by traditional financial products. While laws that apply to traditional products also apply to fintech products in most cases, some fintech products pose additional risks that may not be sufficiently addressed by existing laws.
While the legal framework for consumer protection applies to many of the risks associated with fintech products, the extent to which consumers benefit from these protections is a function of the existing regulatory framework and its coverage of fintech activity. We discuss the regulatory framework for fintech products in greater detail later in this report. Fintech Payments Consumers face the risk of unauthorized transactions regardless of whether they use a traditional or fintech firm to make payments. CFPB officials we interviewed told us that some fintech products, such as mobile wallets, increase the number of firms involved in a transaction, which may increase the risk of unauthorized transactions. However, when consumers fund their mobile wallets by linking to traditional funding sources—debit or credit cards or bank accounts—consumer protection laws such as the Electronic Fund Transfer Act and the Truth in Lending Act generally apply. These acts and their implementing regulations provide that consumers can dispute charges to these accounts and liability for losses may be limited to $0 if disputes are made within specified time frames. Consumer protection laws, such as the Electronic Fund Transfer Act, which apply to traditional funding sources, do not yet cover payments funded by mobile wallet balances or mobile carrier billing. To address this gap in protections for mobile wallet funds, CFPB issued a final rule on prepaid accounts that will extend protections for error resolution and liability for unauthorized transfers to prepaid account and mobile wallet balances. This rule had previously been scheduled to become effective in April 2018, but in January 2018, CFPB delayed the effective date of the rule to April 1, 2019. However, fintech firms we interviewed told us that even when certain consumer protections are not required by statute or regulation, they voluntarily provide similar protections and disclose these protections in their terms of service. Agencies have also issued tips for consumers to safeguard their mobile devices and identify fraudulent payments. Similarly, wireless carriers have taken steps to mitigate fraudulent billing in response to enforcement actions, including offering services that prevent third parties from adding charges to consumer bills without consumers’ knowledge or permission— a practice known as “cramming.” However, FCC has found that fraudulent billing continues to be a problem. FCC’s July 2017 proposed cramming rule seeks to codify the agency’s existing prohibition against fraudulent billing through language explicitly prohibiting wireless carriers from placing third-party charges on consumers’ bills without consumer verification. In addition, FCC and FTC have issued tips for consumers and firms publicizing practices that help avoid cramming. Consumers also face the risk their funds could be lost due to the failure of their payment provider. Although consumers with funds in a bank account have protection from this risk through federal deposit insurance up to $250,000, consumers with funds in a mobile wallet may not be similarly protected. To address this risk, some fintech firms deposit consumers’ mobile wallet balances into an FDIC-insured bank or savings association, resulting in the funds being insured by FDIC up to the applicable deposit insurance limit in the event of the failure of the bank or savings association. 
Other fintech firms voluntarily disclose to consumers in their terms and conditions that any mobile wallet balances they hold are not FDIC insured. However, according to the Conference of State Bank Supervisors (CSBS), 49 states have laws that require fintech firms engaged in money transmission or stored value to self-insure through bonding, holding investments against funds held or transmitted, and meeting minimum net worth requirements. Further, consumers face the risk that their mobile wallet balances will not be accessible in a timely manner. Under the Expedited Funds Availability Act, banks are required to make customers’ deposited funds available to them within prescribed time frames. For example, banks are typically required to make funds a customer receives through an electronic transfer available by the next business day. However, as nonbanks, fintech firms are not subject to this act’s requirements and therefore do not have to make mobile wallet balances available under the same time frames. For example, one fintech firm we interviewed told us that most transfers from mobile wallets to bank accounts make funds available by the next business day, but certain circumstances, such as suspicious account activity, may cause the firm to delay transfers a few days. Another fintech firm we interviewed told us that transfer amounts are limited based on anti-money laundering requirements. However, fintech firms we spoke with voluntarily disclose the availability of funds and any limits on access in the terms and conditions provided to customers when they create their accounts. However, FTC recently settled with a fintech payment provider for delays in fund accessibility experienced by its users. In its complaint, FTC charged that the firm had failed to disclose that these funds could be frozen or removed based on the results of the firm’s review of the underlying transaction. As a result, consumers complained that at times, the firm delayed the withdrawal of funds or reversed the underlying transactions after initially notifying them that the funds were available. Fintech Lending Consumers face risks associated with unclear terms and conditions regardless of whether they borrow from a traditional or fintech lender. For example, consumers could have difficulty understanding their repayment obligations or how those terms compare to terms offered by other lenders. However, the Truth in Lending Act requires lenders to provide consumers with standardized, easy-to-understand information about the terms of the loan and enables consumers to make claims against lenders for violating Truth in Lending Act requirements. Consumers also face risk of discrimination and unfair credit practices regardless of whether they borrow from a traditional or fintech lender. However, these risks may not be fully understood with fintech lenders that use alternative underwriting standards and consumer data—such as information on rent payments and college attended. For example, fintech firms assessing applicants’ creditworthiness with criteria highly correlated with a protected class—such as race or marital status—may lead to a disproportionate negative effect. As with traditional lenders, federal fair lending laws, such as the Equal Credit Opportunity Act, apply to fintech lenders. In addition, some fintech lenders have taken steps that aim to address this risk. For example, one fintech lender said it monitors the effect any changes to their underwriting models may have on fair lending risk. 
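One way to carry out the kind of monitoring described above is to compare approval rates across demographic groups whenever an underwriting model changes, for example by computing an adverse impact ratio similar to the "four-fifths" screen sometimes used in disparate impact analyses. The sketch below is a simplified, hypothetical illustration of such a check and does not describe any particular lender's compliance program.

# Hypothetical fair lending screen: compare approval rates across groups and
# flag large disparities. Group labels and the 0.8 threshold are illustrative.

def approval_rate(decisions):
    """decisions: list of True (approved) / False (denied) outcomes."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def adverse_impact_ratios(decisions_by_group, reference_group):
    """Ratio of each group's approval rate to the reference group's rate."""
    ref_rate = approval_rate(decisions_by_group[reference_group])
    return {
        group: approval_rate(decisions) / ref_rate if ref_rate else float("nan")
        for group, decisions in decisions_by_group.items()
    }

if __name__ == "__main__":
    # Toy outcomes from a new underwriting model, split by group.
    outcomes = {
        "group_a": [True] * 80 + [False] * 20,   # 80 percent approved
        "group_b": [True] * 55 + [False] * 45,   # 55 percent approved
    }
    for group, ratio in adverse_impact_ratios(outcomes, reference_group="group_a").items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: ratio={ratio:.2f} ({flag})")

In practice, lenders and regulators supplement simple rate comparisons with statistical testing and with reviews of the specific model inputs that drive any disparity.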
Consumers face risk of harm due to inaccurate credit assessments, but these risks are also less understood with fintech lenders that use alternative data to underwrite loans. For example, inaccurate data or models used by a fintech lender could classify borrowers as higher credit risks than they actually are. This could result in those borrowers paying unnecessarily high interest rates and increasing their risk of default or could result in creditworthy borrowers being denied credit. Whereas the Fair Credit Reporting Act requires that borrowers have an opportunity to check and correct inaccuracies in credit reports, borrowers could face more challenges in checking and correcting alternative data that some fintech lenders use to make underwriting decisions because alternative data are not typically reflected in credit reports. However, the Equal Credit Opportunity Act requires lenders, including fintech lenders, that deny credit to applicants to disclose the specific reasons for denial. Alternatively, if the fintech lender's underwriting is too lax, loans could be made to borrowers who lack the ability to repay them. Borrowers who default under these circumstances then face limited access to and higher prices for credit in the future.

Fintech Wealth Management

Consumers face risks of receiving unsuitable investment advice regardless of whether they obtain advice from a traditional or robo-adviser. While a human adviser may be able to mitigate this risk by probing consumers for more information to assess needs, risk tolerance, or other important factors, a robo-adviser's ability to mitigate this risk may be based on a discrete set of questions to develop a customer profile. In addition, advisers could make inaccurate or inappropriate economic assumptions, perhaps due to a failure to factor in changing economic conditions, which could result in flawed investment recommendations. While human advisers may be able to mitigate this risk to some degree based on their ability to adjust to economic conditions, a robo-adviser's ability to mitigate this risk is based on whether its algorithm has been updated to reflect the most recent economic conditions. Because, as we discuss below, robo-advisers generally are required to comply with the same requirements as traditional investment advisers, customers of robo-advisers and traditional advisers receive the same protection from these risks. Consumers who use fintech services that provide an aggregated view of their accounts at other financial institutions could potentially be more exposed to losses due to fraud. If a consumer authorizes an account aggregator to access their financial accounts and grants the aggregator authority to make transfers, the consumer may be liable for fraudulent transfers made. CFPB is studying risks associated with entities that rely on access to consumer financial accounts and account-related information, and has issued a related request for information (we address this issue later in this report).

Distributed Ledger Technology

DLT can be used to issue and distribute digital assets known as tokens to consumers and investors. Virtual currencies—tokens that are digital representations of value that are not government-issued legal tender—could pose some unique risks to consumers. For example, the ability of virtual currency users to recover funds lost due to fraud or errors may be more limited than that of customers using traditional products like payment cards or bank transfers to make payments.
Whereas traditional transactions can be reversed to correct fraud or errors, many virtual currency transactions are designed to be irreversible. Also, unlike storing dollars in a bank account, if a consumer stores their virtual currency in a mobile wallet, their wallet provider may disclaim responsibility for replacing virtual currency that is stolen. Further, CFPB’s prepaid accounts rule, which will extend consumer protections to prepaid cards and mobile wallets with stored value, explicitly does not extend consumer protections to virtual currencies. However, firms that transmit, exchange, hold, or otherwise control virtual currency may be subject to state consumer protection law. In addition to fraud and errors, consumers who use virtual currencies may face other risks of loss. Federal deposit insurance does not apply to virtual currency balances. As a result, according to FDIC staff, consumers could face losses if they store their virtual currencies with a mobile wallet firm that goes out of business unless the firm offers private insurance. Further, if consumers store their virtual currency on their own and misplace or forget their account access information, they may lose access to their funds. Unlike bank accounts for which users can reset passwords or usernames, some wallets do not offer a way to reset such information. To help consumers address these risks, federal agencies and state regulators have issued documents publicizing practices that may help consumers use virtual currency more safely. Tokens—which may also function similarly to a security—could pose some unique risks to investors, and some investor protections may not be available. Token sales, sometimes known as initial coin offerings or ICOs, are being used by firms to raise capital from investors and may pose investor risks, including fraud and theft. For example, one firm allegedly promised investors it would invest its token sale earnings in real estate, but instead allegedly defrauded investors of their investments. Fraud and theft are risks of other securities offerings, and investors receive protections from these risks under the Securities Act of 1933 and the Securities Exchange Act of 1934 for token sales that meet SEC’s definition of a security. However, these protections do not apply to investors who participate in token sales that do not meet the definition of a security. In December 2017, SEC issued a cease-and-desist order to one firm for failure to register their token sale with SEC. In addition, SEC has reported that an investor’s ability to recover funds may be limited if key parties to token sales are located overseas or operating unlawfully. To help investors address these risks, SEC and FINRA have issued documents publicizing risks of token sale investment. Tokens traded on a platform may also be considered commodities and may pose investor risks including fraud and theft. Platforms that facilitate leveraged, margined, or financed trading of tokens may be subject to a requirement to register with the CFTC. To help investors understand tokens, CFTC has issued a report publicizing potential risks of virtual currencies and clarifying cases in which investors may be at risk because CFTC does not have oversight authority. For example, virtual currency and token exchanges that conduct certain spot or cash market transactions but do not use leverage, margin, or financing are not required to follow all of the rules that regulated exchanges are required to follow. 
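Several of the characteristics discussed above, including the difficulty of reversing recorded transactions, stem from the structure described earlier: a sequential chain of cryptographically secured blocks in which each block commits to the one before it. The minimal sketch below is illustrative only, using standard hashing and a single in-memory chain rather than any production DLT implementation; it shows why altering a recorded transaction invalidates every later block.

# Minimal illustration of a hash-chained ledger; not a production DLT system.
import hashlib
import json

def block_hash(body):
    """Hash a block's contents, which include the previous block's hash."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    """Append a block that commits to the prior block via its hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"index": len(chain), "transactions": transactions, "prev_hash": prev}
    chain.append({**body, "hash": block_hash(body)})

def is_valid(chain):
    """Recompute each hash and check the links; any edit breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

if __name__ == "__main__":
    chain = []
    add_block(chain, [{"from": "A", "to": "B", "amount": 5}])
    add_block(chain, [{"from": "B", "to": "C", "amount": 2}])
    print(is_valid(chain))                        # True
    chain[0]["transactions"][0]["amount"] = 500   # tamper with recorded history
    print(is_valid(chain))                        # False: block 0's hash no longer matches

In an actual distributed network, participants must also agree on which version of the chain is authoritative, which is the consensus step whose "51 percent attack" vulnerability is noted below.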
DLT applications may pose other unknown risks compared to the technologies and processes they replace, given that the technology is in the early stages of development. For example, CFTC and the Federal Reserve have identified cybersecurity and operational risks as potential risks of DLT. FDIC officials said that finality of a transaction under a DLT settlement may potentially raise legal challenges. Also, applications of DLT that depend on consensus for validating transactions are vulnerable to a “51 percent attack,” which could defraud consumers by revising their transactions or sending fraudulent payments. However, according to market observers, such an attack is unlikely and has not been carried out. Fintech Products Can Pose Other Risks to Consumers; Risks to the Broader Financial System Are Unclear Consumers face the risk of financial loss due to data breaches regardless of whether they use a traditional or fintech firm, and these breaches could undermine the financial system by eroding consumer trust in financial institutions. Similar to traditional products and services that collect sensitive consumer information and are connected to the Internet, fintech products and services may be vulnerable to cyberattack and can pose data security risks. In addition, one market observer we interviewed told us that hackers may target these new fintech firms before their security systems are mature. However, according to literature we reviewed and fintech firms and market observers we interviewed, some fintech firms have adopted technologies or practices designed to mitigate security risks. For example, new fintech firms can use the latest information technology systems to secure their products instead of having to update older systems. Additionally, as discussed above, some fintech firms use new techniques and leverage mobile device features to enhance data security, and one fintech firm said that it also uses technology that contacts clients if a data breach issue arises. Like traditional financial institutions, rules and guidelines implementing the Gramm-Leach-Bliley Act (GLBA) generally require fintech firms to secure customer information. In addition, some regulators have issued guidance to consumers publicizing practices that help avoid security problems when using fintech products. Regulators have also issued guidance to businesses including fintech firms that recommends that they adopt policies and procedures that address the prevention and detection of, and response to, cybersecurity threats. For example, the New York State Department of Financial Services requires regulated entities to meet cybersecurity requirements outlined in regulation. Some fintech firms may also pose privacy concerns because they may collect more consumer data than traditional firms. For example, fintech lenders that use alternative data in underwriting may have sensitive information about consumers’ educational background, mobile phone payments, or other data. One fintech firm we spoke with requires consumers to provide additional data, such as what a payment is for, in order to make peer-to-peer payments. Some data aggregators may hold consumer data without disclosing what rights consumers have to delete the data or prevent the data from being shared with other parties. A leak of these or other data held by fintech firms may expose characteristics that people view as sensitive. 
GLBA generally requires fintech firms and traditional financial institutions to safeguard nonpublic personal information about customers. According to literature we reviewed and fintech firms and market observers we interviewed, as with data security, some fintech firms use new technologies or mobile device features to mitigate data privacy risks. In addition, some regulators have issued guidance to consumers publicizing practices that help maintain privacy when using online products and services, including those provided by fintech firms. Regulators have also issued GLBA guidance to businesses including fintech firms recommending that they adopt policies and procedures to prevent, detect, and address privacy threats. Similar to traditional products and services, fintech products may be used to facilitate illicit activities, including money laundering, terrorist financing, and evading sanctions program requirements. For example, in 2015, the Financial Action Task Force (FATF) reported that new payment methods pose an emerging terrorist finance vulnerability because users can access these methods from anywhere in the world and it is difficult for enforcement agencies to identify the beneficiary. However, FATF found that the extent to which terrorist groups actually exploit these technologies is unclear and said that enforcement agencies should monitor these risks for developments. Further, FATF has stated that fintech innovations provide an opportunity to bring anti-money laundering efforts into the 21st century by reducing dependency on cash and informal systems and making it easier for authorities to detect and follow illicit financial flows. Relevant laws that prohibit financial crimes apply to fintech products. For example, the Bank Secrecy Act (which established reporting, recordkeeping, and other anti-money laundering requirements) and economic sanctions programs (which create economic penalties in support of U.S. policy priorities) apply to all financial firms that transmit money regardless of whether they use traditional or fintech products. Finally, market observers have questioned whether fintech activities could create risks to overall financial stability, but many have said such risks are relatively minimal due to fintech firms’ small market presence. While direct or indirect linkages between large financial institutions could lead financial problems at one firm to create similar problems for other firms that can undermine financial stability, studies by regulators in various countries and international organizations found that fintech firms have not generally reached a level of interconnectedness where their financial distress would threaten the stability of other financial system participants. For example, the Bank for International Settlements and the Financial Stability Board reported that in 2015 fintech accounted for 2 percent of new credit in the United States. Additionally, after assessing virtual currencies, the European Central Bank concluded in a November 2017 report that virtual currencies were not a threat to financial stability due to their limited connection with the real economy, their low volume traded, and the lack of wide user acceptance. However, the Financial Stability Board and other market observers have noted that fintech firms could potentially affect financial stability in both positive and negative ways as the activities and firms evolve. 
For example, fintech firms could help decentralize and diversify the financial services market, and they could diversify exposure to risk by increasing access to financial services for consumers and small businesses. On the other hand, providers could potentially also increase risks to financial stability. For example, robo-advisers could amplify swings in asset prices if their risk models rely on similar algorithms, making the portfolio allocation methods of robo-advisers more highly correlated than those of traditional advisers, although according to the Financial Stability Oversight Council, this risk could also arise if traditional advisers follow similar allocation strategies. Similarly, according to the Financial Stability Board, fintech lenders could potentially amplify swings in credit availability if the investors that fund many marketplace lending products are more willing to fund loans during market upturns or less willing to fund loans during market downturns. To help balance these potential benefits and risks, the Financial Stability Board recommended that international bodies and national authorities continue to monitor the issues and consider the effects of fintech in their risk assessments and regulatory frameworks. Fintech Firms’ Compliance with Applicable Laws Is Subject to Varied Federal Oversight The extent to which fintech firms are subject to federal oversight of their compliance with applicable consumer or other laws varied. Fintech firms that offer investment advice typically register with and are subject to examinations by federal securities regulators. Some fintech firms providing payments or loans that have partnered with federally regulated banks or credit unions may receive indirect oversight from federal financial regulators as part of their efforts to ensure that their regulated entities are adequately managing the risks of these arrangements. Nonpartnered fintech firms would not typically be subject to routine examinations by a federal financial regulator but would instead be subject to state regulatory oversight and enforcement. While fintech firms and financial institutions are subject to different degrees of routine federal oversight, we found that indications of fintech firms causing widespread harm were limited as they were subject to fewer complaints than large financial institutions. Fintech Firms Providing Investment Advice Are Subject to the Same Oversight as Traditional Financial Institutions Fintech robo-advisers offering wealth management advice would generally be subject to the same federal and state oversight as traditional investment advisers. Under the Investment Advisers Act of 1940 and state securities laws, any entity or individual that offers investment advice for compensation generally must register as an investment adviser—with SEC or states—and adhere to various reporting and conduct requirements. When providing advice, investment advisers—traditional or fintech—are considered fiduciaries to their clients, which means they owe a duty of care and loyalty to their clients, and they must disclose all actual or potential conflicts of interest, and act in their clients’ best interest. To review for compliance with this standard and other applicable requirements, staff from SEC and state securities regulators conduct examinations of registered investment advisers. 
Specifically, state regulators are responsible for conducting examinations of investment advisers that operate in fewer than 15 states and hold client assets under management of less than $100 million. However, according to staff from the North American Securities Administrators Association—a membership organization for state, provincial, and territorial securities administrators in the United States, Canada, and Mexico—no robo-adviser firms were solely regulated by the states as of October 2017.

Fintech Firms That Partner with Financial Institutions May Be Subject to Indirect Federal Financial Regulator Oversight

Some fintech firms may be subject to indirect federal oversight as part of relationships they have entered into with regulated financial institutions. If fintech firms partner with federally regulated financial institutions, such as a bank or credit union, federal financial regulators may conduct examinations of the regulated financial institution that could include some review of the extent to which the fintech firm may affect the partner financial institution's adherence to relevant regulations through the services provided to the financial institution. Regulators conduct these examinations in order to assess the risk to the regulated institution because the failure of the fintech firm to follow relevant laws and regulations could expose the bank or credit union to financial or other risks. As part of the indirect oversight of fintech firms, the financial institution would be expected by its regulators, under various third-party guidance issuances by these regulators, to ensure that any risks to the institution resulting from the relationship with the fintech firm are assessed and mitigated. Among other things, banks and credit unions should conduct due diligence on potential third-party partners, including having a process within the institution for managing the risks posed to their institution by the third party. For example, OCC third-party guidance states that banks should adopt risk management processes that are commensurate with the level of risk and complexity of the third-party relationship. These processes include establishing risk-mitigating controls, retaining appropriate documentation of the bank's efforts to obtain information on third parties, and ensuring that contracts meet the bank's compliance needs. Although fintech firms partnering with federally regulated institutions would be expected to follow the practices in this guidance, the extent to which they would be overseen by a federal financial regulator was limited. For example, FDIC and OCC staff told us that they had examined a fintech firm that provides financial account aggregation services to regulated institutions. This review focused on the fintech firm's data security rather than its activities with consumers. FDIC staff also said they conducted exploratory discussions with some fintech lenders, but these firms were not part of their technology service provider examination program. However, as of November 2017, FDIC and OCC staff noted that they had not completed examinations of fintech firms within our scope. NCUA staff noted that NCUA does not have authority to examine services provided to credit unions by third-party service providers. In order to examine any services provided to credit unions, NCUA must rely on credit unions voluntarily providing information on the third-party service provider.
However, NCUA staff noted that some of their examiners had accompanied state regulators in an examination that involved a credit union's partnership with a fintech payments firm.

Other Fintech Firms Are Not Routinely Overseen by Federal Financial Regulators, but Are Subject to State Oversight

Fintech firms not providing investment advice or partnered with federally regulated financial institutions would be subject to routine oversight by a federal regulator only under certain circumstances. For example, CFPB could examine some fintech firms as a result of its examination authorities. Specifically, it has supervisory authority over certain nondepository institutions, including mortgage lenders and servicers, payday and student loan providers, and "larger participants" in consumer financial product and service markets, which could include fintech providers. CFPB has conducted or plans to conduct examinations of fintech firms that meet the agency's definition of "larger participants" in sectors for which it has designated such participants. For example, according to CFPB staff, it has conducted a stand-alone examination of a fintech payments company that provides international remittances, and it has scheduled an examination of a fintech lender that provides student loans. As of October 2017, it had not defined other "larger participants" specifically for other markets in which fintech firms may be active, but it is considering a proposed rule to supervise larger participants in the personal loan markets, which might include larger fintech lenders. CFPB may also conduct examinations of individual companies that it determines pose risks to consumers, as identified in public orders. Furthermore, CFPB's supervisory authority also extends to third-party service providers of nondepository institutions overseen by the agency. Fintech firms may also be subject to examinations related to their compliance with anti-money laundering laws and related requirements. FinCEN, which is responsible for administering federal anti-money laundering laws, has authority to examine any fintech firms conducting money transmission, according to Treasury officials. These firms would be required to comply with the applicable anti-money laundering and counter-terrorist financing requirements, including registering with FinCEN, establishing anti-money laundering programs, and reporting suspicious activities to FinCEN. However, FinCEN delegates routine anti-money laundering examinations of federally chartered or registered financial institutions to the federal financial institution regulators. In other cases, firms subject to anti-money laundering requirements, including fintech payments or lending firms, could be examined by state regulators and the Internal Revenue Service. Fintech firms not subject to routine federal supervisory oversight would instead generally be subject to state oversight. As of October 2017, 49 states, as well as the District of Columbia, Guam, Puerto Rico, and the U.S. Virgin Islands, required entities that provide money transfer services—which may include some fintech payments firms—to obtain licenses to conduct such activities in their jurisdictions, according to documents from state regulator associations and CSBS staff. In addition, all states and the District of Columbia required lending licenses for consumer lenders operating in their states, according to CSBS staff. Furthermore, some states have created or provided guidance on licensing statutes in order to include virtual currencies.
For example, in 2015 New York finalized a new license for virtual currency businesses under New York’s financial services law. State regulators in these jurisdictions conduct examinations of the firms that hold licenses to assess their compliance with safety and soundness and various other requirements. In addition, CSBS staff stated that as of February 2018, approximately 37 states authorize state regulators to examine banks’ third-party service providers—which could include fintech companies. According to state regulators we interviewed in Illinois, New York, and California, their agencies use the same approach to regulate and examine fintech firms and traditional financial institutions providing similar services. Furthermore, according to state regulatory associations and some state regulatory agencies, fintech firms such as money transmitters undergo regular supervision through on-site examinations to monitor compliance with federal and state capital, liquidity, and consumer protection requirements. For example, Money Transmitters Regulators Association staff said that state regulators examine MSBs at least every 3 years depending on risk assessment and previous examination record, and that state examinations cover federal and state laws, including data security and anti-money laundering requirements. Similarly, staff from one state regulator noted that they conduct consumer protection examinations of direct lenders and take enforcement action if they identify potential violations. CSBS staff noted that state requirements do not differ for fintech firms because the requirements and examinations are activity- based. For example, most states have anti-money laundering requirements within their money transmitter license laws. Due to state anti-money laundering examination cycles, CSBS staff stated that MSBs licensed in 40 or more total states experience an examination at least once every 14 months. Fintech Firms Can Be Subject to Enforcement Actions by Federal and State Regulators Outside of examinations, fintech firms that violate federal and state regulations can be subject to enforcement actions by federal and state agencies with such authorities. The OCC, Federal Reserve, and FDIC may have enforcement jurisdiction over fintech firms when the fintech firm is an “institution affiliated party” under the Federal Deposit Insurance Act or a service provider under the Bank Service Company Act. In addition, CFPB can take enforcement action against institutions under its jurisdiction for noncompliance with federal consumer protection laws. For example, in 2016, CFPB used its unfair, deceptive, or abusive acts or practices authorities to investigate and issue a consent order against a fintech firm operating an online payment system, which CFPB determined had made deceptive data security claims to customers. FTC can also take enforcement actions against fintech firms not registered or chartered as a bank for violations of any federal consumer laws FTC enforces, including the FTC Act’s prohibition against unfair or deceptive acts or practices. For example, in 2015, FTC took action against the providers of a smartphone application, alleging that they deceived consumers and installed hidden malicious software code to generate virtual currencies for the providers without consumer permission. It can also bring enforcement action against non-bank service providers that maintain or process customer information under its GLBA authority. 
Other federal entities can pursue enforcement action against fintech firms. The Department of the Treasury’s Office of Foreign Assets Control can take action against fintech firms that violate U.S. sanctions regulations. In addition, FinCEN can also pursue enforcement measures against fintech firms that transmit funds—such as certain fintech payment and lending firms—due to its authority to enforce compliance with the Bank Secrecy Act’s anti-money laundering and prevention of terrorist financing provisions. For example, FinCEN took enforcement action in May 2015 against the fintech firm Ripple—a company that allows users to make peer-to-peer transfers in any currency using a DLT-enabled process—for violating anti-money laundering requirements through its sale of virtual currency. In 2016, CFTC brought an enforcement action against a Hong Kong-based fintech firm for offering illegal off-exchange financed retail commodity transactions in bitcoin and other cryptocurrencies, and for failing to register as a futures commission merchant. Finally, state regulators can also take enforcement action against financial institutions and fintech firms that violate state data security or consumer protection laws. In addition, state attorneys general may bring actions against fintech companies through consumer protection and deceptive trade practice acts, according to the National Association of Attorneys General. In Some Cases, Fintech Firms May Not Be Subject to Financial Regulator Oversight Some fintech companies may not be subject to any federal or state financial oversight if they do not meet federal or state definitions of a money service or other regulated business. For example, some fintech payments firms—such as certain mobile wallet providers—might not be subject to state or federal money service business requirements because their role in the payment process does not specifically involve transmitting money, according to state and federal regulators. One mobile wallet provider claimed that it is not subject to federal financial regulatory oversight because it does not transfer funds or authorize transactions, but instead facilitates the transfer of customer data as part of the credit card or debit card networks; it also does not retain any of its consumers’ personal data, including data on purchase content, location, or dollar amount. Indications of Fintech Activities Creating Widespread Consumer Harm Appear Limited Compared to Traditional Providers Available regulatory data show that the number of consumer complaints against fintech activities appears modest compared to traditional providers. For example, although our analysis of the CFPB’s consumer complaint database has limitations in assessing risk, the number of published complaints submitted against several prominent fintech firms from April 2012 through September 2017 included in this database was generally low, when compared to select large financial institutions. Our analysis showed that for 13 large firms offering fintech payments, lending, investment advice, financial account aggregation, or virtual currencies, only 5 of the firms had complaints in the CFPB database, with 4 having received fewer than 400 complaints. The largest number of published complaints had been submitted against a large fintech payment provider with over 3,500 published complaints. 
Further, the number of published complaints submitted against the fintech payment provider was relatively small compared to the number of published complaints submitted against other, often larger financial institutions. For example, our analysis showed that 10 large financial institutions each received between approximately 14,300 and 67,300 total complaints from April 2012 through September 2017. In addition, various federal regulators, including CFPB and FTC, can address the risk of consumer harm by taking actions against fintech firms for deceptive or unfair acts or practices when warranted. For example, in 2016, FTC reached a settlement with a firm that sold machinery designed to create virtual currencies—a process known as mining—and that allegedly had been deceiving its customers about the availability and profitability of the machinery. As noted earlier, FTC also settled with a fintech payment provider in February 2018 after thousands of consumers complained to the company about confusion over its funds availability practices. Additionally, in 2016 CFPB assessed a $100,000 civil penalty against a fintech payments firm for deceiving consumers about its data security practices and the safety of its online payment system.

The U.S. Regulatory Environment Poses Various Challenges to Fintech Firms

Fintech firms can find that the complexity of the U.S. financial regulatory system creates challenges in identifying the laws and regulations that apply to their activities, and that complying with state licensing and reporting requirements can be expensive and time-consuming for mobile payment providers and fintech lenders. Also, federal agencies could improve collaboration and clarify issues related to financial account aggregation by making sure that interagency efforts dedicated to fintech include all relevant participants and incorporate other leading practices. In addition, because banks are liable for risks posed by third parties, fintech firms may face delays in entering into partnerships with banks.

Challenges with Complexity of Financial Regulatory Structure

The complex U.S. financial regulatory structure can complicate fintech firms' ability to identify the laws with which they must comply and clarify the regulatory status of their activities. As noted in our past reports, regulatory oversight is fragmented across multiple regulators at the federal level, and also involves regulatory bodies in the 50 states and other U.S. jurisdictions. Fintech firms and other stakeholders we interviewed told us that it was difficult for fintech firms to navigate this structure. In particular, understanding the laws and regulations that may apply to fintech firms was not easy because existing regulations were sometimes developed before the type of product or service they are now offering existed. In addition, the cost of researching applicable laws and regulations can be particularly significant for fintech firms that begin as technology start-ups with small staffs and limited venture capital funding. Fintech payments and DLT firms and other market participants told us that navigating this regulatory complexity can result in some firms delaying the launch of innovative products and services—or not launching them in the United States—because the fintech firms are worried about regulatory interpretation. For example, staff from one U.S. firm that developed a DLT payments technology told us that they and their peers only work with foreign customers due to the fragmented U.S.
financial regulatory structure and lack of unified positions across agencies on related topics. However, several U.S. regulators have issued rules and guidance to help fintech firms understand where their products and services may fit within the complex financial regulatory structure, as shown in the following examples.

In December 2017, the Federal Reserve's Consumer Compliance Outlook newsletter included an article that offered financial institutions and fintech firms general guideposts for evaluating unfair and deceptive practices and fair lending risk related to fintech, with a focus on alternative data. Also, in 2016, a special edition of Consumer Compliance Outlook focused on fintech, including summarizing relevant federal laws, regulations, and guidance that may apply to mobile payments, fintech lending, and digital wealth management. For example, the newsletter listed laws and regulations related to credit, privacy, and data security; anti-money laundering requirements; and consumer and investor protection.

In 2016, CFPB issued a final rule that will extend wide-ranging protections to consumers holding prepaid accounts, including peer-to-peer payments and mobile wallets that can store funds. Also, in 2015, CFPB issued a set of nonbinding consumer protection principles for new faster payment systems, which outline CFPB expectations for payment services providers.

In February 2017, SEC issued updated guidance on robo-advisers that addresses the substance and presentation of disclosures provided to clients on the robo-adviser and the investment advisory services it offers, the obligation to obtain information from clients to ensure that recommended investments are suitable, and the need to implement effective compliance programs reasonably designed to address the unique nature of providing automated advice. Similarly, in March 2016, FINRA issued a report on effective practices related to digital investment advice and reminded FINRA-registered broker-dealers of their obligations under FINRA rules.

In 2013, FinCEN issued guidance that clarified the applicability of anti-money laundering and related regulations to participants in certain virtual currency systems, and in 2014 FinCEN issued administrative rulings that further clarified the types of market participants to which the 2013 guidance applies.

In October 2017, CFTC issued a report on virtual currencies that explains that it considers virtual currencies to be commodities, outlines related examples of permissible and prohibited activities, and cautions investors and users on the potential risks of virtual currencies.

In July 2017, SEC issued a report on DLT token sales, which cautions market participants that sales with certain characteristics may be subject to the requirements of federal securities laws. In general, the report uses one company's token sale as an example to illustrate how SEC could consider a token sale to be a securities offering, and why companies offering such products would have to register the offering with SEC or qualify for an exemption. In August 2017, FINRA also issued an investor alert on DLT token sales, which includes questions for investors to ask before participating in such sales.

In January 2017, FINRA issued a report on DLT uses more broadly, which outlines key regulatory considerations for firms that want to use DLT in equity, debt, and derivatives markets.
For example, the report outlines securities-related regulatory considerations for DLT applications that could alter securities clearing arrangements, be used for recordkeeping by broker-dealers, or change the equity or debt trading process, among other things. Challenges Complying with Numerous State Regulatory Requirements As mentioned previously, although federal oversight applies to some fintech firms, fintech payments and lending firms not subject to routine federal oversight must typically obtain state licenses based on their activities. Banks can choose to be chartered at the state level or as a national bank, which generally exempts them from state licensing requirements and examination. In contrast, fintech payment providers operating as MSBs—including those using DLT—and fintech firms offering consumer loans must typically hold licenses in each state in which they operate. Similarly, as mentioned above, small robo-advisers would generally have to be licensed in states in which they wish to operate. State regulators and other market observers we interviewed told us that they believe state regulation of fintech firms provides benefits. Several market participants and observers said that states understand the needs of their local economies, consumers, and market participants and can use their authorities to craft tailored policy and regulation. For example, New York regulators created a special license for virtual currency firms. New York regulators told us that they did so because of New York’s status as a financial and innovation hub, as well as activities and concerns of virtual currency firms operating within their jurisdiction. In addition, state regulators may complement the federal oversight structure by dedicating additional resources to helping educate fintech firms on regulatory requirements and making sure that firms follow these requirements. For example, two state regulators told us that they work closely with many fintech start-ups to help educate them on regulatory requirements before they apply for licenses or begin operations, and a state regulatory association told us that fintech firms and state regulators often meet to discuss regulatory concerns. Representatives of a state regulatory association told us that federal agencies also rely increasingly on state examinations to ensure compliance with anti-money laundering requirements. Similarly, an industry association and state regulators told us that they believe states are very responsive to consumer complaints. For example, one state regulator told us that they investigate hundreds of consumer complaints per month and believed they often resolved consumer complaints more quickly than their federal consumer protection counterparts, although CFPB staff told us that CFPB handles thousands of complaints per month. California regulators also told us they have initiated their own investigations into the extent to which fintech lenders comply with state lending and securities laws, and risks that fintech lenders may pose to consumers and to markets. However, complying with fragmented state licensing and reporting requirements can be expensive and time-consuming for mobile payment providers and fintech lenders. For example, stakeholders we interviewed said that obtaining all state licenses generally costs fintech payments firms and lenders $1 million to $30 million, including legal fees, state bonds, and direct regulatory costs. 
Also, market participants and observers told us that fintech firms may spend a lot of time on state examinations because state exam requirements vary and numerous states may examine a fintech firm in 1 year. For example, staff from a state regulatory association said that states may examine fintech firms subject to coordinated multistate exams 2 or 3 times per year, and as many as 30 different state regulators per year may examine firms that are subject to state-by-state exams. Although these challenges are not unique to fintech firms, they may be more significant for fintech firms than for other MSBs and lenders. For example, some MSBs and lenders operate in a limited geographic area that can require them to be licensed by one state only. Other firms operate in multiple states or nationwide, but may have started with a license in one state and then obtained additional licenses and spread these compliance costs as they grew over time. In contrast, fintech firms are generally online-only businesses that likely seek to operate nationwide from their inception, which immediately requires licenses in all states and generates higher up-front compliance costs that may strain limited venture capital funding. For example, one firm we interviewed that funds fintech start-ups told us that one of their fintech firms spent half of the venture capital funds it had raised obtaining state licenses. As a result, some firms may choose not to operate in the United States. For example, one DLT provider we interviewed told us that although they are based in the United States, they operate abroad exclusively because state licensing costs are prohibitively expensive. Bank partnerships and specialized operating charters offered by federal and state banking regulators may help fintech firms more easily operate nationwide by generally preempting state licensing requirements. For example, some fintech payments firms and fintech lenders have chosen to partner with nationally chartered and state-chartered banks, which allows them to operate nationwide without having to obtain individual state licenses. Also, two fintech lenders have applied for an Industrial Loan Corporation (ILC) charter, an FDIC-supervised state banking charter, which commercial firms other than regulated financial institutions can obtain in certain states to operate nationally. Such ILCs would also be overseen by FDIC if they obtain FDIC deposit insurance. In addition, in December 2016, OCC announced its intent to consider applications for special-purpose national bank charters from fintech firms such as lenders, which would allow such firms to operate nationally under a single national bank charter if finalized. However, OCC officials we interviewed told us that this special-purpose national bank charter is on hold because they are still reviewing whether to go forward with the proposal, and CSBS has filed a lawsuit against OCC challenging the fintech charter. Some fintech lending firms and an industry association representing payments firms have expressed interest in applying for this special charter, but other stakeholders we interviewed told us that the proposed fintech charter may not be a good option for small fintech firms if the capital requirements are the same as those for banks. In addition, state regulators are taking steps to make it easier for fintech firms seeking to operate across multiple states. 
For example, CSBS staff we interviewed told us that states leverage the Nationwide Multistate Licensing System, which enables firms to submit one application with information that fulfills most of the licensing requirements of each state that participates in this system. Staff from CSBS, some fintech firms, and an industry observer we interviewed said that although the multistate licensing system has reduced administrative requirements somewhat, firms still have to make additional filings to address certain requirements unique to some states. In February 2018, seven state regulators also agreed to standardize key elements of the MSB licensing process and mutually accept licensing findings. Additionally, in 2013, state regulators established the Multi-State MSB Examination Taskforce, which coordinates and facilitates multistate supervision of MSBs. CSBS staff told us that multistate exams have made the state MSB exam process more efficient for state regulators and MSBs. In May 2017, CSBS also announced that it would expand efforts to modernize state regulation of fintech firms. For example, under this initiative, officials we interviewed told us they plan to redesign their multistate licensing system to provide a more streamlined licensing process for new applicants and shift state resources to higher-risk cases by 2018; plan to harmonize multistate supervision by establishing model approaches to key aspects of nonbank supervision, making examinations more uniform, identifying and reporting violations at the national level, and creating a common technology platform for examinations by 2019; and have formed a fintech industry advisory panel—with sub-groups on payments, lending, and banking—to identify licensing and regulatory challenges. Challenges with Interagency Collaboration Although a few fintech market participants and observers we interviewed told us that they thought regulatory collaboration on fintech was sufficient, the majority of market participants and observers we interviewed who commented on interagency collaboration said that it could generally be improved. Some also cited additional areas in which better interagency collaboration could facilitate innovation: Use of alternative data and modeling in fintech lending. Fintech lenders may face challenges because agencies with authorities related to consumer protection and fair lending have not issued guidance on the use of alternative data and modeling. For example, one fintech lender we interviewed told us that they discussed with FDIC and FTC the use of alternative data to assess creditworthiness, but they do not understand what each agency might consider to be an unfair, deceptive, or abusive practice because the agencies have not coordinated positions. Staff we interviewed from two consulting firms that advise on fintech told us that lack of clarity or coordination on fair lending and use of alternative data and modeling creates uncertainty for fintech lenders. This has led some fintech lenders to forgo use of alternative data for underwriting purposes since they do not know if it will produce outcomes that violate fair lending laws and regulations. However, FDIC staff told us that FDIC applies the same standards as FTC in determining whether an act or practice is unfair or deceptive and that existing guidance on fair lending applies broadly to traditional and nontraditional modeling techniques and data sources. OCC special-purpose national bank charter.
A few market participants and observers we interviewed told us that fintech payment providers and lenders may face challenges because OCC has not sufficiently coordinated with the Federal Reserve and FDIC on OCC's special-purpose national bank charter. Despite OCC's discussions with the Federal Reserve, the charter proposal does not specify whether recipients could access the Federal Reserve payments system. Federal Reserve officials have said that the Federal Reserve will likely not take any policy positions or make any legal interpretations about the proposed charter until OCC finalizes the charter's terms and a firm applies for a charter. Officials have said that this is their position because the potential policy and legal interpretation issues that could arise related to membership and access to Federal Reserve services will require a case-by-case, fact-specific inquiry unique to any firm that moves forward with an application. One fintech lender we interviewed told us that obtaining consistent and complete information from OCC and the Federal Reserve on the specific rights this charter would grant a fintech lender had been challenging, and that this lack of consistency and clarity could discourage fintech firms from applying for the charter. However, OCC staff we interviewed told us that the charter is not yet final and that they facilitate communication between fintech firms that are interested in the special charter and the Federal Reserve. Also, OCC staff said that they briefed FDIC staff on the special charter, but will coordinate further if appropriate. Differing regulatory interpretation of consumer protection requirements. As discussed above, fintech firms may be subject to CFPB oversight and limited federal financial regulatory oversight if they also partner with financial institutions. In addition, FTC and CFPB can take enforcement actions against fintech firms not registered or chartered as a bank for violations of any federal consumer protection laws they enforce. Fintech firms we spoke with said that this can cause challenges because firms are concerned that regulators may have different interpretations of what conduct might merit consumer protection enforcement actions, and a research and consulting firm we interviewed that works with fintech start-ups told us that this is one of the industry's biggest challenges. Similarly, the potential for differing regulatory interpretation may limit the effectiveness of agency efforts to innovate. For example, fintech firms can apply for a CFPB No Action Letter, which is intended to reduce regulatory uncertainty for financial products or services that promise substantial consumer benefit but face uncertainty regarding consumer protection requirements. However, some entities we spoke with said that few firms have applied, in part because a letter provided by CFPB may not preclude prudential regulators or FTC from taking enforcement actions in cases where they have jurisdiction. Although stakeholders indicated that agencies could improve interagency collaboration on other fintech issues, federal agencies said that they already collaborate through a variety of informal and formal channels at the domestic and international levels. Domestically, in addition to informal discussions and participation in fintech events hosted by other agencies, some agencies have coordinated on examinations of third-party service providers and on enforcement actions.
For example, in 2014 and 2015, CFPB, FCC, FTC, and state regulators coordinated on enforcement actions related to unauthorized mobile carrier billing charges. Also, U.S. agencies have had informal discussions regarding fintech with their foreign counterparts. For example, Treasury staff have discussed regulations designed to counter money laundering and terrorist financing with officials from countries such as France and the United Kingdom. In addition, federal agencies have begun to collaborate on fintech regulatory issues through formal interagency working groups that are primarily concerned with other financial regulatory issues. For example, at the domestic level, U.S. prudential regulators have discussed issues related to potential risks of fintech lending and DLT through the Financial Stability Oversight Council. At the international level, the Federal Reserve represents the United States at the Bank for International Settlements, which has published papers on fintech topics including payments, fintech lending, and DLT. For more information on these efforts and others, see appendix II. Further, federal agencies said that they have recently organized the following interagency collaborative groups dedicated to fintech, as detailed in appendix II: In March 2017, the Federal Reserve convened the Interagency Fintech Discussion Forum, an informal group which meets approximately every 4 to 6 weeks and aims to facilitate information sharing among consumer compliance staff from the federal banking regulators on fintech consumer protection issues and supervisory outcomes. Discussion topics have included account aggregation, alternative data and modeling techniques, and third-party oversight. In 2016, Treasury created the Interagency Working Group on Marketplace Lending, which was active over the course of fiscal year 2016, meeting 3 times. This group shared information among industry participants and public interest groups, and discussed issues from a Treasury report on benefits and risks associated with online marketplace lending. In 2010, the Federal Reserve Banks of Atlanta and Boston created the Mobile Payments Industry Workgroup to facilitate discussions among industry stakeholders about how a successful mobile payments system could evolve in the United States. This group also functions as an interagency collaboration mechanism through biennial meetings between industry stakeholders and relevant regulators that update industry on regulatory concerns, identify potential regulatory gaps, and educate regulators on mobile payment technologies. However, we found that these groups do not include all relevant participants. For example, NCUA was not included in the Interagency Fintech Discussion Forum or the Interagency Working Group on Marketplace Lending, and FCC has not participated in the biennial regulator meetings of the Mobile Payments Industry Workgroup since 2012. Federal Reserve staff said that they did not include NCUA in the Interagency Fintech Discussion Forum because NCUA is not a bank regulator. Treasury staff noted that staff who could explain why NCUA had not been invited to participate in the Interagency Working Group on Marketplace Lending were no longer with the agency. Similarly, FCC staff could not recall why they had not participated in recent biennial regulator meetings of the Mobile Payments Industry Workgroup. 
However, NCUA has experiences and perspectives that would make it a relevant participant in the Interagency Fintech Discussion Forum, and NCUA officials said that they would participate in these interagency efforts if invited. NCUA would be a relevant participant because, although it does not oversee banks, it oversees credit unions that have entered into partnerships with fintech lenders and virtual currency exchanges, and could enter into partnerships with other fintech firms. Similar to fintech partnerships with banks, these partnerships could create risks related to safety and soundness and consumer protection. Further, NCUA's 2018–2022 draft strategic plan includes fintech as a key risk to the credit union system because fintech could provide a competitive challenge to credit unions or take advantage of differences in how credit unions and fintech firms are regulated, among other things. Likewise, as Federal Reserve staff have acknowledged, FCC could be a relevant participant in biennial regulator meetings of the Mobile Payments Industry Workgroup because FCC could share valuable insight on regulatory concerns related to mobile device security with other regulators and industry participants. Specifically, FCC has facilitated and encouraged industry efforts to improve security of mobile devices, on which consumers make fintech payments, and has conducted related consumer education efforts. FCC staff said they would consider participating in future biennial regulator meetings of the Mobile Payments Industry Workgroup if the topics discussed aligned with FCC's work on mobile device security. Our past work has identified key practices relating to collaborative mechanisms among agencies that increase their effectiveness, such as including participants with the appropriate knowledge, skills, and abilities. In addition, these key practices state that an interagency group should continue to reach out to potential participants who may have a shared interest in order to ensure that opportunities for achieving outcomes are not missed. However, we found that interagency collaborative efforts dedicated to fintech issues were not fully leveraging relevant agency expertise. Lack of NCUA participation in the Interagency Fintech Discussion Forum may preclude NCUA and the other participating agencies from sharing information that could be useful in efforts to oversee the risks that fintech poses to their regulated institutions. Similarly, lack of FCC participation in the biennial regulator meetings of the Mobile Payments Industry Workgroup could preclude industry participants from receiving updates on FCC regulatory concerns related to mobile device security and could preclude FCC from learning about new risks that fintech payments products pose to mobile device security. Furthermore, OCC and international bodies have identified fintech as an area where collaboration among agencies can be helpful. For example, OCC has stated that collaboration among supervisors can promote a common understanding and consistent application of laws, regulations, and guidance through steps such as establishing regular channels of communication. At the international level, the Bank for International Settlements has recommended that bank supervisors in jurisdictions where responsibilities related to fintech are fragmented among a number of regulators with overlapping authorities should collaborate with other relevant agencies to develop standards and regulatory oversight for fintech, as appropriate.
Similarly, the Financial Stability Board has suggested that responsible agencies further open lines of communication to address cross-cutting fintech issues. Industry Disagreements on Aggregation of Consumer Financial Account Information Create the Need for Stronger Collaboration Among other consumer protection issues related to financial account aggregation, market participants do not agree about whether consumers using account aggregators will be reimbursed if they experience fraudulent losses in their financial accounts. While some account aggregators negotiate contracts with the financial institutions that hold the consumer accounts that are being aggregated, other account aggregators have no relationship with the financial institutions holding the consumer accounts that they access on behalf of those consumers. Officials from at least one large bank have made public statements that they may not reimburse losses from consumer accounts if the consumer provided his or her account credentials to an account aggregator and fraudulent activity subsequently occurred in the consumer's account. In contrast, some account aggregators and consumer protection groups have argued that consumer protection law establishes that banks retain the obligation to reimburse losses due to transactions not authorized by consumers. To date, CFPB and the Federal Reserve have taken varying public positions on this disagreement among market participants, and some regulators told us that they have held related discussions with market participants and observers. In October 2017, CFPB issued principles for consumer-authorized financial data sharing and aggregation that stated that consumers should have reasonable and practical means to dispute and resolve instances of unauthorized transactions. However, CFPB's principles are not binding and federal financial regulators have not issued guidance or rules to clarify this issue. As previously mentioned, CFPB also issued a request for information in November 2016 seeking input on these topics from various industry members, observers, and consumers. A member of the Board of Governors of the Federal Reserve System has publicly stated that industry stakeholders will need to come to agreement on which party bears responsibility for unauthorized transactions. Also, Federal Reserve staff told us that some financial institutions and account aggregators are negotiating contractual arrangements that could address this issue on a case-by-case basis. In addition, staff from FDIC, the Federal Reserve, and OCC said that they have discussed related issues with market participants and observers. The financial regulators have recently begun to hold collaborative information-sharing discussions on consumer compliance issues surrounding financial account aggregation, but this collaboration has not resulted in any coordinated public outcomes on the issues. In May 2017, the federal financial regulators—CFPB, the Federal Reserve, FDIC, NCUA, and OCC—and representatives of state financial regulators began to share information on account aggregation and related consumer compliance issues through the Federal Financial Institutions Examination Council (FFIEC) Task Force on Supervision and the FFIEC Task Force on Consumer Compliance. The regulators are collaborating through FFIEC because they acknowledge that account aggregation issues cross agency jurisdictions.
According to participating agency officials, FFIEC discussions have covered responsibilities for consumer reimbursement due to fraudulent charges and access to consumer data, generated an internal paper on consumer compliance issues, and previewed CFPB's principles for consumer-authorized financial data sharing and aggregation prior to publication. However, as of November 2017, these efforts have not generated public outcomes to guide market participants. The federal financial regulators' missions include ensuring that consumers are protected. CFPB's primary mission is to protect consumers in the financial marketplace, including ensuring that markets for consumer financial products and services operate transparently and efficiently to facilitate access and innovation. Similarly, according to their mission and vision statements, the banking and credit union regulators help protect consumer rights by supervising financial institutions to help ensure compliance with consumer protections. However, some of the regulators told us that they have not taken more steps to resolve the disagreements surrounding financial account aggregation because they are concerned about acting too quickly. For example, Federal Reserve staff we interviewed told us that premature regulatory action could be detrimental to the negotiations between individual financial institutions and financial account aggregators. Similarly, OCC staff we interviewed told us that they do not recommend publishing guidance or rules while the account aggregation industry is evolving because regulation should not constantly change. Nonetheless, the financial regulators could take additional steps to address these issues without prematurely issuing rules or regulations. Further, the FFIEC IT Examination Handbook on e-Banking's appendix on aggregation services, which the financial regulators use in their examinations of banks, indicates that the financial regulators have been aware since at least 2003 that regulatory requirements related to consumer protection responsibilities of financial account aggregators are not clear. Incorporating leading practices on collaboration could strengthen the efforts that regulators are making to address financial account aggregation issues. As discussed previously, our prior work has developed interagency collaboration principles that make efforts among agencies more likely to be effective. These principles find that collaborative efforts should define the short-term and long-term outcomes that the collaboration is seeking to achieve and clarify the roles and responsibilities of the participating agencies, among other things. Although banking regulators and CFPB have discussed issues related to account aggregation within FFIEC, these discussions have not yet defined intended outcomes, produced any public outputs to help guide fintech firms and traditional financial institutions toward market-based solutions, or defined agency roles and responsibilities. In addition, market participants, CSBS staff, and a member of the Board of Governors of the Federal Reserve System have said that additional collaboration on financial account aggregation issues—including reimbursement for unauthorized transactions—would be beneficial. Similarly, in its 2017 annual report, the Financial Stability Oversight Council encouraged financial regulators to monitor how fintech products affect consumers and regulated entities and to coordinate regulatory approaches, as appropriate.
Acting collaboratively to help address consumer compliance issues related to financial account aggregation could help financial regulators better meet their consumer protection missions. Improved collaboration could help regulators and market participants resolve disagreements over account aggregation and related consumer compliance issues more quickly and in a manner that balances the competing interests involved. Taking steps now, while the discussion on financial account aggregation is in its relatively early stages, could help federal regulators better address these needs over the long term. Until regulators coordinate and assist the industry in clarifying and balancing the valid interests on both sides, consumers could have to choose between facing potential losses or not using what they may find to be an otherwise valuable financial service, and fintech firms providing useful services to consumers will face barriers to providing their offerings more broadly. Challenges Involving Fintech Partnerships with Banks Partnerships between fintech firms and financial institutions are increasingly common because such partnerships offer benefits to both parties involved. According to literature we reviewed and market participants and observers we interviewed, the benefits to banks can include the ability to meet consumer demand by providing their customers with access to innovative products that provide good user experiences without having to dedicate extensive internal time or resources. Market observers and Federal Reserve staff we interviewed told us that this benefit may be particularly important for small banks and credit unions, which have fewer staff and fewer financial resources for research and development. Similarly, the benefits to fintech firms can include access to banking services and networks, customer acquisition, and assistance with regulatory compliance. Some fintech firms enter contractual agreements to partner with banks through white-labeling, a type of partnership where the bank markets the fintech firm’s product as its own when soliciting customers. Other fintech firms enter contractual partnerships with banks as stand-alone third-party relationships. For example, some fintech lenders make loans to customers and partner with a bank that originates or purchases loans sourced through the fintech lender. However, because banks are liable for risks posed by third parties as discussed above, fintech firms may face delays in entering into partnerships with banks. Financial regulators have issued guidance on risk management for financial institutions’ relationships with third parties. Among other things, this guidance explains that financial institutions are expected to conduct proper due diligence in selecting partners and to monitor the activities conducted by third parties for compliance with relevant laws, rules, and regulations, considering areas such as consumer protection, anti-money laundering/counter-terrorist financing, and security and privacy requirements. Banks, fintech firms, and market observers we interviewed told us that banks may interpret this guidance conservatively. Large banks may also spend significant time conducting due diligence on the practices and controls in place at the fintech firms seeking to partner with them in order to prevent unnecessary compliance or operational risks, while a banking association told us that small banks with fewer resources to dedicate to due diligence may be unwilling to risk partnering with fintech firms. 
Banks, fintech firms, and market observers we interviewed told us that bank due diligence can also lead to lengthy delays in establishing partnerships, which can put fintech firms at risk of going out of business if they do not have sufficient funding and are not able to access new customers through a bank partner. For example, officials we interviewed from one bank told us that it takes about 18 months to launch a partnership with a fintech firm, and acknowledged that this is too slow to align with venture capital funding cycles that many fintech providers rely upon. Consideration of Regulatory Approaches Abroad Could Benefit Fintech Regulation and Innovation Regulators abroad have addressed the emergence of financial innovation through various means, including establishing innovation offices; establishing mechanisms for allowing fintech firms to conduct trial operations; holding innovation competitions; providing funding for firms through business accelerators; and using various methods to coordinate with other regulators domestically and internationally. While certain U.S. regulators have adopted similar efforts, further adoption of these approaches by U.S. regulators could facilitate interactions between regulators and fintech firms and improve regulators’ knowledge of fintech products. However, some initiatives may not be appropriate for the U.S. regulatory structure. For example, adopting certain initiatives could raise concerns about U.S. agencies picking winners, in which firms that participate in these programs may be better positioned to succeed than other firms. Further, particular initiatives may not align with agencies’ legal authorities or missions. Regulators in the U.S. and Abroad Have Developed Approaches to Improve Interaction with Firms and Help Them Identify Applicable Regulatory Requirements Citing the complexity of the U.S. financial regulatory system, fintech firms and industry observers noted having difficulty identifying which regulations they were subject to or which regulators would oversee their activities. Further, one fintech firm noted that when they were able to identify their regulators, they had difficulty finding a point of contact at the regulators. Officials from three regulators that we interviewed also noted that they had been contacted by fintech firms that were confused about their regulatory status and did not fall under the agency’s regulatory authority, but were subject to oversight by other regulators. Regulators in the U.S. and abroad have taken steps to better facilitate interactions with fintech firms, including by establishing innovation offices with dedicated staff to serve as a front door for start-up firms or innovators to find information on regulation and to contact the agency. These innovation offices generally maintain a webpage hosted on the agencies’ websites, a dedicated e-mail address, or dedicated staff. Through these innovation offices, some agencies offer services including office hours during which regulatory staff are available to meet and provide informal guidance. For example, CFPB officials said that, as of August 2017, they had met with approximately 115 companies in four such events in New York and San Francisco, under the agency’s Project Catalyst. Similarly, OCC officials noted that through their Office of Innovation, they have been able to answer regulatory questions for fintech firms and connect firms to relevant OCC offices. 
Since the launch of LabCFTC, CFTC's innovation office, in May 2017, CFTC officials have met with more than 100 entities through office hour sessions in New York, Chicago, and Washington, D.C. In addition to office hours, several regulators have held fintech events through their innovation offices. For example, FTC has held three fintech forum events comprising panel discussions with industry experts, covering topics such as marketplace lending and distributed ledger technology. Several regulators have also issued publications on various fintech topics, which are posted to the dedicated webpages for those agencies with innovation offices. Some regulators from other jurisdictions have also facilitated regular interaction with firms through their innovation offices. For example, through its Innovation Hub, the United Kingdom's (UK) Financial Conduct Authority offers informal regulatory guidance to individual firms directly and through posted publications; operates its regulatory sandbox, described below; and engages with industry participants through various events. Similarly, through a program called Looking Glass, the Monetary Authority of Singapore offers fintech firms training and consultation on regulation and provides a space for fintech firms to give product demonstrations to regulators and banks. Regulators and fintech firms we interviewed abroad said that these innovation offices have helped firms better understand their regulatory obligations and have helped regulators identify and address risks early. For example, representatives of a robo-adviser firm we interviewed in Hong Kong said that their interactions with the Hong Kong Securities and Futures Commission's innovation office—known as the Fintech Contact Point—made identifying and obtaining guidance from the appropriate regulatory officials easier, which helped the firm more efficiently develop a product compliant with applicable regulations. Some fintech firms and industry observers stated that U.S. regulators' innovation offices have helped fintech firms by offering a point of contact for new entrants in the industry. Additionally, in a 2009 report, we created a framework that identified characteristics of an effective financial regulatory system. One of the characteristics was that regulators should oversee new products as they come onto the market so that they can take action as needed to protect consumers and investors, without unnecessarily hindering innovation. Figure 5 summarizes efforts that we reviewed by regulators in the U.S. and abroad to implement initiatives to improve interactions with fintech firms. However, FDIC and NCUA have not established innovation offices for various reasons. For example, FDIC staff said that, although the agency has not formally evaluated establishing an innovation office, they have met with fintech firms to discuss deposit insurance applications. As part of the deposit insurance application process, the agency has established central points of contact for all interested parties, not only fintech firms. NCUA said that its lack of legal authority over third-party service providers limited the usefulness of an innovation office, since fintech providers are often third-party service providers. However, by not dedicating specific staff, as occurs with the establishment of an innovation office, these regulators could be less able to interact with fintech firms in their sectors and with fintech firms that partner with their regulated entities.
Other regulators who, similar to FDIC and NCUA, generally do not directly oversee third-party providers, though they may have such authority, have noted benefits from establishing innovation offices. For example, OCC, which has a similar mission to these two regulators, has formed such an office and OCC staff said that the agency has benefited by learning about industry trends involving fintech and by improving interactions with fintech firms and banks. Similarly, Federal Reserve officials we interviewed said that efforts through its innovation office have helped staff better understand fintech issues and have particularly helped its examiners better understand banks that partner with fintech companies. Consideration of establishing innovation offices, as many U.S. regulators have recently done, could help FDIC and NCUA better enable new firms to become familiar with regulatory requirements and could better facilitate interaction between the agencies and fintech service providers. Regulators Abroad Use Various Approaches to Learn about and Enable Development of New Fintech Products, and U.S. Regulators Could Consider Taking Similar Steps Internationally, some regulators have taken various approaches that help educate their staff on emerging products and help innovators develop products in limited-risk environments (see fig. 6). Based on interviews with regulators and firms abroad and a literature review, initiatives that we studied include regulatory sandboxes, proofs of concept, innovation competitions or awards, and agency-led accelerators. Regulatory sandboxes that we studied were agency-led programs that allow firms to test innovative products, services, business models, or delivery mechanisms in a live environment, subject to agreed-upon testing parameters. The proofs of concept that we reviewed were similar to sandboxes, but for these programs regulators issued a request for proposals to industry to develop a product that is conceptual; that is, an idea for a product that is not yet on the market. In the fintech competitions that we studied, regulators invited firms to develop solutions to problem statements drafted by agencies or financial institutions. Accelerators that we reviewed provided fintech firms and start-ups with funding, access to regulators and mentors, connections to outside funding sources, potential clients, and working space. One approach regulators abroad were using to learn about fintech activities was regulatory sandboxes. While a few U.S. regulators have undertaken efforts that are similar to regulatory sandboxes, most have not. Two regulators that we interviewed stated that tools already exist, such as the comment process, to fulfill the role of a sandbox by helping them better understand innovation and assist in the development of rules and guidance. However, other U.S. regulators said that creating regulatory sandboxes by using tools such as No Action Letters could benefit regulators and firms. Based on our analysis of selected jurisdictions' efforts, regulatory sandbox programs generally may include the following elements: firms apply to participate; firms and regulators agree on the parameters of how products or services will be tested, such as the number of consumers or transactions included in the test, the required product disclosures, or the time frame of the test; firms secure the appropriate licenses, if applicable; and firms and regulators interact regularly. In some cases, the sandbox may include limited regulatory relief.
For example, UK regulators we interviewed noted that they can waive or modify a rule, issue a "no enforcement action" letter, or provide a restricted license for a firm participating in the sandbox. However, these tools are used on a case-by-case basis for the duration of the sandbox test, are not used for every participating firm, and would not limit any consumer protections. Further, UK regulators we interviewed said that while waiving or modifying rules is possible, these tools are used only on an exceptional basis. Similarly, Singapore regulators said that they can relax specific legal and regulatory requirements, such as capital requirements, on a case-by-case basis for firms while they are participating in the sandbox. Also, Hong Kong regulators allow firms to operate without full regulatory compliance for the limited product offerings within the sandbox. Similar to UK and Singapore regulators, Hong Kong regulators we interviewed said that they have put safeguards in place to protect consumers and manage the risks of the regulatory relief. For a more detailed description of the Hong Kong, Singapore, and UK sandboxes, see appendix III. Regulators and market participants we interviewed abroad said that these fintech sandboxes have helped regulators better understand products and more effectively determine appropriate regulatory approaches while limiting the risk that the failure of a fintech firm could pose to consumers. Some participating firms we interviewed told us they benefited by being able to test products with customers, make changes to their business model, and understand how their products would be regulated. Moreover, two participating firms and a regulator we interviewed said that firms are able to introduce their products to the market more quickly because they are able to test their products in the market while becoming compliant with laws and regulations. One fintech firm that participated in the UK sandbox pointed out that the UK regulators better understood their firm's technology and business model because of interactions in the sandbox. For example, although the company and regulatory officials had previously disagreed on whether the firm's product needed to be regulated, after gaining a better understanding of the company's business model through interactions in the sandbox, the regulatory officials agreed that the product did not require regulatory oversight. Similarly, Singapore regulators we interviewed noted that their sandbox provides them a hands-on approach to learning about new technologies and how the technologies align with regulatory requirements. Some U.S. regulators have programs that share some characteristics with sandboxes. As shown in figure 6, CFPB, SEC, and CFTC have issued No Action Letters in which agency staff state that they do not intend to recommend certain regulatory action against the firms if they offer the products in the way described in a request letter to the regulator. The issuance of such letters could assist fintech firms in cases in which the applicability of existing regulations to their product is unclear. However, CFPB officials stated that, similar to sandboxes abroad, No Action Letters do not provide safe harbor for companies taking actions that are clearly not allowed under U.S. consumer regulations. As of March 6, 2018, CFPB had issued one No Action Letter to Upstart Network, a company that uses alternative data to assess creditworthiness and underwrite loans.
As a condition of the No Action Letter, Upstart will regularly report lending and compliance information to CFPB to mitigate risk to consumers and inform CFPB about the impact of alternative data on lending decisions. In addition, CFPB officials we interviewed said that they can use a similar tool known as trial disclosure waivers, which allow industry participants to seek CFPB approval to test an innovative disclosure or way of delivering a disclosure to consumers that includes a safe harbor provision during which the industry participant may be exempted from statutory or regulatory requirements. As of March 6, 2018, CFPB had not issued any trial disclosure waivers. Through its Project Catalyst, CFPB has also established a research pilot program where it collaborates with firms that are testing innovative products to understand consumer use and policy implications of innovative products. CFPB officials said that research pilots have similar elements to sandboxes, including participant application, agreement of testing parameters, and regular meetings between CFPB and the participating firm. Four firms have concluded research pilots with CFPB and three other firms are currently participating in pilots. Similarly, OCC officials said that they are considering developing a pilot program, which will allow banks or fintech firms partnering with banks to test innovative products with the involvement and interaction of OCC staff. OCC officials said that they have not set a date for determining whether to go forward or implement the program. Proofs of Concept Another approach regulators abroad were using to learn about fintech activities was establishing proofs of concept. The proofs of concept that we studied are similar to sandboxes in that the regulator has regular interaction with the company to better understand the product or technology, but the product is not introduced into the market during the proof of concept period. For example, the Bank of England, through its Accelerator program, uses proofs of concept to have firms develop technology that can help the agency improve its operations, according to agency officials. The Hong Kong Monetary Authority, which, among other things, regulates banks in its jurisdiction, uses proofs of concept to allow industry participants to develop products that are conceptual and not ready for market implementation. A firm we interviewed that participated in a proof of concept with Hong Kong Monetary Authority said that it offered the regulator the opportunity to gain a working understanding of the technology, while providing a test environment for the company to tailor the technology to adhere to regulatory requirements. CFTC officials noted that they are exploring the ability to conduct proofs of concept through LabCFTC. CFTC officials noted that the agency would be well positioned to conduct proofs of concept because they already collect large amounts of market data that could potentially be leveraged for such projects. However, CFTC officials expressed concerns that receiving services as part of proofs of concept may violate gift or procurement laws. The Federal Reserve Bank of Boston participates in a collaborative effort called Hyperledger, which serves a similar purpose as a proof of concept for the Federal Reserve Bank. Hyperledger is a collaborative effort involving public and private entities created to advance the use of blockchain technologies across various sectors. 
As observers in the Hyperledger effort, Federal Reserve Bank staff have gained hands-on experience with blockchain technology by experimenting with uses of the technology. None of the other regulators with whom we spoke said that they planned to conduct proofs of concept. Innovation Competitions or Awards Another approach used by regulators abroad for learning about fintech activities was establishing fintech competitions or awards to encourage financial innovation. Winning firms receive recognition, contracts, or cash prizes. For example, the Monetary Authority of Singapore operated an international competition called Hackcelerator to crowdsource innovative solutions to problems that Singaporean financial institutions identified in areas including insurance, customer identification, and data analytics, according to officials. Singapore regulators have also established FinTech Awards, which provide ex-post recognition to fintech solutions that have been implemented. CFTC officials said that they are seeking public input to establish prize competitions and intend to launch such competitions in 2018. FTC officials said that in 2017, the agency challenged participants to create a technical solution, or tools, that consumers could use to guard against security vulnerabilities in software found on Internet of Things devices in their homes. FINRA staff noted that the agency holds internal innovation competitions, called CREATEathons, in which FINRA staff compete to develop solutions to various problems identified internally by staff. While external parties do not participate in these competitions, teams can consult with firms. Some U.S. regulators pointed out that while some regulators abroad are mandated to promote competition, no such mandate exists among most U.S. financial regulators. Agency-led Incubator or Accelerator Two governments we studied abroad were also learning about fintech by establishing incubators or accelerators to encourage the development of their countries' fintech industries and talent pools. The accelerators provide funding, access to regulators and mentors, connections to outside funding sources, potential clients, and working space to fintech firms and start-ups. For example, officials we interviewed from SG Innovate, Singapore's government-led accelerator, said that the agency helps Singaporean businesses expand overseas, brings companies to Singapore, and connects start-ups to regulators and funding, among other things. None of the U.S. regulators we interviewed said that they planned to establish such accelerator programs. Regulators from the U.S. and abroad pointed out that the U.S. fintech industry is more developed than those of other jurisdictions, with many fintech firms, large talent pools, and significant amounts of private funding or privately run accelerators. Regulators and market participants we interviewed abroad said that these knowledge-building initiatives have helped regulators learn about new products and business models and have allowed firms to test products. Although CFTC and SEC can issue No Action Letters, those agencies have not adopted approaches similar to the other knowledge-building initiatives described above. Further, FDIC, the Federal Reserve, and NCUA have not adopted any of these approaches. U.S. regulators said that these initiatives could raise concerns about favoring certain competitors over others and also noted that they may not have the authority to initiate these programs.
However, despite similar potential constraints with regard to competition and authority limitations, CFPB and OCC have formally evaluated undertaking relevant knowledge-building initiatives, through conversations with regulators abroad, general research, and documentation of their efforts; and they have begun developing similar approaches, according to agency officials. A characteristic of an effective financial regulatory system we identified in our 2009 framework was that a regulatory system should be flexible and forward-looking, which would allow regulators to readily adapt to market innovations and changes. Consideration by U.S. regulators of adopting approaches taken by regulators abroad, where appropriate, could result in the implementation of initiatives that help improve their overall ability to oversee fintech and how it affects the entities they currently regulate. While constraints may limit the ability or willingness of regulators to fully adopt these practices, opportunities exist to assess ways to tailor them to the U.S. context. Regulators in the U.S. and Abroad Have Adopted Approaches to Facilitate Coordination on Financial Innovation Regulatory coordination is less of an issue for regulators abroad because most jurisdictions have fewer financial regulators. For example, the UK has 3 agencies involved in financial regulation, Singapore has 1 financial regulator, and Hong Kong has 4 financial regulators, compared to the 10 federal agencies involved in the regulation of fintech in some capacity in the United States. However, regulators abroad have undertaken efforts to bolster coordination among domestic regulators, as applicable, as well as with regulators abroad and industry representatives (see fig. 7). These collaborative efforts include advisory councils and steering committees dedicated to fintech issues, as well as fintech-specific cooperation agreements. In the jurisdictions we examined, two agencies have established fintech advisory councils or steering committees of industry participants and government officials. Fintech advisory councils and steering committees may provide a valuable connection to industry, through which U.S. regulators could gain insight into industry developments. For example, the Hong Kong securities regulator has established an advisory council composed of members with knowledge and experience of various parts of Hong Kong's fintech industry. Officials of this agency told us that the advisory council provides valuable market data, a forum that offers firms a preliminary check on the interpretation of its rules, and updates on emerging issues. Advisory council members said that the council gives this regulator a cross-functional perspective from industry experts and enables the agency to learn about emerging issues and related regulatory challenges early in their development. Selected U.S. regulators have established formal advisory committees dedicated to fintech issues, as shown in figure 7. FINRA has established a Fintech Industry Committee through which FINRA member and nonmember firms are provided a platform for ongoing dialogue and analysis of fintech developments related to FINRA's purview. FINRA officials said that the agency has also established the FinTech Advisory Group, a forum to identify and prioritize fintech topics and coordinate appropriate regulatory approaches with key stakeholders. CFTC staff noted that the agency restarted its Technology Advisory Committee in late 2017 to explore a range of fintech topics and augment the work of LabCFTC.
FDIC officials noted that the agency has a Fintech Steering Committee, which aims to help FDIC understand fintech developments by identifying, discussing, and monitoring fintech trends through reports from the staff working groups that the steering committee has established. The Fintech Steering Committee had not made any formal recommendations as of March 13, 2018. As previously mentioned, U.S. regulators we interviewed said that they have coordinated with other regulators and industry through various mechanisms, as the following examples illustrate. (For additional information on interagency collaborative efforts, see app. II.) The Federal Reserve has coordinated with relevant industry participants and other regulators including CFPB, FDIC, FTC, NCUA, OCC, Treasury, and CSBS through its Mobile Payments Industry Workgroup and its Faster Payments Task Force. FTC solicits insight from industry participants, observers, and regulators through its fintech forums. Regulators have also coordinated with each other through domestic and international interagency financial regulatory bodies, as well as a recently organized interagency collaborative group dedicated to fintech, the prudential regulators' Interagency Fintech Discussion Forum. Cooperation Agreements Some regulators abroad have cooperation agreements with other regulators abroad to share information and to help fintech firms begin operations in other jurisdictions. For example, Singapore regulatory staff told us that the regulator has 16 such agreements with entities from 15 regions that typically consist of (1) referrals to regulatory counterparts for firms attempting to operate in a new country, (2) guidance to firms on regulation in the firm's new country of operation, and (3) information exchange among regulators and between regulators and fintech firms. UK regulators said that these agreements outline how the agencies in each country pledge to assist each other's fintech firms seeking to operate in their country with business-to-business contacts, office space, and other assistance. For example, regulators can discuss trends related to their authorities and share information on fintech firms seeking to expand operations in the other country. A fintech firm we interviewed said that because much financial innovation is international in scope, sharing information across borders with cooperation agreements is important for regulators to understand the new technologies and to be responsive to risks. On February 19, 2018, CFTC and the UK Financial Conduct Authority signed a cooperation agreement, which, according to CFTC officials, will focus on information sharing and facilitate referrals of fintech companies interested in entering the other regulator's market. None of the other U.S. regulators that we interviewed had fintech-specific cooperation agreements with regulators abroad. Most of them said that existing memoranda of understanding were sufficient to facilitate information sharing. One regulator we interviewed abroad noted that establishing fintech-specific cooperation agreements with U.S. regulators is difficult because no direct regulatory counterpart exists since the U.S. financial regulatory structure is significantly different from those of other jurisdictions. Conclusions The emergence of various fintech products has produced benefits to consumers and others. Fintech products often pose risks similar to those of traditional financial products, although in some cases fintech products pose additional risks.
While existing consumer protection and other laws apply to some fintech products and services, in some cases fintech transactions may not be covered by such protections. The extent to which the activities of fintech providers are subject to routine federal oversight varies, but fintech firms not overseen by a federal body generally are subject to oversight by state regulators. While limited evidence of widespread problems has surfaced to date, as the prevalence of fintech products grows, risks posed by segments of the industry that regulators do not routinely examine could correspondingly grow. Therefore, efforts by regulators to monitor developments and risks posed by these firms and their financial innovations remain a sound approach. With fintech products spanning financial sectors and the jurisdictions of numerous U.S. regulatory bodies, many parties have called for improved regulatory coordination. While regulators have taken steps to collaborate, opportunities remain to improve collaboration in line with GAO's leading practices. For example, the Interagency Fintech Discussion Forum and the biennial meetings of the Federal Reserve Mobile Payments Industry Workgroup do not include NCUA and FCC, respectively, agencies that could add valuable perspectives. Without these agencies, these efforts are not fully leveraging relevant agency expertise, and NCUA and FCC may be precluded from learning about risks that are relevant to their authorities. Among other consumer protection issues related to financial account aggregation, market participants do not agree about whether consumers using account aggregators will be reimbursed if they experience fraudulent losses in their financial accounts. Until regulators coordinate and assist the industry in clarifying and balancing the valid interests of consumers, financial account aggregators, and financial institutions, consumers could have to choose between facing potential losses and not using what they may find to be an otherwise valuable financial service. Although regulators have been reluctant to act too quickly in light of related industry efforts, they could increase collaboration to address key issues such as consumer reimbursement for unauthorized transactions. Aligning ongoing collaborative efforts with leading practices could help regulators and market participants resolve disagreements over financial account aggregation and related consumer compliance issues more quickly and in a manner that balances the competing interests involved. With our past work finding that an effective financial regulatory system needs to be flexible and forward-looking to allow regulators to more readily adapt to and oversee new products, U.S. regulators could potentially improve their oversight of innovative fintech activities by considering adoption of some of the efforts already being successfully used by regulators abroad. While constraints may limit the ability or willingness of regulators to fully adopt these practices, opportunities exist to assess ways to tailor them to the U.S. context. Some U.S. regulators have established innovation offices that can help fintech providers more easily obtain needed information from relevant regulators; however, FDIC and NCUA have not established such offices, which could help facilitate these regulators' interactions with fintech firms and with the entities they regulate.
Also, initiatives such as regulatory sandboxes or proofs of concept that provide fintech firms the opportunity to operate and share information with appropriate regulators have helped regulators abroad educate their staff and thereby improve their oversight capacities. However, the Federal Reserve, CFTC, FDIC, NCUA, and SEC have not initiated such programs due to concerns about favoring certain competitors over others or concerns that they may not have the authority to initiate these programs. While constraints may limit the ability or willingness of regulators to fully adopt these practices, additional consideration by these regulators of some of the approaches taken by regulators abroad could assist U.S. regulators in learning more about new financial technologies that could provide useful knowledge for their own regulatory activities. Recommendations for Executive Action We are making a total of sixteen recommendations. The Chair of the Board of Governors of the Federal Reserve System should invite NCUA to participate in the Interagency Fintech Discussion Forum. (Recommendation 1) The Chairman of the Federal Communications Commission (FCC) should discuss with the Presidents of the Federal Reserve Banks of Atlanta and Boston whether the topics of the 2018–2019 biennial regulator meeting of the Federal Reserve's Mobile Payments Industry Workgroup would make FCC participation beneficial to the FCC or the group, and take steps accordingly. (Recommendation 2) The President of the Federal Reserve Bank of Atlanta should discuss with the Chairman of the FCC and the President of the Federal Reserve Bank of Boston whether the topics of the 2018–2019 biennial regulator meeting of the Federal Reserve's Mobile Payments Industry Workgroup would make FCC participation beneficial to the FCC or the group, and take steps accordingly. (Recommendation 3) The President of the Federal Reserve Bank of Boston should discuss with the Chairman of the FCC and the President of the Federal Reserve Bank of Atlanta whether the topics of the 2018–2019 biennial regulator meeting of the Federal Reserve's Mobile Payments Industry Workgroup would make FCC participation beneficial to the FCC or the group, and take steps accordingly. (Recommendation 4) The Director of the Consumer Financial Protection Bureau should engage in collaborative discussions with other relevant financial regulators in a group that includes all relevant stakeholders and has defined agency roles and outcomes to address issues related to consumers' use of account aggregation services. (Recommendation 5) The Chair of the Board of Governors of the Federal Reserve System should engage in collaborative discussions with other relevant financial regulators in a group that includes all relevant stakeholders and has defined agency roles and outcomes to address issues related to consumers' use of account aggregation services. (Recommendation 6) The Chairman of the Federal Deposit Insurance Corporation should engage in collaborative discussions with other relevant financial regulators in a group that includes all relevant stakeholders and has defined agency roles and outcomes to address issues related to consumers' use of account aggregation services.
(Recommendation 7) The Chairman of the National Credit Union Administration should engage in collaborative discussions with other relevant financial regulators in a group that includes all relevant stakeholders and has defined agency roles and outcomes to address issues related to consumers' use of account aggregation services. (Recommendation 8) The Comptroller of the Currency should engage in collaborative discussions with other relevant financial regulators in a group that includes all relevant stakeholders and has defined agency roles and outcomes to address issues related to consumers' use of account aggregation services. (Recommendation 9) The Chairman of the Federal Deposit Insurance Corporation should formally evaluate the feasibility and benefit of establishing an office of innovation or clear contact point, including at least a website with a dedicated email address. (Recommendation 10) The Chairman of the National Credit Union Administration should formally evaluate the feasibility and benefit of establishing an office of innovation or clear contact point, including at least a website with a dedicated email address. (Recommendation 11) The Chair of the Board of Governors of the Federal Reserve System should formally evaluate the feasibility and benefits to their regulatory capacities of adopting certain knowledge-building initiatives related to financial innovation. (Recommendation 12) The Chairman of the Commodity Futures Trading Commission should formally evaluate the feasibility and benefits to their regulatory capacities of adopting certain knowledge-building initiatives related to financial innovation. (Recommendation 13) The Chairman of the Federal Deposit Insurance Corporation should formally evaluate the feasibility and benefits to their regulatory capacities of adopting certain knowledge-building initiatives related to financial innovation. (Recommendation 14) The Chairman of the National Credit Union Administration should formally evaluate the feasibility and benefits to their regulatory capacities of adopting certain knowledge-building initiatives related to financial innovation. (Recommendation 15) The Chairman of the Securities and Exchange Commission should formally evaluate the feasibility and benefits to their regulatory capacities of adopting certain knowledge-building initiatives related to financial innovation. (Recommendation 16) Agency Comments and Our Response We provided a draft of this report to CFPB; CFTC; FCC; FDIC; the Federal Reserve; FTC; NCUA; OCC; SEC; and Treasury, as well as CSBS and FINRA. We received written comments from all of these agencies except for FTC, Treasury, and FINRA; the comments are reprinted in appendixes IV through XII. Agencies to which we directed recommendations agreed with our recommendations, as detailed below. All of these agencies except FCC and NCUA also provided technical comments, which we incorporated as appropriate. In response to our recommendation that CFPB engage in collaborative discussions that incorporate leading practices with other financial regulators on financial account aggregation issues, CFPB stated in its letter that it concurred. It stated that it has taken steps to address related issues independently. CFPB also noted that it has participated in related ongoing collaborative discussions and that it would continue to do so. CFTC concurred with our recommendation that it formally evaluate adopting knowledge-building initiatives related to financial innovation. 
CFTC also noted that it is either using or exploring the use of some of the knowledge-building initiatives identified in the report. However, the agency also raised concerns that, without targeted legislative changes, some of those initiatives may violate federal procurement laws and gift prohibitions. In its letter, FCC agreed with our recommendation that it should discuss with the Presidents of the Federal Reserve Banks of Atlanta and Boston whether the topics of the 2018–2019 biennial regulator meeting of the Federal Reserve's Mobile Payments Industry Working Group would make FCC participation beneficial to FCC or the group, and take steps accordingly. FCC noted that it will reach out to the Federal Reserve Banks of Atlanta and Boston to determine whether FCC participation would be beneficial. Regarding our recommendation that FDIC engage in collaborative discussions that incorporate leading practices with other financial regulators on financial account aggregation issues, FDIC stated in its letter that it recognizes the benefits of engaging in collaborative discussions with other relevant regulators. It noted that it has been involved in ongoing collaborative discussions about such issues and that it would continue to do so, particularly regarding liability for unauthorized transactions and consumer reimbursement. Regarding our recommendation that FDIC formally evaluate the feasibility and benefit of establishing an office of innovation or clear contact point, FDIC stated that it would conduct such an evaluation and noted that it has a long history of engaging in open dialogue with any party interested in discussing matters related to FDIC's mission and responsibilities. Regarding our recommendation that it formally evaluate adopting knowledge-building initiatives related to financial innovation, FDIC stated that it recognizes the importance of knowledge building and has developed a framework and implemented initiatives to facilitate this. It also noted that it will continue ongoing efforts to build knowledge related to financial innovation and will consider other relevant knowledge-building initiatives, as appropriate. In response to our recommendations that the Federal Reserve include NCUA and FCC in relevant working groups, the Federal Reserve stated in its letter that its Board staff would seek NCUA's participation and that staff from the Reserve Banks in Atlanta and Boston would discuss FCC's participation in relevant working groups. Regarding our recommendation that the Federal Reserve engage in collaborative discussions that incorporate leading practices with other financial regulators regarding financial account aggregation issues, the Federal Reserve acknowledged the importance of working together to ensure that consumers are protected, described a variety of ways it already coordinates on such issues, and noted that it will continue to engage in such discussions to address the important issues surrounding reimbursement for consumers using these services. Regarding our recommendation that it formally evaluate adopting knowledge-building initiatives related to financial innovation, the Federal Reserve noted that it recognizes the importance of such efforts and has recently organized a team of experts to ensure that fintech-related information is shared across its organization. 
NCUA stated in its letter that it concurred with our recommendations to engage in collaborative discussions that incorporate leading practices with other financial regulators on financial account aggregation issues, formally evaluate the feasibility and benefit of establishing an office of innovation or clear contact point, and formally evaluate the feasibility and benefits to their regulatory capacities of adopting certain knowledge-building initiatives related to financial innovation. NCUA noted that evaluations of fintech activities are challenging because it does not have vendor authority like the other federal banking regulators. We have previously raised NCUA's lack of vendor authority as a matter for congressional consideration. NCUA stated that it will continue to monitor risks posed by fintech firms to the credit union industry by working with the banking regulators. Regarding our recommendation that OCC engage in collaborative discussions that incorporate leading practices with other financial regulators on financial account aggregation issues, OCC stated in its letter that it recognizes the importance of this recommendation. It noted that it has been involved in ongoing collaborative discussions about such issues and that it would continue to do so. SEC stated in its letter that it concurred with our recommendation to formally evaluate the feasibility and benefits to their regulatory capacities of adopting certain knowledge-building initiatives related to financial innovation. SEC also stated that it will coordinate with other agencies as appropriate during its assessment. In its letter, CSBS drew connections between steps that state regulators have taken and those that we are recommending to federal agencies. CSBS also provided additional information regarding state licensing requirements, which we incorporated into our report. Additionally, CSBS expressed support for our recommendations on federal interagency collaboration and stated that it would support related efforts that respected the role of state regulators. It also said that these efforts could benefit from the participation of state regulators and that it would be willing to participate if invited. Similarly, CSBS expressed support for our recommendations that certain federal agencies formally evaluate the feasibility and benefit of establishing an office of innovation or clear contact point and formally evaluate the feasibility and benefit of adopting knowledge-building initiatives related to financial innovation. However, CSBS also cautioned that knowledge-building initiatives should not preempt state consumer protection and licensing laws for fintech payment providers or fintech lenders. As agreed with your offices, we are sending this report to the appropriate members of Congress; CFPB; CFTC; FCC; FDIC; the Board of Governors of the Federal Reserve; FTC; NCUA; OCC; SEC; and Treasury, as well as CSBS and FINRA. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or evansl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XIII. Appendix I: Objectives, Scope, and Methodology This report examines (1) fintech benefits, risks, and extent of legal or regulatory protections for users; (2) efforts by U.S. 
regulators to oversee fintech activities; (3) challenges that the regulatory environment poses to fintech firms; and (4) the steps taken by domestic and other countries' regulators to encourage financial innovation within their countries. While fintech does not have a standard definition, for the purposes of this report we focused on products and services leveraging technological advances offered by financial institutions; nonbank financial companies; and technology companies within the payment, lending, and wealth management sectors, as well as products or services operating under distributed ledger technology (DLT). To identify these four sectors, we conducted background research and reviewed prior GAO reports on fintech, person-to-person lending, and virtual currencies. Within these four sectors, we examined particular products and services. In the payments technologies sector, we limited our scope to mobile wallets, peer-to-peer payments, and peer-to-business payments products and services. In the fintech lending sector, we focused on consumer lending (including credit card and home improvement loans) and small business lending services from direct and platform lending models; however, we did not include mortgage lending in our scope due to the significant amount of regulation within that subsector. In the digital wealth management sector, we examined firms that exclusively offer advice using algorithms based on consumers' data and risk preferences to assist with or provide investment recommendations and financial advice directly to consumers. We also examined issues relating to fintech account aggregation companies that consolidate and display data from consumers' accounts across financial institutions to help consumers more easily see their overall financial health. For DLT, we focused on providers that used DLT in payments and securities processing and token sales. We also included information on the use of DLT in virtual currencies, such as bitcoin and Ethereum. We also reviewed available data on transaction volumes for the payments, lending, and robo-advising sectors. To identify the benefits provided and risks posed to consumers by fintech services, we conducted a literature review of agency, industry participant, and industry observer documents that analyzed developments within fintech. Using ProQuest, Scopus, SSRN, and Nexis.com databases in the literature review, we identified over 500 relevant articles out of over 1,100 search results by using search terms associated with the four fintech subsectors mentioned above. Our search included articles from 2011 to October 2017. To determine the usefulness of the studies for inclusion, GAO analysts conducted multiple content reviews of the search results to identify which articles could (1) provide credible sources of information to help address our researchable questions, or (2) help identify knowledgeable persons or groups to interview. We excluded articles that were (1) duplicates; (2) related to countries outside our review; (3) about virtual currencies; (4) categorized as "marginally relevant" by analysts based on the article's title, publication date, and source; (5) superseded by more recent documents from the same author or source; (6) from news outlets or nonauthoritative sources; or (7) deemed irrelevant or not useful. 
To obtain the financial services and fintech stakeholder perspectives on fintech benefits and risks, we reviewed academic papers, reports, and studies by other organizations on fintech activities we identified through a literature search. We also conducted over 120 interviews with financial regulators; banks; fintech providers; consumer groups; trade associations; academics; think tanks; and consulting and law firms. We identified potential interviewees by conducting Internet research; reviewing literature search results; reviewing recommended interviewees from our initial interviews; and selecting interviewees based on their relevance to the scope of our review. We selected fintech firms and financial institutions, industry observers, and federal agencies based on the products or services offered by the firms, the expertise of the industry observers, and the oversight authority of the federal agencies. We identified fintech benefits and risks by speaking with relevant regulators and other knowledgeable parties, including the Board of Governors of the Federal Reserve System (Federal Reserve); the Federal Deposit Insurance Corporation (FDIC); the National Credit Union Administration (NCUA); the Office of the Comptroller of the Currency (OCC); the Commodity Futures Trading Commission (CFTC); the Bureau of Consumer Financial Protection, known as the Consumer Financial Protection Bureau (CFPB); the Department of the Treasury (Treasury); the Federal Communications Commission; the Federal Trade Commission (FTC); the Financial Industry Regulatory Authority (FINRA); the Securities and Exchange Commission (SEC); and the Small Business Administration (SBA). To obtain state-level perspectives, we interviewed representatives of the Conference of State Bank Supervisors (CSBS), National Association of Attorneys General, Money Transmitter Regulators Association, National Association of State Credit Union Supervisors, and the North American Securities Administrators Association. We also interviewed staff from three state financial regulatory agencies in states with active fintech firms and regulatory activities: California, Illinois, and New York. To assess the regulatory environment and the various challenges faced by fintech firms, we identified relevant laws and regulations pertaining to fintech companies within our scope by reviewing prior GAO reports on financial regulation and fintech, interviewing agency staff and industry participants, and analyzing relevant agency documents. We also reviewed agency guidance, final rulemakings, initiatives, and enforcement actions. To obtain federal regulatory perspectives, we interviewed staff from the Federal Reserve, FDIC, NCUA, OCC, CFTC, CFPB, Treasury, FTC, FINRA, SEC, and SBA. To determine the steps taken by domestic and other countries' regulators to encourage financial innovation in their countries, we conducted fieldwork in the United Kingdom, Singapore, and Hong Kong, including interviews with regulatory agencies, fintech firms, and industry observers, as well as observations of fintech programs. We also conducted interviews with a regulatory organization and fintech firms operating in Canada. We identified and selected countries for our fieldwork through criteria that focused on the extent to which these locations had significant (1) financial services activities, (2) fintech activities, and (3) fintech regulatory approaches. 
We conducted Internet research, literature searches, and interviews to identify relevant foreign regulators within the selected fieldwork sites. To obtain other countries' regulator perspectives, we interviewed regulators in Hong Kong, Singapore, and the United Kingdom and analyzed agency documents on their regulatory efforts and views on fintech innovations within their financial markets. To obtain the perspective of fintech firms operating in the selected fieldwork sites, we conducted Internet research, literature searches, and interviews to identify relevant fintech firms and foreign trade associations, drawing in part on recommendations from domestic industry participants and observers. We conducted this performance audit from August 2016 to March 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Interagency Collaborative Efforts That Have Addressed Fintech Issues In this appendix, we present interagency working groups (including task forces and other interagency collaborative bodies) that have discussed fintech issues and, in some cases, taken specific actions. This list includes interagency groups that are dedicated exclusively to fintech as well as those that may discuss fintech as part of their broader financial regulatory focus. Also, it includes interagency groups that operate at both the domestic and international levels (see tables 2 and 3). This list is based on information we obtained from the federal financial regulatory agencies we met with and is not intended to be an exhaustive list. Appendix III: Regulatory Sandbox Examples UK Financial Conduct Authority's Regulatory Sandbox According to officials, the purpose of the Financial Conduct Authority's (FCA) sandbox is to allow firms to test innovative products, services, or business models in a live market environment, while ensuring that appropriate protections are in place. FCA has stated that its sandbox has (1) reduced the time and cost of getting innovative ideas to market; (2) facilitated access to finance for innovators; (3) enabled products to be tested and introduced to the market; and (4) helped the agency build appropriate consumer protection safeguards into new products and services. The characteristics of the FCA sandbox, according to the agency, are listed below. Eligible Participants: Currently regulated firms as well as unregulated firms. Eligibility Criteria: Firms submit an application outlining how they meet the eligibility criteria for testing, which are (1) carrying out or supporting financial services business in the UK; (2) genuine innovation; (3) identifiable consumer benefit; (4) need for sandbox testing; and (5) readiness to test. Testing Parameters: If a firm is unauthorized, it must obtain authorization or restricted authorization prior to participation in the sandbox. Prior to participating in the sandbox, a firm must design, and obtain agreement on, the parameters of the sandbox test, including the duration, customer selection, customer safeguards, disclosures, data, and testing plans. FCA has four ways that it can help firms operate more easily in its sandbox. 
First, it can provide restricted authorizations through a tailored authorization process for firms accepted into the sandbox. Any authorization or registration is restricted to allow firms to test only their ideas as agreed upon with agency staff, which is intended to make it easier for firms to meet requirements and to reduce the cost and time to initiate the test, according to the agency. Second, FCA provides individual guidance to firms in the sandbox that are unclear on how the agency's rules apply, whereby FCA will interpret the regulatory requirements in the context of the firm's specific test. Third, in some cases, FCA may be able to waive or modify an unduly burdensome rule for the purposes of the sandbox test, but it cannot waive national or international laws. Finally, FCA can issue no-enforcement-action letters in cases where it cannot issue individual guidance or waivers but believes regulatory relief is justified for the circumstances of the sandbox. According to the agency, no-enforcement-action letters are offered only for the duration of the sandbox test and only to firms that keep to the agreed-upon testing parameters and treat customers fairly. Also, no-enforcement-action letters apply only to FCA disciplinary action and do not limit any liabilities to consumers. Officials we interviewed noted that rule waivers and no-enforcement-action letters are rarely used tools. As of January 2018, FCA had received more than 200 sandbox applications. Eighteen firms had successfully graduated from the first cohort, 24 firms were preparing to test in the second cohort, and 18 other firms had been accepted to test in the third cohort. Monetary Authority of Singapore's Regulatory Sandbox The Monetary Authority of Singapore (MAS) recognized that a lack of clarity over whether a new financial service complies with legal and regulatory requirements could cause some financial institutions or start-ups to choose not to implement an innovation. According to the agency, MAS therefore established its sandbox to encourage experimentation so that promising innovations could be tested in the market and have a chance for wider adoption. In addition, the agency stated that sandbox tests include safeguards to contain the consequences of failure and maintain the overall safety and soundness of the financial system. The characteristics of the MAS sandbox, according to MAS, are listed below. Eligible Participants: Firms that are looking to apply technology in an innovative way to provide financial services that are regulated by MAS, including financial institutions, fintech firms, and professional services firms partnering with such firms. Eligibility Criteria: Firms submit an application outlining how they meet the eligibility criteria for testing, which are that (1) the product uses new technology or existing technology in an innovative way, (2) the product benefits consumers or industry, and (3) the firm intends to deploy the product in Singapore on a broader scale after exiting the sandbox. 
Testing Parameters: Prior to participating in the sandbox, firms must (1) establish clearly defined test scenarios and expected outcomes; (2) put in place boundary conditions that facilitate meaningful experiments while sufficiently protecting the interests of consumers and maintaining the safety and soundness of the industry; (3) assess and mitigate significant associated risks; and (4) define an acceptable exit and transition strategy. MAS stated that it will consider relaxing various regulatory requirements for the duration of the sandbox test. However, the agency emphasized that its sandbox is not intended to be, and cannot be, used as a means to circumvent legal and regulatory requirements. MAS staff determine the specific legal and regulatory requirements that the agency may be willing to relax on a case-by-case basis. According to MAS, some of the regulatory requirements that could be relaxed included maintenance of certain levels of financial soundness, solvency, capital adequacy, and credit ratings, as well as licensing fees, board composition requirements, and management experience requirements, among others. However, MAS has also laid out some requirements that it will not consider relaxing, including those regarding consumer information confidentiality, anti-money laundering, and countering terrorist financing. MAS officials said that all firms in the sandbox will receive some form of regulatory relaxation. As of November 2017, MAS had received more than 30 sandbox applications. One firm had successfully graduated, and a few other firms were testing or were in the process of initiating a sandbox test. Hong Kong Monetary Authority's Fintech Supervisory Sandbox According to the Hong Kong Monetary Authority (HKMA), the purpose of the HKMA sandbox is to enable banks and technology firms to gather data and user feedback so that they can make changes to their innovations, thereby expediting the launch of new products and reducing development costs. HKMA officials stated that the sandbox allows banks and their partnering technology firms to conduct pilot trials of their fintech initiatives involving a limited number of participating customers without the need to achieve full compliance with HKMA's supervisory requirements. The characteristics of the HKMA sandbox, according to the agency, are listed below. Eligible Participants: Regulated banks and their partnering technology firms. Eligibility Criteria: Fintech initiatives that are intended to be launched by banks in Hong Kong are eligible for the sandbox. Testing Parameters: Participating firms must (1) define the scope, phases, timing, and termination of the sandbox test; (2) establish customer protection measures, including disclosures, complaint handling, and compensation for consumer loss; (3) establish risk management controls; and (4) establish a monitoring program for the sandbox test. Similar to MAS, HKMA stated that its sandbox should not be used as a means to bypass applicable supervisory requirements; however, HKMA will relax regulatory requirements on a case-by-case basis. As of November 2017, nine banks had participated in 26 HKMA sandbox tests. Twelve of these tests had been completed, and banks collaborated with fintech firms in 15 of the tests. 
Appendix IV: Comments from the Consumer Financial Protection Bureau Appendix V: Comments from the Commodity Futures Trading Commission Appendix VI: Comments from the Conference of State Bank Supervisors Appendix VII: Comments from the Federal Communications Commission Appendix VIII: Comments from the Federal Deposit Insurance Corporation Appendix IX: Comments from the Board of Governors of the Federal Reserve System Appendix X: Comments from the National Credit Union Administration Appendix XI: Comments from the Office of the Comptroller of the Currency Appendix XII: Comments from the Securities and Exchange Commission Appendix XIII: GAO Contact and Staff Acknowledgments GAO Contact Lawrance L. Evans, Jr., (202) 512-8678 or evansl@gao.gov. Staff Acknowledgments In addition to the contact named above, Cody Goebel (Assistant Director); Chloe Brown (Analyst-in-Charge); Chris Ross; Davis Judson; Ian P. Moloney; and Bethany Benitez made key contributions to this report. Also contributing to this report were Joanna Berry; Timothy Bober; Richard Hung; Pamela Davidson; Tovah Rom; Cynthia Saunders; and Jena Sinkfield.
Why GAO Did This Study Advances in technology and the widespread use of the Internet and mobile communication devices have helped fuel the rise of traditional financial services provided by non-traditional technology-enabled providers, often referred to as fintech. GAO was asked to provide information on various aspects of fintech activities. This report addresses fintech payment, lending, wealth management, and other products. GAO assesses (1) fintech benefits, risks, and protections for users; (2) regulatory oversight of fintech firms; (3) regulatory challenges for fintech firms; and (4) the steps taken by domestic and other countries' regulators to encourage financial innovation within their countries. GAO reviewed available data, literature, and agency documents; analyzed relevant laws and regulations; and conducted over 120 interviews with federal and state regulators, market participants, and observers, as well as regulators in 4 countries with active fintech sectors and varying regulatory approaches. What GAO Found Fintech products, including payments, lending, wealth management, and others, generally provide benefits to consumers, such as convenience and lower costs. For example, fintech robo-advisers offer low-cost investment advice provided solely by algorithms instead of humans. Fintech products pose risks similar to those of traditional products, but their risks may not always be sufficiently addressed by existing laws and regulations. Also, regulators and others noted that fintech activities create data security and privacy concerns and could potentially impact overall financial stability as fintech grows. The extent to which fintech firms are subject to federal oversight of their compliance with applicable laws varies. Securities regulators can oversee fintech investment advisers in the same ways as traditional investment advisers. Federal regulators may review some activities of fintech lenders or payment firms as part of overseeing risks arising from these firms' partnerships with banks or credit unions. In other cases, state regulators primarily oversee fintech firms, but federal regulators could take enforcement actions. Regulators have published consumer complaints against fintech firms, but indications of widespread consumer harm appear limited. The U.S. regulatory structure poses challenges to fintech firms. With numerous regulators involved, fintech firms noted that it can be difficult to identify the applicable laws and how their activities will be regulated. Although regulators have issued some guidance, fintech payment and lending firms say complying with fragmented state requirements is costly and time-consuming. Regulators are collaborating in various ways, including engaging in discussions on financial protections for customers who may experience harm when their accounts are aggregated by a fintech firm and unauthorized transactions occur. Market participants disagree over reimbursement for such consumers, and key regulators are reluctant to act prematurely. Given their mandated consumer protection missions, regulators could act collaboratively to better ensure that consumers avoid financial harm and continue to benefit from these services. GAO has identified leading practices for interagency collaboration, including defining agency roles and responsibilities and defining outcomes. Implementing these practices could increase the effectiveness of regulators' efforts to help resolve this conflict. Regulators abroad have taken various approaches to encourage fintech innovation. 
These include establishing innovation offices to help fintech firms understand applicable regulations and foster regulatory interactions. Some use "regulatory sandboxes" that allow fintech firms to offer products on a limited scale and provide valuable knowledge about products and risks to both firms and regulators. Regulators abroad have also established various mechanisms to coordinate with other agencies on financial innovation. While some U.S. regulators have taken similar steps, others have not, due to concerns about favoring certain competitors or a perceived lack of authority. While these constraints may limit regulators' ability to take such steps, considering these approaches could result in better interactions between U.S. regulators and fintech firms and help regulators increase their understanding of fintech products. This would be consistent with GAO's framework calling for regulatory systems to be flexible and forward looking to help regulators adapt to market innovations. What GAO Recommends GAO is making numerous recommendations related to improving interagency coordination on fintech, addressing competing concerns on financial account aggregation, and evaluating whether it would be feasible and beneficial to adopt regulatory approaches similar to those undertaken by regulators in jurisdictions outside of the United States. In written comments on a draft of this report, the agencies stated that they concurred with GAO's recommendations and would take responsive steps.
Background The Kissell Amendment The Kissell Amendment applies to contracts entered into by DHS as of August 16, 2009, and, according to the Congressional Record, would require DHS to purchase uniforms made in the United States. According to the Congressional Record, the amendment was intended to extend some of the provisions found in the Berry Amendment to DHS. The Berry Amendment generally restricts the Department of Defense’s (DOD) procurement of textiles, among other items, to those produced within the United States. Pursuant to the Kissell Amendment, subject to exceptions, funds appropriated, or otherwise available to DHS, may not be used to procure certain textile items directly related to the national security interests of the United States if the item is not grown, reprocessed, reused, or produced in the United States. The Kissell Amendment specifies categories and types of textiles including items such as clothing, tents, tarpaulins, covers, and protective equipment, as well as the fibers used for fabrics such as cotton and other natural and synthetic fabrics. We refer to these textile items that are directly related to the national security interests of the United States as “Kissell-covered items.” The Kissell Amendment also has multiple exceptions to the procurement restriction, including: Small Purchases Exception – procurements under the simplified acquisition threshold (currently set at $150,000). Availability Exception – satisfactory quality and sufficient quantity of any Kissell-covered item cannot be procured when needed at U.S. market prices. Procurements Outside the United States – procurements by vessels in foreign waters or emergency procurements outside the United States. De Minimis Exception – DHS may accept delivery of a Kissell-covered item if it contains non-compliant (i.e., foreign) fibers as long as the total value of those fibers does not exceed 10 percent of the total purchase price of the item. In addition to the exceptions noted above, the Kissell Amendment also states that the Amendment shall be applied in a manner consistent with U.S. obligations under international agreements. As a result, purchases of Kissell-covered items, including uniforms and body armor, by DHS and its components must be procured consistent with U.S. obligations under relevant U.S. trade agreements. These agreements include the World Trade Organization (WTO) Government Procurement Agreement (GPA) and 14 bilateral or regional free trade agreements (FTAs) with 20 countries. These agreements generally require each party’s goods and services to be given treatment comparable to what is given to domestic goods and services in certain government procurements. The United States implements these obligations through the Trade Agreements Act of 1979 (TAA) and subpart 25.4 of the Federal Acquisition Regulation (FAR). According to DHS and its components, officials apply the Kissell Amendment by following the TAA as implemented in FAR subpart 25.4. As a result, when an international trade agreement applies to a DHS procurement of a Kissell-covered item, the Kissell Amendment does not restrict DHS’s purchasing of textile items from that foreign source, regardless of the item’s relationship to the national security interests of the United States. The Buy American Act The Buy American Act (BAA) can also apply to DHS procurements. The BAA restricts the U.S. government from purchasing nondomestic end products, unless an exception applies. 
Examples of exceptions include cases where the cost of the domestic end product would be unreasonable or where sufficient commercial quantities of domestic end products of a satisfactory quality are not reasonably available. In acquisitions covered by the WTO GPA or FTAs, USTR has waived the Buy American statute and other discriminatory provisions for eligible products. The BAA could apply to procurements of certain textile items valued below the $150,000 simplified acquisition threshold, to which the Kissell Amendment does not apply. The applicability of the act to a particular procurement depends on a number of factors such as the existence of a waiver or whether an exception applies. DHS Obligations for Textile Procurements DHS and its components procure textiles and fabrics for numerous purposes, including clothing and equipping their officers and employees. From October 2009 through June 2017, of DHS's more than $105 billion in obligations for procurements, $774 million, or less than 1 percent, was for textile products, according to FPDS-NG. The majority of textiles and fabrics procured by DHS components are for uniforms and body armor. In particular, of the $774 million, DHS obligated $516 million (or 67 percent) to procure uniforms and body armor for DHS personnel (see fig. 1). DHS Updated Policies and Procedures to Incorporate the Kissell Amendment Restriction DHS Procurement Policies Contain the Kissell Amendment Restriction In August 2009, DHS updated its procurement regulations, the HSAR, to incorporate the Kissell Amendment restriction on the procurement of textiles from foreign sources; since then, DHS has inserted language incorporating the restriction into the 11 uniform and body armor contracts we reviewed. The HSAR establishes standardized DHS policies for all procurement activities within the department; according to DHS officials, all DHS components are to follow these policies. Pursuant to the Kissell Amendment, the restriction on the procurement of textiles became effective for DHS on August 16, 2009. One day later, DHS published an interim rule with a request for comments from the public that amended relevant HSAR sections to reflect the statutory change limiting the procurement of products containing textiles from sources outside the United States (i.e., the Kissell Amendment). On June 9, 2010, after receiving comments from the public, DHS adopted the amendments issued under the interim rule as final and without change. The amended sections detail the restriction on procurements of foreign textiles. They also list the types of textile items included in the restriction (e.g., yarn, wool, and cotton) and the exceptions noted in the Kissell Amendment, and they provide detail on the specific application of trade agreements. Under the regulations, unless an exception applies, a specific clause shall be inserted in solicitations and contract actions detailing the requirement to use domestic goods for any procurement of a Kissell-covered item. Some components within DHS issued supplemental guidance to the HSAR, while other components determined that additional guidance would be duplicative, according to officials. For example, the Transportation Security Administration's (TSA) Internal Guidance and Procedure Memorandum, updated in June 2016, provides additional guidance to contracting officers at TSA on the procurement of textiles. 
This guidance specifically states that for certain textile products, TSA’s contracting officers can only evaluate and/or accept offers from specified countries. Other components determined that additional guidance was not needed because the HSAR adequately covers the requirements of the Kissell Amendment for their purposes. For example, U.S. Secret Service officials stated that, for any procurement of textiles, they insert the required language from the HSAR into the request for proposals in case an item could be considered directly related to U.S. national security interests and thereby subject to the Kissell Amendment restriction. DHS officials stated that contracts for the procurement of uniforms and body armor are their only contracts for textile-related products that are directly related to national security interests. See figure 2 for examples of DHS uniforms and body armor. According to DHS officials, other textile or apparel procurements, such as curtains for DHS offices, would likely not be subject to the foreign procurement restriction under the Kissell Amendment because they are not directly related to national security interests. DHS components can also procure textiles through the Federal Supply Schedules (FSS) program. When ordering from these contracts, DHS contracting officers would make the determination of whether or not the purchase is directly related to national security interests and therefore subject to the Kissell Amendment restriction, according to DHS officials. DHS officials also explained that if the purchase under the FSS program contract is subject to the Kissell Amendment, the contracting officer would be responsible for inserting the required language from the HSAR into the delivery order. All 11 of the contracts we reviewed for uniforms and body armor entered into by a DHS component since August 2009 included language regarding the restriction of the Kissell Amendment. Many of DHS’s components that buy uniforms, including TSA and U.S. Customs and Border Protection (CBP), were already under contract with a vendor to supply uniforms when the Kissell Amendment took effect in August 2009. The Kissell Amendment specified that it applied to contracts entered into by DHS 180 days after the enactment of the American Recovery and Reinvestment Act of 2009. Therefore, DHS and its components did not apply the Kissell restriction to contracts signed before August 16, 2009. Several components separately signed contracts with uniform vendors after prior contracts expired and the Kissell restriction was in effect. For example, in February 2010, TSA signed a contract for uniforms with a vendor that included language restricting the foreign procurement of those uniforms per the Kissell Amendment. In 2012, DHS decided to enter into a single, department-wide contract for the procurement of uniforms for all of its components. While that contract was being developed, several components signed additional contracts for uniforms with vendors to ensure a continuous supply of uniform items for their officers. This included a “bridge” contract between TSA and a vendor in February 2013, which also included language referencing the Kissell Amendment and language restricting the foreign procurement of those uniforms. In September 2014, DHS entered into its current 5-year, department-wide uniforms contract that provides eight DHS components with uniform clothing items. One vendor holds this uniforms contract. 
DHS Has Procedures to Ensure That the Kissell Amendment Restriction Is Properly Applied DHS employs multiple procedures, according to officials, in an effort to ensure that the restriction on the procurement of foreign textiles from the Kissell Amendment was and is properly applied, including (1) a standardized procurement contract review process; (2) a requirement for all DHS components to use established department-wide contracts; (3) verification procedures to ensure the stated country of origin is correct; and (4) trainings on foreign procurement restrictions. First, the DHS official review process for all procurements helps ensure that the Kissell restriction is applied, if appropriate, to contracts for textiles and apparel, according to officials. Specifically, each procurement goes through a standardized review process that includes several levels of acquisition supervisors and DHS legal counsel, depending on the estimated dollar amount of the procurement. The DHS Acquisition Manual requires this review and approval process, which is designed to ensure compliance with all relevant federal acquisition laws, regulations, policies, and procedures. Through this process, officials evaluate the proposed contract for a number of restrictions, such as the appropriate use of a small business set-aside or a sole-source contract, which must also be reviewed by supervisors and legal departments before contract approval. According to DHS officials, while the applicability of the Kissell Amendment is part of the standard review process, there is no separate review for whether the foreign procurement restriction should be applied to the procurement. Officials also stated that the small number of contracting officers handling these textile procurements are aware of the requirements. Second, DHS now uses department-wide contracts for uniforms and body armor rather than each component entering into its own contracts for those items. Establishing and using these department-wide contracts increases efficiencies and reduces duplication in the department’s procurement processes, according to DHS documentation. According to agency officials, the establishment of a department-wide uniforms contract for use by all DHS components reduces opportunities for mistakes, including the possibility of a contracting officer issuing a contract that does not include the required restriction for a Kissell-covered item. Third, the department relies on the vendor to verify that the item is in compliance with all applicable restrictions. It is not the responsibility of the agency or department to verify the country of origin of an item procured through a contract. According to the FAR, the contracting officer may rely on the vendor’s certification of the country of origin for an end product when evaluating a foreign offer. DHS officials told us that, for each contract, the vendor is responsible for certifying the country of origin and notifying DHS if a uniform item from a previously approved country is no longer available and a replacement must be located. According to representatives from the current uniforms vendor, both its manufacturing facilities and its subcontractors have measures and internal controls in place to ensure that all items under the current uniforms contract are sourced from designated countries. 
Furthermore, if an item is misrepresented (that is, not actually from the reported country of origin), other vendors in the industry could report such suspected violations to DHS, and the department would investigate possible false claims. According to DHS officials, no reports have been made against the vendor for the current uniforms contract. In addition, CBP's Textiles and Trade Agreements Division is responsible for the Textile Production Verification Team Program. Under this program, CBP deploys teams of personnel drawn from many DHS components to FTA partner countries to visit manufacturers of textiles imported into the United States. These teams review textile production and verify compliance with the terms of the FTA. CBP provided information that showed it had made numerous verification visits to factories used by DHS's uniform vendor since October 2011. However, CBP officials said they did not know the degree to which the vendor's imports from these factories were used to fulfill the DHS uniform contract. Fourth, DHS provided training in 2009 and in 2017 to contracting personnel who conduct textile and apparel procurements subject to the Kissell Amendment and other Buy American-like provisions to ensure that the requirements are applied appropriately. The Kissell Amendment required that the Secretary of DHS ensure that each member of DHS's acquisition workforce "who participates personally and substantially in the acquisition of textiles on a regular basis receives training during fiscal year 2009 on the requirements" of the Kissell Amendment and the regulations implementing the amendment. The amendment further states that any training program developed after August 2009 must include comprehensive information on the Kissell Amendment restriction. According to officials, appropriate DHS contracting personnel were trained on the requirements of the Kissell Amendment through a presentation to DHS's Acquisition Policy Board in July 2009. DHS officials, however, were unable to identify the number of personnel present during this meeting or the materials associated with this training. According to DHS officials, no further training on Kissell requirements was conducted until June and July 2017, when DHS officials conducted two webinars for approximately 570 DHS acquisition professionals on the requirements of the Kissell Amendment and its implications under the President's Buy American and Hire American Executive Order from April 2017. According to DHS officials, these trainings were prompted by our review of the implementation of the Kissell Amendment, as well as the President's new actions to increase opportunities for government agencies to buy American and hire American. We observed the July 2017 training, at the invitation of DHS, and confirmed that the materials and topics covered included Kissell Amendment requirements. The Kissell Amendment Restriction Has a Limited Effect on DHS Textile Procurements In practice, the Kissell Amendment affects DHS textile purchases in a limited manner due to multiple factors. For most DHS components, these factors limit the effect of the Kissell Amendment restriction to certain foreign textile procurements directly related to U.S. national security interests that fall between $150,000 and $191,000. Specifically, from October 2009 to June 2017, only 14 DHS-awarded textile contracts, excluding TSA, fell within this range, according to FPDS-NG data. TSA's textile procurements, unlike those of most other DHS components, are excluded from the coverage of most U.S. 
international agreements. Therefore, the Kissell Amendment restricts TSA's procurement of certain textiles above $150,000 to the United States and three foreign countries: Canada, Mexico, and Chile. According to DHS officials, the current contracts to which the Kissell Amendment applies are department-wide contracts for uniforms and body armor. As of June 2017, under the current uniforms contract, 58 percent of the value of uniform items ordered by DHS came from foreign sources. In addition, DHS officials stated that the current body armor contracts source all textile items from the United States. The Kissell Amendment Restriction Affects a Limited Number of DHS Textile Procurements Due to Multiple Factors The number of DHS's textile procurements that could be affected by the Kissell Amendment restriction is limited by multiple factors. The Kissell Amendment restriction applies only to textile items that are directly related to national security interests, applies only to procurements above the $150,000 simplified acquisition threshold, and must be applied in a manner consistent with U.S. obligations under international agreements. In practice, this limits the number of procurements that could be affected by the amendment's restriction to those of Kissell-covered items between the current simplified acquisition threshold and the current WTO GPA threshold of $191,000, a $41,000 range, for most DHS components. Furthermore, statutory and regulatory provisions generally require that government agencies acquire U.S.-made or designated country end products and services for procurements covered by the WTO GPA. For most of DHS, the procurement of certain textiles is covered by the WTO GPA. Therefore, due to these regulations, most DHS components are limited in their textile procurements at or above $191,000 to the United States or designated countries, regardless of the Kissell Amendment. However, the number of TSA contracts that could be affected by the Kissell Amendment restriction is potentially greater since procurement of textiles by TSA is not subject to the statutory and regulatory provisions that affect the rest of DHS's procurement of textiles. U.S. obligations under international agreements, as implemented by the TAA and FAR, require that offers of eligible products receive equal consideration with domestic offers. The FAR additionally specifies that agencies, "in acquisitions covered by the WTO GPA, acquire only U.S.-made or designated country end products unless offers for such end products are either not received or are insufficient to fulfill the requirements." To be a U.S. procurement covered by the WTO GPA, the procurement must (1) be performed by a covered government entity; (2) be for a covered item; and (3) be at or above the WTO GPA threshold, which is currently $191,000. Other international trade agreements have their own thresholds currently ranging from $25,000 to $191,000. Figure 3 outlines the various key procurement thresholds that may affect the designated and non-designated countries from which DHS could source textiles with respect to the Kissell Amendment. Most of these dollar thresholds are subject to revision approximately every 2 years. Due to the multiple factors that affect DHS's textile procurements, most of DHS's components may source eligible textiles from up to 128 designated countries outside the United States in procurements at or above $191,000 (see fig. 4). This is because most DHS components' textile procurements are considered covered items under the WTO GPA. 
Therefore, most DHS components’ foreign textile procurements that either meet or exceed the current $191,000 threshold are restricted to designated countries regardless of the Kissell Amendment, due to the FAR. These designated countries include WTO GPA countries, Free Trade Agreement countries, least developed countries, and Caribbean Basin countries. As noted above, multiple factors influence DHS’s procurement of textiles and the number of contracts that could be affected by the Kissell Amendment restriction. Based on our analysis of contract data from FPDS-NG, from October 2009 to June 2017, DHS awarded 111 textile contracts above the simplified acquisition threshold. Of the 111 contracts, only 14 DHS textile contracts, excluding TSA, were valued between the simplified acquisition threshold and $191,000, the current threshold for coverage under the WTO GPA. In part, because FPDS-NG does not designate whether or not a contract is directly related to the national security interests of the United States, we could not determine whether these contracts were subject to the provisions of the Kissell Amendment. According to DHS officials, the only current contracts considered directly related to U.S. national security and therefore subject to the Kissell Amendment are for uniforms and body armor. The Kissell Amendment includes additional language regarding the use of any availability exception and states that any availability exception issued by DHS shall be publically posted on a government procurement internet site within 7 days of the contract. However, according to agency officials, since the passage of the Kissell Amendment, DHS has not issued any waivers for availability exceptions and has therefore been limited to procuring certain textile items from the United States and designated countries identified in the FAR. TSA Procurement Is Excluded from Coverage of Most U.S. International Trade Agreements The Kissell Amendment restriction affects TSA textile procurements differently than other DHS components. As implemented, the Kissell Amendment restricts TSA’s procurement of certain textiles above $150,000 to the United States, Canada, Mexico, and Chile. TSA’s procurement of textiles is different because it is not included in the U.S. coverage schedules of the WTO GPA and all U.S. free trade agreements, with the exception of the North American Free Trade Agreement and the U.S.-Chile Free Trade Agreement. According to USTR officials, some of TSA’s security functions were originally held by the Federal Aviation Administration (FAA), which is not subject to the FAR. Furthermore, TSA was also not subject to the FAR prior to 2008, until Congress passed legislation removing the requirement that TSA procurements be subject to the acquisition management system established by the administrator of the FAA. Those circumstances resulted in TSA’s exclusion from the WTO GPA for textiles and most other international trade agreements, according to USTR officials. Figure 5 illustrates when the Kissell Amendment could affect TSA procurements and the applicability of international trade agreements. Based on our analysis of FPDS-NG data, from October 2009 to June 2017, TSA entered into 13 textile contracts above the simplified acquisition threshold. DHS Procured Over Half of the Value of Textile Items for the Current Uniforms Contract from Foreign Sources From October 2014 to June 2017, 58 percent of the value of uniform items ordered by DHS came from outside the United States. 
DHS Procured Over Half of the Value of Textile Items for the Current Uniforms Contract from Foreign Sources

From October 2014 to June 2017, 58 percent of the value of uniform items ordered by DHS came from outside the United States. In September 2014, DHS entered into its current department-wide uniforms contract, the largest-value textile contract since the passage of the Kissell Amendment in 2009. In the request for proposals, DHS included in the uniforms contract documentation a clause detailing the Kissell restriction on the purchase of foreign items. As implemented, the clause in the Kissell Amendment stating that the act shall be applied consistent with U.S. obligations under international agreements, combined with the purchasing restriction in the TAA, allows the uniforms contract vendor to source items from up to 128 designated countries. In the request for proposals for the current uniforms contract, DHS components included a list of over 900 uniform items, including shirts, pants, shoes, and insignias. The vendor that was awarded the contract then reported the cost and expected country of origin for each item, which DHS approved. Table 1 shows the estimated cost and quantity of items estimated to be procured under the contract for components that primarily have a national security function. After DHS entered into the uniforms contract in September 2014, DHS components began ordering uniform items under the contract. In addition to the more than 900 types of uniform items that were agreed upon at the initiation of the contract, DHS components issued contract modifications to add or remove uniform items from the approved list. Common types of items expected to be ordered included uniform shirts, pants, socks, and shoes that met DHS component specifications. From October 2014 to June 2017, $164.6 million in uniform items was ordered by DHS components that primarily have a national security function. Of that amount, 58 percent, or $96 million, was reported as originating in 12 countries outside the United States. The remaining 42 percent, or $69 million, was reported as originating in the United States. By value, Mexico, the largest source of uniform items from outside of the United States, accounted for 30 percent of the ordered uniform items. In addition, 8 percent of the value of uniform items was sourced from least developed countries, including Cambodia (5 percent) and Bangladesh (2 percent). Figure 6 illustrates the percentage value of DHS procurement of uniform items by reported country of origin for the current contract by components that primarily have a national security function. Based on our analysis of the vendor’s ordering data, the majority of the value of uniform items ordered by each of the five components was sourced from outside the United States. In addition, for three of the five components, a larger value of ordered uniform items was sourced from Mexico than from any other country, including the United States. Table 2 shows the total value of the uniform ordering data for the five DHS components that primarily have a national security function under the current uniforms contract. From October 2014 through June 2017, CBP ordered approximately $101.1 million in uniform items under the contract, and TSA ordered approximately $53.5 million. CBP and TSA accounted for the majority of the dollar value of uniform orders from October 2014 through June 2017, representing 94 percent of the value of uniform items ordered by DHS components that primarily have a national security function under the contract. Specifically, 32 percent of the value of TSA-ordered uniform items was from the United States, with the other 68 percent sourced from Mexico.
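The country-of-origin figures above are shares of the total ordered value. The short sketch below illustrates that calculation using made-up line items; the record layout, item names, and dollar amounts are hypothetical and are not the vendor's actual ordering data.

```python
# Hypothetical ordering records (item, reported country of origin, ordered value in dollars).
# These rows are invented for illustration; they are not the vendor's actual data.
orders = [
    ("uniform shirt", "Mexico",        1_200_000),
    ("uniform pants", "United States",   900_000),
    ("footwear",      "United States",   450_000),
    ("socks",         "Cambodia",        150_000),
    ("insignia",      "Bangladesh",       50_000),
]

total_value = sum(value for _, _, value in orders)

# Aggregate ordered value by reported country of origin.
by_country = {}
for _, country, value in orders:
    by_country[country] = by_country.get(country, 0) + value

for country, value in sorted(by_country.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{country}: ${value:,} ({100 * value / total_value:.0f}% of ordered value)")

# Share of ordered value sourced outside the United States; for the actual contract this
# was reported as 58 percent ($96 million of $164.6 million).
foreign_value = sum(value for _, country, value in orders if country != "United States")
print(f"Foreign-sourced share: {100 * foreign_value / total_value:.0f}%")
```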
As mentioned above, the Kissell Amendment, as implemented, restricts TSA’s foreign procurement of certain textiles above $150,000 to Canada, Mexico, and Chile. Sourcing Only from the United States Could Be More Costly for DHS According to DHS officials and representatives of the current uniforms vendor, both the price of the uniform items and the time it would take to find appropriate U.S. sources could potentially increase if current statutory and trade agreements requirements changed and DHS was required to source all of its uniform items from the United States. According to the FAR, it is the responsibility of agencies to obtain the best value for the U.S. government. According to DHS officials, the best value may be sourced from foreign countries, especially when the country is a party to an international trade agreement with the United States. DHS officials and representatives of the vendor stated that it would be possible to source most of the items in the current uniforms contract from the United States. However, representatives of the vendor speculated that sourcing only from the United States could result in a 50 to 150 percent price increase for items that are currently sourced from foreign countries. Therefore, DHS costs could increase for over half of the uniform items currently procured from foreign sources. Additionally, DHS officials stated that the domestic availability of some items, such as footwear, is limited and that it could take approximately 2 years to find U.S. suppliers for all items currently procured from foreign sources. DHS Reported Procuring All Body Armor from U.S. Sources The second largest current textile contract is the department-wide contract for body armor. Effective November 1, 2016, the department- wide contract for body armor is not to exceed $93.8 million. As of June 2017, DHS had obligated $6.8 million under these body armor contracts. DHS did not provide GAO documentary evidence that the body armor is produced in the United States. However, according to DHS officials, textile items under the current body armor contracts are produced in the United States. According to DHS officials, to verify that materials are produced in the United States, DHS visited the site where these materials are produced and assembled in the United States. In addition, the contract contains specific language restricting the vendor from procuring items that are not in compliance with the Kissell Amendment. Agency Comments We provided a draft of this report for review and comment to DHS and USTR. DHS did not provide written comments on the draft report but provided a number of technical comments that we incorporated as appropriate. USTR did not provide written or technical comments to the draft report. We are sending copies of this report to the appropriate congressional committees, to the Secretary of Homeland Security, the U.S. Trade Representative, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612 or gianopoulosk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. 
Appendix I: Objectives, Scope, and Methodology A Senate Report accompanying Senate Bill 1619, a bill related to the Consolidated Appropriations Act, 2016, includes a provision for us to review the Department of Homeland Security’s (DHS) implementation and compliance with the Kissell Amendment, as well as the effectiveness of the policy. This report examines the extent to which (1) DHS has incorporated the Kissell Amendment into its procurement policies and procedures and (2) the Kissell Amendment affects DHS’s procurement of textiles. To address these objectives, we reviewed relevant laws and policies, such as Section 604 of the American Recovery and Reinvestment Act of 2009 (the “Kissell Amendment”), the Trade Agreements Act of 1979 (TAA) as amended, the Federal Acquisition Regulations (FAR), Homeland Security Acquisition Regulations (HSAR), and the DHS Acquisition Manual, as well as select U.S. free trade agreements. We interviewed officials from DHS and the office of the U.S. Trade Representative (USTR). We also interviewed officials from the U.S. textile and apparel industry, including the National Council of Textile Organizations and the American Apparel and Footwear Association. Finally, we spoke with officials from the vendor for DHS’s current department-wide uniforms contract, VF Imagewear. To determine the extent to which DHS incorporated the Kissell Amendment into its procurement policies and procedures, we reviewed relevant DHS documents and policies, including the HSAR, interim and final rules on the implementation of the Kissell Amendment, and component-level procurement guidance. We also interviewed officials from DHS’s Office of the Chief Procurement Officer and from the components in DHS that have their own contracting authority, including U.S. Customs and Border Protection (CBP), Federal Emergency Management Agency (FEMA), U.S. Immigration and Customs Enforcement (ICE), Transportation Security Administration (TSA), U.S. Coast Guard, and U.S. Secret Service. To analyze whether or not language indicating the restriction on the procurement of foreign textiles from the Kissell Amendment was included in DHS and component level contracts, we reviewed contract files for 11 available uniforms and body armor contracts entered into since August 16, 2009, the date the Kissell Amendment became effective. We reviewed contract files from DHS uniform and body armor contracts because these are the only DHS textile contracts that are directly related to U.S. national security and therefore subject to the Kissell Amendment, according to DHS officials. We identified these uniforms and body armor contracts through reviews of Federal Procurement Data System–Next Generation (FPDS-NG) data for DHS and components contracts in groups 83 and 84 since August 16, 2009, and through discussions with CBP, DHS, and TSA officials. We were not, however, able to review every uniforms contract all DHS components have entered into since August 16, 2009, because, for example, some of the contract files were no longer available, consistent with federal document retention policies, according to DHS officials. The results of our reviews of selected contracts are not generalizable to all DHS textile contracts entered into since August 16, 2009. To determine the extent to which the Kissell Amendment affects DHS’s procurement of textiles, we reviewed relevant government regulations and laws, U.S. 
international agreements, DHS contract files, and ordering data for the largest textile contract since the effective date of the Kissell Amendment. We reviewed the FAR to evaluate which international agreements are applicable to DHS textile procurements, the thresholds for each international trade agreement, and the countries from which DHS may procure certain textiles. We reviewed the U.S. central government coverage schedule of the World Trade Organization (WTO) Government Procurement Agreement (GPA) to determine which procurements by DHS component are covered by the WTO GPA and therefore subject to the purchasing restriction in the TAA, as implemented in the FAR. To identify the dollar range for textile contracts that could be affected by the Kissell Amendment, we reviewed the Kissell Amendment and the relevant provisions of the FAR. We also interviewed USTR officials and DHS officials from the Office of the Chief Procurement Officer, CBP, and TSA to understand how international trade agreements affect DHS’s textile procurement under the Kissell Amendment. We reviewed award and obligation data from the FPDS-NG to identify the number of textile contracts awarded by DHS components and delivery orders through the General Services Administration’s Federal Supply Schedules program above the simplified acquisition threshold and those that could be affected by the Kissell Amendment. To assess the reliability of procurement data from FPDS-NG, we reviewed relevant documentation and performed verification through electronic testing. We determined the data to be sufficiently reliable for the purposes of this report. To evaluate DHS’s procurement of uniform items from the United States versus foreign sources, we reviewed the ordering estimates, which were provided as an attachment to DHS’s request for proposals for the current uniforms contract, and ordering data provided by the vendor for the current uniforms contract. The current uniform and body armor contracts are the only two active contracts to which the Kissell Amendment applies, according to DHS officials. For the purposes of ordering data and estimates, we did not review previous contracts. In addition, since all body armor items are sourced from the United States, we focused our ordering analysis on the current uniforms contract. Because we did not evaluate ordering data for previous DHS uniforms contracts, these values cannot be extrapolated to all DHS uniforms contracts. To calculate the ordering estimates for the current uniforms contract, we analyzed data created by DHS and the uniform vendor during the development phase of the contract. To focus on the DHS components that primarily have a national security function under the current uniforms contract, we analyzed ordering estimates to identify the number of uniform items that DHS components reported as being directly related to national security. Under the current uniforms contract estimates, CBP, ICE, National Protection and Programs Directorate (NPPD), TSA, and U.S. Secret Service are the five DHS components that reported the majority of uniform items as being directly related to national security. As a result, we included these five DHS components in our analysis of the ordering estimates under the current uniforms contract. We did not include FEMA or Federal Law Enforcement Training Center (FLETC) in our analysis because FEMA did not list any uniform items as related to national security and FLETC identified only one item out of 88 as related to national security. 
We also did not include ordering estimates from the Food and Drug Administration, which is a party to the contract but is not a DHS component. In addition, the U.S. Coast Guard did not provide ordering estimates since it was not included in the original proposal for the current uniforms contract. For each of the identified DHS components that reported the majority of uniform items as directly related to national security, we analyzed the estimated data based on description, the estimated quantity, the unit price, and the country of origin. While we did not analyze the value of any contract modifications that added or removed uniform items from the contract, we did review select modifications and found that contract modifications were generally consistent with the original contract estimates for that non-generalizable sample. To obtain insights into the countries of origin in the modifications, we reviewed a small, non- generalizable sample of 10 modifications. We concluded that the breakdown between domestic and foreign sourced items for the items added through the modifications was generally consistent with the breakdown between domestic and foreign sourced items in the original contracts’ estimates. To determine the reasonableness of the processes by which DHS and its vendors generated these estimates, we interviewed knowledgeable officials, reviewed documents submitted by the vendor, and performed data reliability testing. DHS officials told us that they had provided the contractor with detailed lists of the textile items it required, and the vendor reported that they determined the prices and countries of origin based on prevailing market conditions. DHS officials then reviewed the estimates provided by the vendor and approved the items, price, and country of origin under the contract. DHS officials and the vendor informed us that because these estimates reflected market conditions when the contract was signed, actual purchases of items might be from countries other than those listed in the contract, depending on changes in those conditions and availability of the items. We determined these estimates were sufficiently reliable to represent DHS’s intended purchases of textile products by country of origin under this contract. To analyze the orders of uniform items, we relied on ordering data provided by the vendor for the current uniform contract. We reviewed uniform ordering data for the five DHS components that reported the majority of uniform items as being directly related to national security: CBP, ICE, NPPD, TSA, and the U.S. Secret Service. The uniform ordering data included items ordered by individual DHS employees through an allowance system and by DHS components through bulk orders. We did not include the U.S. Coast Guard in our analysis since it primarily orders U.S.-made uniform items through the Department of Defense’s Defense Logistics Agency, according to Coast Guard officials. We analyzed the value of uniform items procured from the United States and foreign sources based on the reported country of origin and component from October 2014 to June 2017. To assess the reliability of the ordering data provided by the vendor, we reviewed the data for inconsistencies. We clarified with the vendor the relevant data sets for our analysis and any discrepancies we identified in the data. DHS relies on the vendor to provide the countries of origin, and it was beyond the scope of this engagement for us to verify the vendor provided country of origin. 
We determined that the ordering data were sufficiently reliable for the purposes of comparing orders to estimates by countries of origin for uniforms under the contract, and presenting details about purchases from the United States versus other countries of origin. The result of our analysis is limited to the current department-wide uniforms contract with DHS and cannot be extrapolated to other DHS textile contracts. For the body armor contracts, we relied on FPDS-NG data for the obligations under the current and previous contracts. We also interviewed DHS officials who identified the country of origin of the items purchased under the current body armor contracts; it was beyond the scope of this engagement to verify the agency-provided country of origin. To assess the reliability of the obligations data from FPDS-NG, we reviewed relevant documentation and performed verification through electronic testing. We determined the data to be sufficiently reliable for the purposes of this report. We conducted this performance audit from January 2017 to November 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual mentioned above, Adam Cowles (Assistant Director), Christopher J. Mulkins (Analyst-in-Charge), Martin Wilson, Lynn Cothern, Martin de Alteriis, Neil Doherty, Grace Lui, and Julia Kennon made key contributions to this report.
Why GAO Did This Study The U.S. textile industry sustained significant losses when textile production fell from $71 billion in 2006 to $46 billion in 2009, according to the U.S. Bureau of Economic Analysis. As a part of the American Recovery and Reinvestment Act of 2009, Congress passed the Kissell Amendment, which placed a restriction on DHS's procurement of certain textiles from foreign sources. DHS has applied this restriction to uniforms and body armor. The amendment was intended to increase opportunities for American textile and apparel manufacturers, according to the Senate Committee on Appropriations. The Senate report that accompanied Senate Bill 1619, a bill related to the Consolidated Appropriations Act, 2016, includes a provision for GAO to review DHS's implementation of the Kissell Amendment and its effectiveness. This report addresses the extent to which (1) DHS has incorporated the Kissell Amendment into its procurement policies and procedures and (2) the Kissell Amendment affects DHS's procurement of textiles. To perform this work, GAO analyzed DHS policies and procedures, procurement obligations data, textile contract files, and vendor ordering data from DHS's current uniforms contract. GAO also interviewed DHS and U.S. Trade Representative officials and private sector representatives, including the vendor for the current DHS uniforms contract. GAO received technical comments from DHS, which GAO incorporated as appropriate. What GAO Found The U.S. Department of Homeland Security (DHS) has updated its policies and procedures to incorporate a restriction on its procurement of certain textiles as specified in the “Kissell Amendment.” In August 2009, DHS amended its procurement policies to reflect the Kissell Amendment restriction and describe the limitations on DHS's procurement of specified textiles from sources outside the United States. All 11 contracts GAO reviewed for uniforms and body armor entered into by a DHS component since August 2009 included language regarding the Kissell Amendment restriction. In addition, according to officials, DHS has several procedures to ensure that contracting officers adhere to the requirements of the Kissell Amendment. These include a required acquisition review process; a requirement for all DHS components to use department-wide contracts; verification procedures; and training for contracting personnel on the Kissell Amendment restriction. In practice, the Kissell Amendment restriction affects a limited number of procurements due to multiple factors and has not fully restricted DHS from purchasing textiles from foreign sources. The restriction applies only to certain textile purchases directly related to U.S. national security interests above the simplified acquisition threshold of $150,000, and must be applied consistent with U.S. obligations under international agreements. For most of DHS, this restriction limits only procurements that fall between $150,000 and $191,000, the World Trade Organization Government Procurement Agreement threshold. However, because procurements by the Transportation Security Administration (TSA) of textiles are excluded from most international agreements, the Kissell Amendment prevents TSA's purchasing of certain textiles above $150,000 from all but three foreign countries. In September 2014, DHS signed a uniforms contract, the largest procurement covered by the Kissell Amendment. 
Under this contract, DHS has ordered 58 percent of the $164.6 million in uniform items from foreign sources through June 2017 (see figure).
Background

The MTW demonstration was authorized by the Omnibus Consolidated Rescissions and Appropriations Act of 1996 (1996 Act). The demonstration’s ultimate goal is to identify successful approaches that can be applied to public housing agencies nationwide. As of November 2017, a total of 39 agencies were authorized to participate in the demonstration (see fig. 1); however, two agencies consolidated their MTW demonstration programs and are counted as one agency for purposes of MTW participation. The MTW Office within the Office of Public and Indian Housing (PIH) is responsible for implementing the demonstration. The MTW Office currently includes a program director and eight coordinators, who are each assigned to a specific group of MTW agencies. MTW coordinators facilitate the reviews of planned and implemented activities and are responsible for coordinating with other HUD offices, including local HUD field offices, to obtain additional input on MTW agencies’ planned activities and accomplishments.

Objectives and Key Demonstration Requirements

The 1996 Act that created the MTW demonstration provides three objectives for the demonstration: (1) reduce costs and achieve greater cost-effectiveness in federal housing expenditures; (2) give incentives to families with children where the head of household is working, seeking work, or is preparing for work by participating in job training, educational programs, or programs that assist people to obtain employment and become economically self-sufficient; and (3) increase housing choices for low-income families. In making these changes, MTW agencies must comply with the following five contractual requirements derived from the 1996 Act:

1. assist substantially the same total number of eligible low-income families under MTW as would have been served absent the demonstration;

2. maintain a mix of families (by family size) comparable to those they would have served without the demonstration;

3. ensure that at least 75 percent of the families assisted are very low-income families;

4. establish a reasonable rent policy to encourage employment and self-sufficiency; and

5. assure that the housing the agencies provide meets HUD’s housing quality standards.

Funding for MTW Agencies

MTW agencies do not receive special funding allocations; rather, they receive funds from the three traditional primary funding sources (public housing capital funds, public housing operating funds, and voucher funds). Public housing agencies generally are required to use the funds from each source only for specific purposes, but MTW agencies may combine the money from the three sources and use the funds for a variety of HUD-approved activities. This fungibility is intended to give MTW agencies greater flexibility. For example, public housing operating funds are traditionally used to make up the difference between rents charged for units and the cost of operating them. Capital funds are traditionally used for modernization and management improvements, while voucher funds traditionally provide rental assistance in the private market. However, an MTW agency may use public housing capital funds to issue additional vouchers or use voucher funds to develop more public housing. MTW agencies also have the authority to use their funds to implement innovative activities that differ from traditional housing assistance. For instance, an MTW agency can use funds to replace public housing with mixed-income communities or reach special-needs populations using vouchers paired with supportive services.
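As a minimal illustration of the funding fungibility described above, the sketch below contrasts the traditional model, in which each funding stream generally may be spent only on its own program, with the MTW single-fund model, in which the three streams can be combined and used interchangeably. The agency, dollar amounts, and allocations shown are hypothetical.

```python
# Hypothetical annual funding for an illustrative MTW agency; the amounts are invented.
funding = {
    "public housing operating": 5_000_000,   # traditionally: operating public housing units
    "public housing capital":   3_000_000,   # traditionally: modernization and improvements
    "voucher":                 12_000_000,   # traditionally: rental assistance in the private market
}

# MTW single-fund flexibility: the three sources may be combined into one agency-wide
# fund and used interchangeably for HUD-approved activities.
single_fund = sum(funding.values())

# One hypothetical use of the combined fund: shifting some voucher funding to public
# housing operations and to a local, nontraditional activity, which a non-MTW agency
# generally could not do because each source is restricted to its own purpose.
mtw_uses = {
    "public housing operating":        7_000_000,
    "public housing capital":          3_000_000,
    "voucher assistance":              9_000_000,
    "local, nontraditional activity":  1_000_000,
}
assert sum(mtw_uses.values()) == single_fund
print(f"Combined MTW fund: ${single_fund:,}")
```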
Terms of Participation for MTW Agencies, Including Reporting HUD entered into a standard agreement with each existing MTW agency. HUD created the agreement in 2008 to standardize the contract terms. The agreement references an attachment that sets out reporting requirements (Attachment B or Form 50900) and another attachment (Attachment C) that lists the specific sections of the United States Housing Act of 1937, as amended, and its implementing regulations that an MTW agency may waive as part of its MTW flexibility. While the standard agreement is generally the same for all MTW agencies, two attachments are tailored to individual agencies: a description of the formulas for determining the amounts of funding each agency will receive (Attachment A) and a section that may include some agency-specific authorizations (Attachment D). In addition to statutory requirements, the agreement requires all existing MTW agencies to submit to HUD an annual plan for approval as well as an annual report. Attachment B outlines the information that agencies are required to include in their annual plans and annual reports. For example, MTW agencies must include certain elements in their annual plans for each activity they propose to adopt, such as (1) a description of the activity and its anticipated effect in relation to the statutory objective under which the activity is proposed; (2) the HUD metrics that will be used to quantify the changes the agency anticipates as a result of the activity, including baseline performance level and yearly benchmarks; and (3) the MTW authorizations that give the agency the flexibility to conduct the activity. Similarly, MTW agencies are required to include in their annual reports information about housing stocks and leasing as well as information required for HUD to assess compliance with key demonstration requirements (such as number and mix of families served and percentage of very low-income households served). For rent-reform activities, agencies are also required to describe the number and results of any hardship requests. MTW agencies also are required to report standard information through HUD data systems. MTW agencies must submit tenant-related data into the Moving to Work section of the Public and Indian Housing Information Center (MTW-PIC). According to HUD officials, the MTW-PIC module was created in 2007 because the standard PIC system that non-MTW agencies use could not accommodate some of the activities allowed under MTW, such as rent calculations that vary from HUD’s standard calculations. MTW agencies also must submit year-end financial information into FDS, and HUD issued special instructions to enable MTW agencies to complete the reporting. Finally, MTW agencies must report voucher unit utilization in VMS. MTW Demonstration Expansion The Consolidated Appropriations Act, 2016 authorized HUD to expand the MTW demonstration from the current 39 public housing agencies to an additional 100 agencies (expansion agencies) over 7 years. The 2016 act requires that the expansion agencies must be high performers at the time of application and that the selected agencies represent geographic diversity. The expansion agencies will be brought into the demonstration by cohort, as required by the 2016 act. HUD plans to designate the initial cohort by summer 2018. As directed by the 2016 act, within each cohort each agency will implement one policy change that HUD selects for that cohort to test. 
The 2016 act requires that expansion agencies be rigorously evaluated and that HUD establish a research advisory committee to advise the Secretary on policies to study and methods of research and evaluation. HUD established the committee and received its recommendations on which policy changes to test and how to evaluate them. As of November 2017, HUD had not announced the policy changes each cohort will be testing. On January 23, 2017, HUD published in the Federal Register a request for comment on a draft operations notice for the MTW expansion. The draft operations notice establishes requirements for the implementation and continued operations of the demonstration and describes waivers available, terms of participation, funding and financial reporting, and administration and oversight for agencies joining under the expansion. The comment period closed on June 5, 2017. According to HUD officials, there will be another opportunity for comment before the notice is finalized in early 2018. HUD Took Steps to Improve Oversight, but Has Not Conducted Workforce Planning for Demonstration Expansion Since our last review of the MTW demonstration in April 2012, HUD has taken steps to improve MTW agencies’ annual reporting and its process for monitoring agencies’ compliance with requirements of the demonstration. However, we found that HUD’s oversight—review of annual reports and compliance assessments—has not been timely and HUD has not fully documented its process for assessing compliance, due to limited staffing and competing priorities. While the MTW Office added staff to assist with the oversight of the current 39 MTW agencies, HUD has not conducted workforce planning to address the resources needed for overseeing the 100 agencies to be added through the MTW demonstration expansion. HUD Took Some Steps to Improve Reporting by MTW Agencies and Its Process for Monitoring Compliance HUD has taken steps to improve MTW agencies’ annual reporting. While agencies were already required to submit annual plans and reports, HUD revised its reporting requirements for MTW agencies in May 2013 in response to our recommendations. Specifically, HUD revised Attachment B to provide detailed clarifications on the meaning of the three statutory objectives of the demonstration and relevant standard metrics. For example, for each of the statutory objectives, the revised guidance requires that the MTW agency use and report on all of the applicable standard metrics listed in Attachment B. The revised attachment also includes standard tables for MTW agencies to provide operating information and financial information. Additionally, HUD conducted training on the revised Attachment B and issued a document containing answers to frequently asked questions about the revisions. HUD also took some steps to improve its monitoring of MTW agencies’ compliance with the five requirements of the demonstration. Specifically, in response to our 2012 recommendation that HUD implement a process for assessing compliance with the requirements, HUD developed a process and began to track MTW agencies’ compliance with each of the five requirements. The 2013 revisions to Attachment B added requirements for agencies to submit information in annual reports with which HUD assesses compliance. The attachment includes standard tables for MTW agencies to provide specific information on households served, family sizes, and income levels. 
According to our review of HUD documents and discussions with HUD officials, the MTW Office uses this information, along with information MTW agencies submitted in other HUD data systems, to assess compliance with the five requirements. Table 1 summarizes HUD’s description of its compliance processes for each of the five requirements.

HUD’s Monitoring Was Not Timely and Its Process for Assessing MTW Agencies’ Compliance Was Not Well Documented

Annual Report Review and Compliance Assessment Timeliness

We found that HUD’s reviews of MTW agencies’ annual reports were not completed in a timely manner; reviews were completed multiple years after the annual reports were submitted. Specifically, HUD did not complete its review of the agencies’ 2013–2015 reports until March 2017 and its review of 2016 reports was still underway as of November 2017 (see fig. 2). As previously mentioned, MTW agencies submit information about their MTW activities, financial information, data related to compliance assessments, and other information through annual reports. Attachment B states that HUD officials will use this information to monitor MTW agencies, particularly their compliance with some of the five requirements. Although the standard agreement gives MTW agencies 90 days after the end of their fiscal year to submit the annual report to HUD, it does not specify a time frame for HUD’s review of the report. However, it states that HUD must notify an agency in writing if it requires additional information or clarifications to the information provided in the report. HUD officials said that limited staffing resources in the MTW Office in 2014–2016 led to delays in the reviews. Officials further noted that in 2014 and 2015 existing staff in the MTW Office had to focus on other priorities, including renegotiating the standard agreement, and then in 2016 on implementing the expansion of the demonstration. Untimely reviews of MTW annual reports diminish oversight and can result in delays on HUD’s part in responding to issues arising from the review, agencies not having an opportunity to respond to concerns promptly, and HUD’s inability to assess the information reported to determine effects on tenants. As previously described, HUD developed a process to assess compliance with the five requirements of the demonstration, but its implementation of the process was not always timely. HUD did not complete its 2013–2015 reviews of MTW agencies’ compliance with the five requirements until 2017. In March 2016, HUD officials provided us with a tracker of agencies’ compliance with the requirements that indicated HUD started its review for 2013 but had not yet completed that assessment or started reviewing compliance for subsequent years. In July 2017, HUD provided us with evidence it had completed the 2013–2016 assessments for all five requirements.

Documentation of Compliance Assessment Process

In addition, HUD has not clearly documented its process for assessing compliance with the five requirements. HUD officials told us they did not have documentation of the process they used to assess compliance with most of the requirements, such as the methodologies and data used. As previously discussed, HUD has different processes for assessing compliance with each requirement and the information it uses to determine compliance comes from various data sources.
Based on our review of HUD documents (including Attachment B and the recently completed compliance assessments) and discussions with HUD officials, it was not always clear what methods HUD used to support its compliance determinations. For example, documentation we reviewed on the requirement that MTW agencies ensure that 75 percent of the households served are very low-income did not state the methodology used to determine if MTW agencies were in compliance. While our review of the documentation indicated that tenant income in all relevant programs was used, it was not clear if the percentages of tenants in each income category were averaged or weighted to obtain the final percentage of tenants with very low incomes. Additionally, while Attachment B briefly describes the data sources used for some of the compliance assessments, HUD has no documentation specifying what data variables to extract and how to use them. The lack of written instructions led to HUD having to redo its assessment of compliance with the requirement that MTW agencies ensure that 75 percent of the households served are very low-income. Specifically, HUD officials noted that HUD staff initially determined compliance with this requirement based on tenants’ current income, but later determined that they needed to reassess compliance with the requirement using tenants’ income at the time of entry to the program. In September 2017, HUD officials said they were developing internal standard operating procedures to document their approach to assessing compliance with each requirement, and expected to complete the procedures by early calendar year 2018. However, because HUD has not finalized these standard operating procedures, it is unclear whether they fully document the steps and data needed to complete the compliance assessments. Federal internal control standards state that management should develop and maintain documentation of its internal control system, including for controls related to any compliance objectives of the agency. They note that effective documentation assists in management’s design of internal control by establishing and communicating purposes, roles and responsibilities, and specifics of implementation to agency staff. HUD officials stated that limited staffing in the MTW Office in 2014–2016 and competing priorities led to delays in compliance assessments and development of full documentation on procedures. Limited documentation for assessing compliance can lead to inconsistent monitoring of MTW agencies’ compliance with the five requirements. For example, as previously discussed, the lack of documentation on the process and data needed led to the need to reassess compliance with the requirement that MTW agencies ensure that 75 percent of the households served are very low-income. HUD Has Not Yet Completed Workforce Planning for the MTW Demonstration While HUD has taken some steps to address oversight and staff responsibilities for an expanded demonstration, it has not conducted workforce planning for the expanded demonstration. Federal internal control standards state that management should design control activities, including management of human capital, to achieve objectives and respond to risks. Management is to continually assess the knowledge, skills, and ability needs of the entity so that the entity is able to obtain a workforce that has the required knowledge, skills, and abilities to achieve organizational goals. 
In previous work on human capital, we identified key principles for effective strategic workforce planning, including determining the critical skills and competencies needed to achieve current and future programmatic results and developing strategies that are tailored to address gaps in number, deployment, and alignment of human capital approaches for enabling and sustaining the contributions of all critical skills and competencies. In 2014, the MTW Office engaged in a workforce analysis exercise to determine staffing levels needed to oversee the MTW demonstration as configured at that time. Based on the 2014 analysis, the MTW Office determined that seven staff were needed to oversee the 39 participating agencies. In 2014, the MTW Office had four staff and in 2015, five (see table 2). Officials told us that in 2016, an additional five staff were hired in the MTW Office and that one staff member would focus on financial analysis and compliance assessment. In 2017, the MTW staff count was nine. In July 2017, officials told us that based on the 2014 workforce analysis, they determined they had sufficient resources to oversee the current 39 MTW agencies. In response to a congressional request to determine resource needs for MTW expansion, in December 2015 the MTW Office updated its 2014 workforce analysis. As with the 2014 analysis, the 2015 workforce analysis discussed the level of staffing resources needed and not the skill sets and competencies needed to oversee the expanded MTW demonstration and actions to fill any gaps. According to this analysis, HUD determined that a significant number of staff would be needed to oversee the new agencies. Specifically, 41 full-time equivalent personnel across various HUD offices would be needed to meet the resource needs of the expansion in 2016–2020. In September 2017, HUD officials said that because of the current budget environment, the agency planned to address the staffing gap identified in the 2015 analysis by developing a joint oversight structure between the MTW Office and PIH’s Office of Field Operations. According to HUD officials, currently the MTW Office is primarily responsible for monitoring MTW agencies (reviewing annual plans and reports and assessing compliance with demonstration requirements). Field office staff in PIH assist with the review of MTW agencies’ overall financial health and public housing occupancy and voucher leasing information, among other things. HUD plans to continue to follow this oversight structure for the existing 39 agencies, but have field office staff assume more responsibilities for agencies that will join the MTW demonstration as a result of the expansion. MTW Office officials said they have been having internal discussions through a working group with field office staff in PIH to discuss the new oversight structure and determine how best to meet resource needs associated with the expansion. However, as of November 2017, the MTW Office and PIH had not completed plans for joint oversight of the expanded MTW demonstration with the field offices or assessed the knowledge, skills, or abilities needed to implement this new oversight structure. As previously stated, the first cohort of public housing agencies will join the expanded MTW demonstration by summer 2018. MTW Office officials also told us that PIH is planning to finalize a workforce plan by early calendar year 2018 that will address the broad resource needs of PIH. 
However, according to MTW Office officials, PIH has not yet determined the extent to which the human capital resource needs for the MTW expansion will be incorporated into the PIH workforce plan. Without strategic workforce planning that reflects the oversight strategy for the expanded MTW demonstration, identifies the critical skills and competencies needed, and includes strategies to address any gaps, HUD will not be able to reasonably ensure that it has the staffing resources necessary to oversee an expanded demonstration.

Data Limitations Hinder Analysis of MTW Flexibilities, and Outcomes and MTW Reserve Levels Raise Questions

We found significant differences between MTW agencies and comparable non-MTW agencies in key outcomes: MTW agencies had lower public housing occupancy rates, lower voucher unit utilization rates, and higher program expenses in 2009–2015 than similar non-MTW agencies. MTW funding flexibilities may partly explain the differences, but limitations in HUD data (such as the inability to determine which funding source was used to fund which activity) make it difficult to more fully understand the differences. MTW agencies accumulated relatively large reserves of voucher funding, but HUD has performed limited oversight of reserves for these agencies.

MTW Agencies Had Lower Public Housing Occupancy and Voucher Utilization Rates and Higher Expenses Than Comparable Non-MTW Agencies in Recent Years

We found significant differences between MTW agencies and comparable non-MTW agencies in key outcomes of the public housing and voucher programs, possibly affecting the number of tenants MTW agencies served. MTW agencies had lower yearly median public housing occupancy rates in fiscal years 2009–2015 than comparable non-MTW agencies, and the difference was statistically significant (see fig. 3). The median share of public housing units occupied (public housing occupancy rate) for MTW agencies was 3 percentage points lower than for similar non-MTW agencies (93 percent versus 96 percent). The middle 50 percent of MTW agencies in our analysis had occupancy rates that ranged from 88 to 96 percent, while the non-MTW agencies in our analysis had occupancy rates that ranged from 92 to 98 percent. MTW agencies also had lower rates of voucher unit utilization than comparable non-MTW agencies in each year during 2009–2015 (see fig. 4). The voucher unit utilization rate for MTW agencies was about 3 percentage points lower than for similar non-MTW agencies (about 93 percent versus about 96 percent). The middle 50 percent of the MTW agencies had utilization rates that ranged from about 82 to 97 percent, while the non-MTW agencies had utilization rates that ranged from about 92 to 98 percent. We also analyzed expenses for the public housing and voucher programs of MTW agencies and comparable non-MTW agencies in 2009–2015. For the public housing program, we included all operating expenses the MTW and non-MTW agencies incurred that were associated with their public housing properties. As figure 5 shows, median public housing operating expenses during 2009–2015 were $7,853 per household per year for MTW agencies, compared with $6,622 for non-MTW agencies, a difference of about 19 percent. The middle 50 percent of the MTW agencies had total public housing expenses that ranged from $6,048 to $11,436, while the non-MTW agencies had expenses that ranged from $5,827 to $8,355.
We also compared the operating expenses associated with the central office cost center of MTW and comparable non-MTW agencies. If larger public housing agencies implement HUD’s property management rules, they generally are required to create a central office cost center, which manages all the centralized activities of the agency and earns fees for providing day-to-day oversight of individual public housing properties such as property management. As figure 6 shows, median public housing operating expenses related to the central office cost center for MTW agencies were about 9 percent higher than comparable non-MTW agencies in each year during 2009–2015 ($2,745 per household and $2,520, respectively). The middle 50 percent of the MTW agencies had central office cost center expenses associated with their public housing program that ranged from $1,509 to $5,798, while the non-MTW agencies had expenses that ranged from $1,635 to $4,939 per household. For the voucher program, we separately examined expenses in 2009– 2015 related to administration, subsidy (housing assistance payments), and tenant services. MTW agencies had higher median administrative, subsidy, and tenant services expenses than comparable non-MTW agencies. As figure 7 shows, median yearly administrative expenses for MTW agencies were $922 per household and $642 for comparable non- MTW agencies, a difference of about 43 percent. The middle 50 percent of the MTW agencies had voucher administrative expenses that ranged from $713 to $1,179, while the non-MTW agencies had expenses that ranged from $555 to $762. As shown in figure 8, the yearly median voucher subsidy expenses for MTW agencies were about 25 percent higher than for comparable non- MTW agencies ($8,295 per household for MTW agencies and $6,629 per household for non-MTW agencies). The middle 50 percent of the MTW agencies had voucher subsidy expenses that ranged from $6,128 to $12,201, while the non-MTW agencies had expenses that ranged from $5,524 to $8,178. As shown in figure 9, the tenant services expenses for the voucher program were higher for MTW agencies than for comparable non-MTW agencies, and many non-MTW agencies did not record any expenses for tenant services in HUD’s database for the years we reviewed. These results are consistent with MTW agencies having more flexibility to use funds to provide tenant services. The median yearly expenses for tenant services for MTW agencies were about $37 per household. Although tenant services are an allowable administrative expense under the traditional voucher program, more than half of the non-MTW agencies in our sample did not report any expenses for tenant services for most of the years we examined. Non-MTW agencies generally use their voucher funds to make subsidy payments to landlords and for administrative expenses. The statistical matching and modeling analysis we conducted improved upon unadjusted comparisons of MTW and non-MTW agencies, but it was not designed to estimate the causal effects of MTW flexibilities. To reduce the influence of known differences between the two groups, we accounted for broad characteristics that differed between MTW agencies and non-MTW agencies. However, our analysis did not attempt to measure the unique circumstances of each MTW agency, but rather broad outcomes relevant to public housing and voucher programs in general. For additional details on our methods and results, see appendix II. 
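The percentage differences reported in this discussion follow directly from the median per-household figures. The short check below, a simple arithmetic illustration rather than the statistical analysis described in appendix II, reproduces them.

```python
# Arithmetic check of the reported differences between MTW and comparable non-MTW medians.
def pct_higher(mtw_median: float, non_mtw_median: float) -> float:
    """Percent by which the MTW median exceeds the comparable non-MTW median."""
    return 100 * (mtw_median - non_mtw_median) / non_mtw_median

print(f"{pct_higher(7_853, 6_622):.1f}")  # public housing operating expenses: 18.6 (about 19 percent)
print(f"{pct_higher(2_745, 2_520):.1f}")  # central office cost center: 8.9 (about 9 percent)
print(f"{pct_higher(922, 642):.1f}")      # voucher administrative expenses: 43.6 (about 43 percent)
print(f"{pct_higher(8_295, 6_629):.1f}")  # voucher subsidy expenses: 25.1 (about 25 percent)
```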
As noted by others who studied the MTW demonstration and our previous report, no central source of systematic data exists for MTW activities and outcomes. However, a July 2017 report by Abt Associates, a research and consulting firm, identified and tested indicators they developed to track the performance of MTW demonstrations and compare them to similar non-MTW agencies. As with our analysis, the Abt study found MTW agencies tended to have worse outcomes than similar non-MTW agencies on the indicators of voucher administrative expenses and voucher unit utilization. The study also analyzed other indicators such as increases in earnings of nonelderly, nondisabled households; households served by a service coordinator; and share of voucher households in neighborhoods with lower poverty rates. On many of the other indicators analyzed, the study found that MTW agencies did better than similar non- MTW agencies. For example, for the self-sufficiency measures examined in the study, estimates showed that household earnings were more likely to increase at MTW agencies than at comparison non-MTW agencies. The study also concluded that MTW agencies were able to serve a significant number of individuals not reached by traditional housing assistance and that in many cases, they were also able to offer additional supportive services. However, because our analysis did not look at these other indicators, we could not confirm these results. Limitations in HUD Data Make It Difficult to Fully Understand Differences Potential Reasons for Observed Differences The observed differences in public housing occupancy and voucher unit utilization rates and program expenses between MTW and non-MTW agencies, which could affect the number of tenants served, may be a result of MTW agencies’ ability to (1) combine their public housing and voucher funds and use them interchangeably and (2) use funds to implement policies that go beyond traditional forms of housing assistance. Combined funding and fungibility. The single fund authorization permits MTW agencies to combine their public housing operating, public housing capital, and voucher funds into a single agency-wide funding source and use the funds interchangeably. For instance, voucher funds may be used for public housing expenses and vice versa, which could affect utilization and occupancy rates. Our analysis of 2015 data from FDS, which HUD uses to account for the agencies’ MTW financial data, showed that 19 MTW agencies transferred voucher funding to their public housing program as the result of the single-fund authorization (that is, they transferred more funding to their public housing accounts than they received through their public housing funding allocation). This analysis was possible because HUD requires agencies to report financial information in FDS at the public housing project level. However, the data could not be used to determine whether all the funds transferred to the public housing accounts were spent on public housing expenses because, according to HUD officials, FDS is not a system that tracks the actual drawdown or disbursement of funds. Instead, public housing agencies use the system to report year-end financial activity. (As discussed later in this report, FDS data could not be used to determine the extent to which public housing funds were used for voucher expenses.) Nontraditional activities. 
Public housing occupancy and voucher unit utilization rates might be lower for MTW agencies in part because MTW agencies can use funds to implement policies that go beyond traditional forms of housing assistance. Since October 2009, the demonstration’s “broader uses of funds” authorization under the standard agreement has permitted all MTW agencies to adopt local, nontraditional activities, which HUD guidance organizes into four categories (see table 3). In July 2017, HUD provided us with data it had recently compiled on the number of households served through local, nontraditional activities, by MTW agency, during 2009–2016 (see fig. 10). According to these data, in 2009 four agencies implemented at least one type of local, nontraditional housing assistance activity and served 1,177 households (that is, less than 1 household served through local, nontraditional housing assistance for every 100 MTW public housing and voucher units available). In 2016, the number of agencies that implemented at least one local, nontraditional housing assistance activity grew to 25 agencies, which served 9,787 households (about 2 households served through local, nontraditional housing assistance for every 100 MTW public housing and voucher unit available). Some of these households could be served through a rental assistance program that offers a lower level of subsidy than is available to households served through traditional voucher and public housing programs. For example, a local, nontraditional activity could result in an MTW agency lowering its share of housing assistance, thereby increasing the tenant’s share of rent. Conversely, HUD officials pointed out that because MTW agencies assist hard-to-serve households, the subsidies provided to these households could be higher than the subsidy provided under HUD’s traditional housing assistance programs. As such, a household served through local, nontraditional housing activity may not be equivalent to a household served under the traditional voucher or public housing program. Other factors related to expenses. According to HUD officials, factors that could explain the observed differences in the expenses for the public housing and voucher programs of MTW agencies and non-MTW agencies include that MTW agencies typically (1) need more time and resources to develop and implement “innovative” activities, (2) serve hard-to-serve households such as those experiencing homelessness, and (3) provide additional services to the households they serve as a result of the funding flexibilities. According to a University of North Carolina at Chapel Hill study, nearly all MTW agencies have used program flexibility to provide supportive housing for various hard-to-serve populations, including the previously homeless, mentally disabled, developmentally disabled, formerly incarcerated, domestic abuse victims, youth aging out of foster care, and those with substance abuse issues. Some of these programs were provided through sponsor-based voucher programs administered by partner agencies, which required coordination between the MTW agency and the partnering agencies. Data Limitations Hinder Fuller Explanations Limitations in HUD data make it difficult to more fully explain the differences that may affect the number of households served. For instance, HUD cannot measure how participation in the demonstration affected the occupancy and voucher unit utilization rates of MTW agencies. 
As previously discussed, HUD uses FDS to account for the agencies’ MTW funds, but once combined in the system, the funds are decoupled from the original funding source and it is difficult to determine how these funds were used. As described earlier, although FDS data could be used to illustrate how many agencies transferred voucher funding to their public housing program, these data could not be used to illustrate how many agencies transferred public housing funding to their voucher program because, according to HUD officials, FDS does not identify the source of funding that is available for the voucher program and local, nontraditional activities. Similarly, FDS cannot measure expenses that were for local, nontraditional activities because FDS expenditure categories are not tailored to the MTW demonstration. HUD officials said the reporting of expenses associated with local, nontraditional activities varies by MTW agency, which affects where FDS captures such expenses. HUD has not made changes to FDS because, according to HUD officials, FDS is an accounting system that tracks agencies’ year-end financial activity and, therefore, is not designed to keep track of these data. Furthermore, historical data do not exist on the households served through local, nontraditional activities. Although HUD provided us a spreadsheet it compiled in July 2017 with data on the number of households served through local, nontraditional housing assistance activities from 2009 through 2016, HUD had to manually compile the spreadsheet because its PIC system does not capture data on these households. HUD officials said the agency was considering capturing some data on local, nontraditional households in PIC, but making this change would require HUD and MTW agencies to devote resources to update their systems. HUD previously considered making changes to the system. In 2012, HUD issued a Federal Register notice requesting public comment on changes to the system to track households provided assistance through local, nontraditional activities. According to the notice, agencies had not been reporting these families into the system, which made it difficult to accurately account for the number of MTW families being served. The notice further stated that the MTW Office was manually collecting data on the number of families served each year but the PIC system needed to be revised to make information collection easier for MTW agencies and HUD. HUD officials said HUD did not have the information technology resources needed to make this change in PIC. Federal internal control standards state that management should use quality information to achieve the entity’s objectives. Additionally, one of the statutory objectives of the MTW demonstration is to reduce costs and achieve greater cost-effectiveness in federal housing expenditures, and a key demonstration requirement is to assist substantially the same total number of eligible low-income families under MTW as would have been served absent the demonstration. As discussed previously, intermingled funding streams, the purpose and structure of FDS, and limitations in PIC have combined to limit the data collected and readily available on the MTW demonstration. According to HUD officials, it would be difficult for HUD to require existing agencies to report additional financial data because doing so would require changes to the standard agreement, which generally cannot occur without mutual agreement between the agencies and HUD. 
Yet agencies’ specific reporting obligations are not set forth in the general standard agreement but rather in Attachment B, which HUD already expanded without requiring an amendment to the standard agreement in 2011 and 2013 and proposed to do in 2016. The standard agreement states that agencies must provide in their annual plan the information required in Attachment B, and under the standard agreement, HUD retains flexibility to determine what constitutes satisfactory completion of the annual plan. Further, the standard agreement, which sets forth general covenants for the demonstration and not specific data points or reporting definitions, specifically acknowledges that HUD must have the “flexibility to design and test various approaches” for housing assistance and that the agencies agree “to cooperate fully with HUD” in the monitoring and evaluation of the MTW demonstration. Under the standard agreement, MTW agencies must provide in their annual report “the information necessary for HUD to assess the Agency’s activities,” without specific detail. As with the annual plan, HUD retains flexibility to determine what data agencies must report. Without more comprehensive data on the uses of MTW demonstration funds and households served through local, nontraditional activities, HUD cannot assess the performance of MTW agencies in relation to public housing occupancy and voucher unit utilization rates and program expenses, which could affect the number of tenants served.

MTW Agencies Had Relatively Large Reserves of Unspent Voucher Funding, but HUD Performed Limited Oversight

Agency Reserves of Funding

MTW agencies have accumulated relatively large reserves of voucher funding. The agencies are able to accumulate more reserves because their voucher funding formula differs from the formula used for the traditional voucher program. HUD allocates voucher funds to non-MTW agencies based on leasing rates and subsidy costs from the prior year. As a result, these agencies have an incentive to expend their voucher funding to keep their budget utilization rate high. However, the voucher formula for MTW agencies, which is outlined in an attachment to each agency’s standard agreement, is generally based on the actual, per-unit costs in the year prior to the agency joining the MTW demonstration. Because the voucher allocation is not tied to prior-year subsidy expenses, MTW agencies do not have the same incentive that non-MTW agencies have to use all their voucher funds in a given year. According to 2016 HUD voucher reserve data, the 39 MTW agencies had almost as much in voucher reserves as the 2,166 non-MTW agencies combined. Specifically, as of December 31, 2016, MTW agencies had a total of about $1.11 billion in voucher reserves, whereas the 2,166 non-MTW agencies had slightly higher reserves of $1.13 billion. Similar to our analysis above, we compared the voucher reserves MTW agencies held to the reserves comparable non-MTW agencies held. As figure 11 shows, as of December 31, 2016, the median amount of reserves per household held by MTW agencies was $2,462 compared to $480 for comparable non-MTW agencies (a difference of $1,982, or about 5 times higher). After we completed our analysis, HUD provided updated reserve levels as of June 30, 2017, that showed that MTW agencies’ reserves exceeded non-MTW agencies’ reserves. MTW agencies had a total of about $808 million in reserves while non-MTW agencies had reserves of about $737 million. HUD has performed limited oversight of MTW reserves.
For example, before 2016 HUD did not capture data that would help it determine the amount of voucher reserves held by MTW agencies. In January 2012, as part of a new cash management requirement for the voucher program, HUD implemented a process to help transition the accrual of excess funds held at the agency level to HUD-held reserves. According to HUD officials, this process was only partially implemented for MTW agencies at that time because voucher subsidy expenses were comingled with expenses associated with other allowable MTW activities in VMS. In 2016, HUD added new fields in VMS to distinguish various MTW nonvoucher subsidy expenses (such as those for capital improvements of existing public housing units and operation of local, nontraditional activities) from unspent funding. According to HUD officials, these enhancements to VMS now allow HUD to keep track of MTW agencies’ reserves. Consequently, in 2016, HUD started cash reconciliations for MTW agencies, consistent with the cash management procedures for non-MTW agencies. HUD also does not have a process to systematically determine if MTW agencies have public housing reserves. Unlike for the voucher program, HUD was unable to determine the extent to which MTW agencies had unspent public housing funding in reserves. According to HUD officials, FDS tracks overall MTW reserves but HUD cannot distinguish between public housing and voucher reserves because the MTW funds are combined into a single account and because HUD does not have a system similar to VMS that separately tracks public housing reserves for MTW agencies. According to federal internal control standards, management should internally communicate the necessary quality information, such as through written communication, to help achieve the agency’s objectives. Management should design control activities—policies, procedures, techniques, and mechanisms—to achieve objectives and respond to risks. Maintaining comprehensive written policies and procedures will help ensure that control activities are in place to address risks and carry out management directives. We also developed criteria—a set of questions— that agency managers and Congress could use to identify and manage fee revenue instability, including identifying common principles and leading practices for managing reserve funds. For example, managers should ask what level of reserves is to be maintained. In addition, they should consider establishing minimum and maximum reserve levels to ensure accountability and adherence to the reserve’s goals, justifying the numbers with program data and risk management considerations. When established reserve goals have been achieved, such as to fund planned capital investments, the level of reserve should be assessed for reasonableness. However, HUD has not developed and implemented a process to monitor MTW reserves. Specifically, it does not monitor existing MTW agencies’ reserves to determine what agencies plan to do with these reserves and assess whether the plans are reasonable given the amount of reserves. HUD officials said it would require a significant amount of time to individually compare the MTW agencies’ reserves to their planned activities. However, HUD officials said that the draft operations notice for the MTW expansion proposes requiring that expansion agencies hold no more than 1 year of voucher subsidy funds in reserves. But the notice did not outline a plan to evaluate whether this cap was appropriate, and HUD has not yet finalized the notice. 
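A cap of the kind described in the draft operations notice could be operationalized with a simple check. The sketch below is only an illustration of that idea, written in Python with hypothetical agency names and figures and an assumed 12-month threshold taken from the draft notice; it is not a description of HUD's actual VMS or FDS logic or of any procedure HUD has adopted.

```python
# Illustrative sketch of a cap-based reserve check. All agency data are
# hypothetical; the 12-month cap reflects the draft operations notice's
# proposal that expansion agencies hold no more than 1 year of voucher
# subsidy funds in reserve.

from dataclasses import dataclass

@dataclass
class AgencyReserves:
    name: str
    voucher_reserves: float          # unspent voucher funding held by the agency
    annual_voucher_subsidy: float    # prior-year voucher subsidy expenses
    households_served: int

def months_of_reserves(agency: AgencyReserves) -> float:
    """Express reserves as months of voucher subsidy spending."""
    monthly_subsidy = agency.annual_voucher_subsidy / 12
    return agency.voucher_reserves / monthly_subsidy

def flag_agencies(agencies, cap_months=12):
    """Flag agencies whose reserves exceed the proposed one-year cap."""
    flagged = []
    for a in agencies:
        months = months_of_reserves(a)
        per_household = a.voucher_reserves / a.households_served
        if months > cap_months:
            flagged.append((a.name, round(months, 1), round(per_household)))
    return flagged

# Hypothetical data for two agencies
portfolio = [
    AgencyReserves("Agency A", voucher_reserves=30_000_000,
                   annual_voucher_subsidy=24_000_000, households_served=12_000),
    AgencyReserves("Agency B", voucher_reserves=2_000_000,
                   annual_voucher_subsidy=18_000_000, households_served=9_000),
]

for name, months, per_hh in flag_agencies(portfolio):
    print(f"{name}: {months} months of voucher subsidy in reserve (${per_hh:,} per household)")
# Agency A is flagged at 15.0 months; Agency B, at about 1.3 months, is not.
```

A check of this kind would still need to be paired with a review of each flagged agency's plans for the reserves, since a large balance could reflect planned capital investments or other approved activities.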
Without a process to monitor existing MTW agencies’ plans for their reserves and the appropriateness of the cap for expansion agencies, HUD cannot provide reasonable assurance that MTW agencies have sound plans for expending their reserves.

HUD Does Not Have a Framework for Monitoring the Effect of Certain Policies on Tenants

HUD does not have a framework—standard definitions for rent reform and self-sufficiency, clear guidance on reporting requirements, or analysis plans—for monitoring the effect of rent-reform, work-requirement, and time-limit policies.

HUD Definition for Rent Reform and Agency-Determined Definitions for Self-Sufficiency Resulted in Inconsistent Reporting and Prevented Data Aggregation

Rent Reform

HUD’s definition of rent reform is unclear, leading to agencies inconsistently categorizing some policies and not reporting required information for rent-reform policies. Federal internal control standards state that management should use quality information—relevant and reliable data—to achieve the entity’s objectives. HUD defines rent reform as “any change in the regulations on how rent is calculated for a household.” Under traditional public housing and voucher program rules, an assisted household generally must contribute the greater of 30 percent of its monthly adjusted income or the housing agency-established minimum rent—up to $50—toward its monthly rent. Statute and HUD regulations direct how public housing agencies are to certify tenant income and determine a participating household’s tenant rental payments. Non-MTW agencies must implement this determination process when a household first joins the program and then on a regular basis. In addition, the total housing costs, which are used to calculate a household’s tenant rental payment, include both the rent for the unit and utility costs. As such, an agency is responsible for establishing and maintaining a utility allowance schedule that provides reasonable allowances for tenant-paid utilities. MTW agencies can propose rent-reform policies that make changes to these program rules, such as changing how often tenants are recertified, eliminating certain exclusions or deductions, or changing the approach agencies use to determine a household’s tenant contribution. HUD has 15 categories of activities it considers to be rent reform under the MTW demonstration, but does not further define the activities under each category (see table 4). Based on our review of MTW agencies’ 2015 annual reports, we identified 194 activities that involved one or more rent-reform changes based generally on HUD’s categories of rent-reform activities. When we requested that agencies provide information on their rent-reform activities, several MTW agencies asked for clarification on how rent reform was defined and what activities fell into this category. Based on our analysis of the agencies’ 2015 annual reports, we found five agencies did not consider 15 of the 194 activities we identified to be rent reform using HUD’s definition. Based on our review of the agencies’ 2011–2016 annual plans, we found that some agencies did not report information they are required to report when proposing a rent-reform activity in their annual plans. Based on our review of the 2015 annual reports, we found that 83 of the 194 policies we identified as rent reform did not include any of the hardship data HUD requires agencies to report for rent-reform activities.
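For reference, the traditional tenant contribution rule described above reduces to a simple calculation, and it is the baseline that MTW rent-reform activities modify. The sketch below uses hypothetical income figures and a $50 minimum rent; it is not drawn from any agency's actual policy, and real calculations also involve income exclusions, deductions, and utility allowances.

```python
# Minimal sketch of the traditional tenant contribution rule: the household
# pays the greater of 30 percent of monthly adjusted income or the agency's
# minimum rent (which may be set up to $50). Figures are hypothetical.

def tenant_contribution(monthly_adjusted_income: float, minimum_rent: float = 50.0) -> float:
    if not 0 <= minimum_rent <= 50:
        raise ValueError("Minimum rent is agency-set, up to $50")
    return max(0.30 * monthly_adjusted_income, minimum_rent)

# A household with $900 in monthly adjusted income pays $270;
# a household with $100 pays the $50 minimum rent.
for income in (900, 100):
    print(income, tenant_contribution(income))
```

An MTW rent-reform activity could change any of these inputs, for example by raising the minimum rent, altering the percentage of income, or changing how adjusted income is determined.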
Officials from some MTW agencies said they did not agree with some of the categories HUD considers to be rent reform. For example, officials from three agencies told us that they did not consider changes to the recertification schedule to be rent reform because such changes do not change how rent is calculated, only the frequency of the calculation. Officials from one agency said that HUD’s definition did not match their agency’s definition because the agency restricts its view of rent reform to any change that affects the actual rent calculation. HUD’s definition includes any change that affects the process related to rent. Officials from another agency told us that they believe HUD does not uniformly apply its definition of rent reform when reviewing agencies’ policies. HUD officials also told us that they plan to clarify the rent-reform definition for expansion agencies. But, as noted previously, HUD told us that making changes for existing MTW agencies could be difficult because doing so could require changes to the standard agreement, which generally cannot occur without mutual agreement between the agencies and HUD. However, HUD’s definition for rent reform is set forth in Attachment B, which HUD already has revised without changes to the standard agreement and is currently revising to clarify existing reporting requirements. Without a clearer definition of rent reform and specific criteria or standards with which to classify activities as rent reform, HUD lacks the quality information needed to monitor all rent-reform activities.

Self-Sufficiency

Although one of the requirements of the MTW demonstration is to establish a reasonable rent policy to encourage employment and self-sufficiency, HUD has not defined self-sufficiency, but rather allowed each agency to develop its own definition. To measure the extent to which certain MTW activities, including rent-reform activities, encourage households to achieve self-sufficiency, HUD requires MTW agencies to report on the number of households that transitioned to self-sufficiency, among other things. According to Attachment B of the standard agreement, MTW agencies are allowed to define self-sufficiency for each activity that is tied to this HUD metric. MTW agencies’ definitions of self-sufficiency can diverge widely and sometimes are inconsistent within an MTW agency. Some examples include defining self-sufficiency as attaining a total gross household income at 80 percent of the area’s median income; paying a minimum rent of $225; voluntarily terminating housing assistance and other forms of government assistance; and attaining a household income of 50 percent of the area median income, even if the family may be receiving other state benefits. In addition, some agencies use multiple definitions of self-sufficiency. For example, one agency uses three definitions for self-sufficiency (one for its public housing minimum rent activity, one for its voucher rent-reform activity that combined various changes, and another for its public housing earned income disregard alternative activity). Previously, we found that clarity, reliability, and balance are three of several key attributes of successful performance measures, which are means of objectively assessing the outcomes of programs, products, projects, or services. A measure has clarity when it is clearly stated and the name and definition are consistent with the methodology used for calculating the measure.
A measure that is not clearly stated can confuse users and cause managers or other stakeholders to think performance was better or worse than it actually was. A measure is reliable when it produces the same result under similar conditions. Lack of reliability causes reported performance data to be inconsistent and adds uncertainty. Another key attribute of successful performance measures is balance, which exists when measures ensure that an agency’s various priorities are covered. Performance measurement efforts that overemphasize one or two priorities at the expense of others may skew the agency’s performance and keep managers from understanding the effectiveness of their program. According to HUD officials, they have not defined self-sufficiency for MTW agencies because they want to give agencies the ability to address local needs. However, the individualized definitions have led to measurements of self-sufficiency that cannot be consistently evaluated across activities or agencies. In addition, officials said that it would be inappropriate for them to develop a definition of self-sufficiency for the MTW demonstration because HUD has not defined it for the department. However, despite the lack of an agency-wide definition of self-sufficiency, HUD regulations define self-sufficiency for certain other HUD programs. As such, HUD also could develop a self-sufficiency definition for the MTW demonstration. Without a more standardized definition of self-sufficiency for the MTW demonstration, HUD cannot collect consistent information that would allow for the evaluation of the effect of MTW rent-reform and occupancy policies on tenants. HUD Guidance for Analyses and Reevaluations of Rent- Reform and Hardship Policies Was Not Detailed HUD’s guidance on how agencies are to perform impact analyses, reevaluate activities, and establish hardship policies has not described the elements of the analysis, required submission of reevaluations, or described elements of hardship policies. Attachment B of agencies’ standard agreement contains general instructions for reporting information in MTW annual plans and annual reports, including on rent- reform activities. For example, when an agency proposes a rent-reform activity, the agency must conduct an impact analysis, describe how it will annually reevaluate the activity, and develop a hardship policy for the activity. According to HUD officials, HUD implemented these reporting requirements for rent-reform activities because they could have significant effects on tenants. Impact Analysis Attachment B suggests agencies take four steps when developing an impact analysis and include the results, including describing the rent- reform activity and identifying the intended and possible unintended effects of the activity; however, it does not provide any explanation or suggestions for how agencies should approach each step. According to HUD officials, these steps are not required and the only other guidance provided to agencies to monitor the effect of rent-reform activities is draft guidance from 2009. The 2009 draft guidance reiterates the four suggested steps of an impact analysis and provides a narrative explanation of the purpose of each step along with examples; however, agencies are not required to follow the guidance and HUD never finalized it. 
We reviewed the impact analyses agencies reported in their annual plans from 2011 through 2016 and found that agencies’ impact analyses for their rent-reform policies varied widely in the type of information included and level of detail. For example, a majority of impact analyses included whether the activity would increase or decrease tenants’ rent burden and a majority included other benefits or costs to tenants, but analyses less often discussed possible unintended consequences of their rent-reform policies. In addition, some agencies did not include the same type of information across the analyses of their activities. One agency provided an example of how a hypothetical tenant’s rent could change when the agency moved to biennial recertifications, but did not analyze how tenants’ rent could change for its minimum rent or tiered rent policies. Another agency included the potential impact on the agency for each of its proposed activities, but only analyzed the potential rent burden on tenants for one activity. In addition, the level of detail included in the impact analyses varied. For example, in discussing a policy that would change what sources of income were included in a tenant’s rent calculation, one agency’s impact analysis stated that the change would save money for tenants. An impact analysis for a similar policy from another agency included the number of tenants who would be affected by the policy and a dollar estimate of how much money tenants could save. Activities that might be considered administrative, such as changes to the frequency of tenant recertifications, were less likely to include details such as analysis of the rent burden on tenants than were other activities. In several agencies’ impact analyses, as well as in interviews with agency officials, agencies generally indicated that they think of these MTW policies or activities as being good for tenants, which may explain why agencies were less likely to discuss burden on tenants. HUD officials acknowledged the need for more detailed guidance and said they planned to provide such guidance for the expansion agencies. HUD officials said that they have not created such guidance for the existing agencies because they have been focused on the recent expansion of the demonstration and because doing so could require changes to the standard agreement. However, the steps for an impact analysis are contained in Attachment B, to which, under the standard agreement, agencies must adhere to satisfy their annual reporting obligations. Further, HUD has already revised Attachment B and agencies’ reporting requirements contained therein on multiple occasions without requiring changes to the standard agreement. Officials stated they could encourage existing agencies to follow the guidance for the expansion agencies. Federal internal control standards state that management should externally communicate the necessary quality information to achieve the agency’s objectives. By framing the steps in Attachment B as suggestions and not prescribing the elements of impact analyses, HUD cannot consistently collect the type of information it needs to assess the effect of MTW activities on tenants across agencies. For example, according to HUD officials, one of the purposes of the impact analysis is to encourage agencies to consider potential unintended consequences of their activities. However, unintended consequences cannot be assessed without more detailed impact analyses. 
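To illustrate the more detailed kind of estimate described above (the number of tenants affected and the dollar effect), the sketch below applies a hypothetical minimum-rent change to a hypothetical tenant file. It is not based on any agency's actual impact analysis; the incomes and the $25 to $75 change are invented for illustration.

```python
# Illustrative sketch of a quantitative impact estimate: count how many
# households a hypothetical minimum-rent increase would affect and by how
# much. Incomes and rent parameters are invented for illustration only.

def contribution(monthly_adjusted_income: float, minimum_rent: float) -> float:
    return max(0.30 * monthly_adjusted_income, minimum_rent)

def estimate_impact(incomes, old_minimum: float, new_minimum: float):
    affected = 0
    total_monthly_increase = 0.0
    for income in incomes:
        change = contribution(income, new_minimum) - contribution(income, old_minimum)
        if change > 0:
            affected += 1
            total_monthly_increase += change
    return affected, total_monthly_increase

# Hypothetical monthly adjusted incomes for ten households
incomes = [0, 50, 120, 200, 260, 400, 900, 1200, 1500, 2000]
affected, increase = estimate_impact(incomes, old_minimum=25, new_minimum=75)
print(f"{affected} of {len(incomes)} households affected; "
      f"combined rent increase of ${increase:.0f} per month")
# 4 of 10 households affected; combined rent increase of $154 per month
```

An impact analysis built on this kind of calculation would also need to describe intended and possible unintended effects, such as increased nonpayment risk for the lowest-income households.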
Annual Reevaluations Attachment B does not describe the elements MTW agencies must include in their annual reevaluation, and HUD does not require MTW agencies to submit the results of those reevaluations. According to Attachment B, when agencies propose a rent-reform activity in their annual plan, they should provide an overview of how they will annually reevaluate the proposed activity and revise the activity as necessary to mitigate the negative effects of any unintended consequences. However, it does not provide any further detail or examples of what agencies should annually reevaluate. In addition, while HUD requires agencies to perform annual reevaluations of rent-reform activities, HUD guidance does not require MTW agencies to report the results of their annual reevaluations. According to federal internal control standards, management should externally communicate the necessary quality information to achieve the agency’s objectives. Based on our review of agencies’ annual plans submitted from 2011 through 2016, about one-third of the rent-reform policies proposed by agencies included a description of how agencies planned to annually reevaluate the policies. The remaining proposals either did not include a description or agencies stated that they would evaluate the activity annually without providing further description of how they would perform the evaluation. When we requested that agencies provide their 2015 annual reevaluations of their rent-reform policies, several of the MTW agencies were confused about what we meant by annual reevaluation. Some of those agencies asked if we were referring to their annual report and one agency asked how an annual reevaluation was different from an impact analysis. When we received documentation of what the agencies considered to be the annual reevaluations of their rent-reform activities, 30 of the agencies provided us information they are required to include for all of their activities in their annual reports. For example, agencies must include a description of their activities and their impact, compare policy outcomes to HUD metrics, and explain challenges they faced if benchmarks were not achieved. Most agencies referred us to all or part of this information. However, some agencies provided analyses that went beyond those required for annual reports, including evaluations from third-party researchers. For example, one agency partners with a local university to conduct an annual survey that allows the agency to assess the effect of its rent-reform activities on households. During the course of our work, a HUD official said the agency had not required MTW agencies to report annual reevaluations because, as long as agencies had a plan to annually reevaluate their activities and HUD had the ability to request the reevaluations if concerns arose, HUD did not want to require agencies to report information HUD did not intend to analyze. HUD officials later stated that the agency plans to provide more detailed guidance for the expansion agencies and has been updating Attachment B to clarify that agencies’ annual reports must include the results of their annual reevaluations of their rent-reform activities. In addition, HUD officials said they could issue guidance that encouraged existing agencies to follow the guidance for the expansion agencies but it would be difficult to require existing agencies to include specific elements in these annual reevaluations without changes to the standard agreement. 
However, the standard agreement merely requires that MTW agencies fulfill the annual reporting requirements set forth in Attachment B, which provides the detailed description of the required elements of the annual plan and report and which HUD has already revised on multiple occasions without requiring changes to the standard agreement. Because HUD allows agencies to determine the process for reevaluating their activities, most MTW agencies have not collected or reported additional information on rent-reform activities (including effects or unintended consequences) outside of the requirements of their annual reports. This leaves HUD and the agencies themselves less able to assess the effects of MTW activities on tenants. Hardship Policies While MTW agencies must establish a hardship policy to define the circumstances under which households may be exempted or receive temporary waivers from a new rent-reform activity, Attachment B does not define what elements must be included in the hardship policy. The nonbinding draft guidance from 2009 we previously discussed suggested four questions hardship policies should address (including the process households would use to request an exemption or waiver and how hardship cases would be resolved). Officials from the seven agencies we interviewed said they looked to a range of tools to create their hardship policies. For example, officials from one agency said they relied on the 2009 draft guidance and officials from another agency said they relied on Attachment B when developing their policies. Officials from three other agencies said they reviewed the hardship policies of other MTW agencies, had conversations with HUD while planning the activity or waiting for HUD’s review of their annual plan, or looked to relevant federal regulations. In contrast, officials from another agency said that there was no guidance available on how to create their hardship policies because their agency joined the demonstration the year it began. Our review of MTW agencies’ hardship policies for rent-reform activities showed that while these hardship policies had some commonalities, they also were inconsistent in terms of the type of information included. For example, of the 84 hardship policies we reviewed, MTW agencies included a discussion of how the agency processes a hardship complaint in 56 policies and what remedies are available for residents approved for a hardship exemption or waiver in 75 policies. In contrast, 26 policies included information about whether tenants have the ability to reapply for a hardship exemption or waiver, and 26 policies mentioned if the agencies have different rules for the elderly or persons with disabilities. In addition, although most hardship policies generally discussed how a tenant may claim a hardship and apply for an exemption, some agencies were much more specific about the process. For example, one agency stated only that tenants may request a hardship exemption in writing, while another agency explained which application a tenant needed to fill out, what supporting documentation to include, and how to submit the application. Some agencies have created more parameters around a tenant’s ability to request a hardship exemption or waiver than others. For example, some hardship policies are time-limited (that is, tenants have a certain window of time in which to apply). 
One agency instituted a hardship policy for its minimum rent that stated that tenants had 15 days from receipt of notice of their new household tenant rental payment to apply for a hardship exemption or waiver. Another agency instituting a hardship policy for a similar activity did not seem to impose a time limit for a tenant to request an exemption. In addition, some hardship policies provided relief for current tenants. For example, one-third of agencies created a hardship policy for at least one of their activities that either exempted current residents from the rent-reform activity or provided some form of temporary relief as the rent-reform policy was implemented. We also found variation in the information MTW agencies were able to provide on the households that requested a hardship exemption. We asked all the MTW agencies to provide us a list of all tenants who requested a hardship exemption in 2011–2015, including the result of each request (denied or approved), the current status of each tenant, and the reason the tenant was no longer receiving housing assistance, if applicable. Of all the MTW agencies, five said they had not received any requests for hardship exemptions. Three agencies were only able to provide us information on those hardship requests that were approved, two agencies did not indicate if the requests they received were approved or denied, and one agency did not provide any data because it could not distinguish hardship requests for its traditional programs from its MTW activities. Additionally, five agencies did not provide the reasons why tenants who requested a hardship exemption were no longer receiving assistance. The remaining 22 agencies were able to provide the information as requested. Tenants and advocates expressed mixed opinions about the rent-reform hardship policies created by the MTW agencies we interviewed. Some tenants with whom we spoke said they were aware of rent-reform hardship policies the agencies developed. For example, tenants who participated in one of our group meetings told us that during their income recertification the case worker assigned to their case provided them a checklist that outlined each of the agency’s hardship policies. When we spoke with advocates who work with tenants subject to MTW activities, some said most tenants do not know about the hardship policies available to them. Some tenants and advocates with whom we spoke said the process for requesting a hardship could be difficult. For example, one tenant said that although the MTW agency mailed tenants “frequently asked questions” that described the hardship policy, the document was confusing and included a citation to the Federal Register for more information, which was difficult for tenants to access. Advocates at one organization also said tenants asked for help because the tenants applied for a hardship waiver through their case manager, but never received a response. In contrast, during these meetings some other tenants told us that they had no issues with the hardship policies or the way in which the MTW agencies implemented them. As discussed previously, federal internal control standards require agencies to communicate effectively with external stakeholders to help achieve agency goals. While HUD’s proposed update to Attachment B provides more detail than the current version, HUD officials said it could be difficult to develop more descriptive guidance for existing MTW agencies because doing so could require changes to the standard agreement. 
In addition, officials said they had not been able to develop more guidance for existing agencies because of their focus on the expansion demonstration. However, the standard agreement merely requires that MTW agencies fulfill the requirements contained in Attachment B, which HUD has already revised on multiple occasions without requiring changes to the standard agreement. Officials said that they plan to provide more descriptive guidance for expansion agencies and encourage existing agencies to follow such guidance. By not providing more specific direction to the MTW agencies about what to include in their hardship policies and therefore what is communicated to tenants, existing agencies may not be adequately communicating all of the information tenants need to understand the circumstances in which they may be exempted from rent-reform activities. HUD Does Not Have Consistent Requirements for MTW Agencies for Rent-Reform, Work- Requirement, and Time- Limit Activities HUD requirements for MTW agencies that establish policies for work requirements and time limits are largely inconsistent with requirements pertaining to rent-reform activities (see table 5). Although HUD has said it considers work-requirement and time-limit activities to have a great and direct impact on tenants, the current MTW agencies in the demonstration are not subject to the same reporting requirements when proposing those policies as when proposing rent-reform activities. For example, as previously discussed, HUD guidance in Attachment B requires agencies to include an impact analysis, annual reevaluation, and hardship policy for rent-reform activities in their annual plans when the activity is proposed. However, Attachment B does not include similar requirements for proposed work-requirement or time-limit policies. Further inconsistencies include that Attachment C of the standard agreement, which lists the various MTW flexibilities available to agencies, requires MTW agencies to create a hardship policy if they establish a time-limit policy for public housing assistance. However, HUD did not develop guidance requiring agencies to report on their hardship policies for time-limit policies for public housing assistance. Furthermore, HUD does not have a similar requirement for time-limit policies established for voucher assistance. In addition, in the Federal Register operations notice for the expansion of the MTW demonstration published in January 2017, HUD proposed requiring the new MTW agencies to conduct an impact analysis and develop a hardship policy for rent-reform and time-limit policies, but develop only a hardship policy for work requirements. As previously discussed, federal internal control standards require management to design control activities—policies, procedures, techniques, and mechanisms—in response to the entity’s risks. In determining the necessary level of precision for a control activity, management is to evaluate, among other things, consistency of performance. A control activity that is performed routinely and consistently generally is more precise than one performed sporadically. HUD officials have said that they consider rent-reform, work-requirement, and time-limit policies to have a great and direct impact on tenants. HUD was not able to provide an explanation as to why they do not require similar reporting for all of these activities. 
HUD officials said they did not know why MTW agencies were not initially required to report on impact analyses, annual reevaluations, and hardship policies associated with work-requirement and time-limit policies in general. However, they said, currently, these policies are typically implemented in conjunction with a rent-reform activity so there is still reporting on the combined policies. HUD officials also stated that if an agency proposed an activity with a time limit for public housing, the MTW coordinator reviewing the agency’s annual plan would ensure that a hardship policy was in place. In addition, when MTW staff review a proposed work requirement for both the public housing and voucher programs and a proposed time limit for the voucher program, staff suggest that MTW agencies adopt hardship policies and conduct impact analyses for these policies. HUD officials also stated that the agency plans to require expansion agencies to develop an impact analysis, annual reevaluation, and hardship policy for rent-reform, work-requirement, and time-limit policies. Although HUD officials said it would be difficult to set a similar requirement for existing MTW agencies because doing so would require changes to the standard agreement, they stated they could update Attachment B to incorporate the requirement for a hardship policy for public housing time limits and develop guidance encouraging existing agencies to comply with the additional requirements put in place for the expansion agencies. Without taking these steps, HUD will miss an opportunity to collect information needed to evaluate the effect of work- requirement and time-limit policies on tenants. HUD Has Not Incorporated MTW Agency Reporting into Its Monitoring and Does Not Have an Analysis Plan Although HUD requires MTW agencies to report annually on their rent- reform, work-requirement, and time-limit policies, HUD could not provide us with documentation of how it analyzed, used, or planned to use the information it received from agencies on a continuous basis. According to HUD officials, because of the recently resolved backlog of annual reports, the MTW Office now can begin to use the years of reported data it previously had not used. Officials added they provide the annual plans and reports to other departments in HUD to conduct ad hoc analysis and that other HUD offices have used MTW plans and reports when proposing new rules or legislation related to housing. For example, officials said HUD used MTW plans and reports when working on HUD’s 2016 rule intended to provide greater flexibility for agencies administering HUD’s rental assistance programs. HUD provided us documentation showing that it used lessons learned from the MTW demonstration to inform legislative proposals in the agency’s fiscal year 2018 and 2019 budgets. Also, MTW officials said they intend to use the data in annual reports to inform some oversight rules. When asked about the agency’s plan to analyze the information provided in the annual plans and reports, HUD officials said it had awarded a contract to the Urban Institute to perform a retrospective evaluation of the demonstration, and the results will be available in 2018. Officials said although they have not finalized their reporting requirements for agencies in the expansion, these agencies likely will not be required to create annual plans or reports but instead to annually create a supplemental document to their annual public housing plan. 
With those agencies, HUD will be able to learn from each of the cohorts about the effect of a specific policy being evaluated. However, the plan to analyze the supplemental documentation and cohorts of the expansion agencies does not address how HUD plans to use the information it receives from the current MTW agencies. Federal internal control standards state that management should establish monitoring activities and evaluate results. Analysis (evaluation of results) contributes to the operating effectiveness of monitoring. The internal control standards also state that management should use quality information to achieve the entity’s objectives. In doing this, management is expected to use quality information to make informed decisions and evaluate the entity’s performance in achieving key objectives and addressing risks. Because the MTW Office has not systematically analyzed or evaluated the information it requires MTW agencies to report—or determined how best to evaluate it—the agency cannot assess the effect of MTW rent- reform, work-requirement, and time-limit policies on tenants. More specifically, without a plan for analyzing information in agencies’ impact analyses, annual reevaluations, and hardship policies, HUD cannot monitor the effect of rent-reform, work requirement, and time limit policies on tenants. These limitations also extend to the definitional and guidance issues we previously discussed. As a result, without a comprehensive framework—standard definitions, clear guidance on reporting requirements, and analysis plans—HUD cannot provide assurance that it is adequately monitoring how MTW activities affect tenants. Conclusions The MTW demonstration is on the brink of significant expansion, but HUD does not yet have the people, data, and processes in place to effectively oversee agency participants and assess the demonstration’s performance and effects on tenants. Workforce planning. Insufficient staffing for the MTW demonstration already has had negative effects. For instance, HUD has not always reviewed annual reports that include information needed to determine the demonstration’s effect on tenants in a timely manner, annually assessed whether current MTW agencies comply with demonstration requirements, and fully documented its review processes. When complete, expansion of the demonstration would more than triple the number of MTW agencies. By finalizing its workforce planning (including an assessment of competencies and skills needed) and documenting its compliance review process, HUD can provide assurance that it would be positioned to oversee an expanded demonstration before new agencies start being added in 2018. Data collection. Our comparison of public housing occupancy and voucher unit utilization rates and program expenses among MTW and non-MTW agencies raises questions about agency performance and use of funding that cannot be fully answered with current data. The differences among agencies may result in part from the MTW demonstration’s funding flexibilities. However, HUD is limited in its ability to readily determine the extent to which MTW funds were used for other allowable purposes. More comprehensively capturing and tracking data on uses of funding and the characteristics of households served by local nontraditional activities would allow HUD to better assess agency performance. 
HUD also would be better able to account for differences in outcomes—especially in relation to occupancy and voucher utilization rates and program expenses—that affect the number of tenants served. MTW reserves. The accumulation of relatively large reserves by MTW agencies also raises questions about funding uses. HUD has performed limited oversight of MTW voucher reserves and its data and financial reporting systems are not structured to effectively track public housing reserves. Developing and implementing a process to monitor MTW reserves could help HUD provide reasonable assurance that MTW agencies have sound plans for expending reserves. Framework for assessing effect of rent-reform, work-requirement, and time-limit policies on tenants. The effectiveness of certain MTW activities and their effects on tenants remain largely unknown because HUD does not have a framework—standard definitions for key terms, clear guidance on reporting requirements, and analysis plans—for monitoring rent-reform, work-requirement, and time-limit policies. For example, the variations in reporting on rent reform and self-sufficiency as a result of inconsistent definitions of these terms; limited guidance (often couched as suggestions) HUD provided to agencies for developing impact analyses, annual reevaluations, and tenant hardship policies; and inconsistent treatment of rent-reform and work-requirement and time-limit policies suggest that HUD may have emphasized flexibility to the detriment of oversight. In addition, HUD does not have a plan for assessing the information agencies report on the effect of these policies. Developing such a framework will help both HUD and MTW agencies to assess performance and determine if activities have advanced demonstration goals. We recognize the challenges involved with monitoring the MTW demonstration, but maintain it is important for HUD to take steps to achieve and sustain a better balance between flexibility and prudent oversight. Improving oversight of the demonstration would help HUD assess what MTW agencies have done, including their use of funding. Such information also would help inform Congress and the public about how demonstration innovations have affected tenants. Recommendations for Executive Action We are making the following 11 recommendations to HUD: The Assistant Secretary for PIH should complete workforce planning for the MTW demonstration to help ensure that PIH has sufficient staff with appropriate skills and competencies to manage an expanded demonstration, including reviewing reports and carrying out compliance reviews in a timely manner. (Recommendation 1) The Assistant Secretary for PIH should more fully document the process for annually assessing compliance with the five demonstration requirements. (Recommendation 2) The Assistant Secretary for PIH should develop and implement a process to track how MTW demonstration funds are being used for other allowable activities, including local, nontraditional activities. (Recommendation 3) The Assistant Secretary for PIH should identify and implement changes to PIC to capture household data for households served through local, nontraditional activities. (Recommendation 4) The Assistant Secretary for PIH should develop and implement a process to monitor MTW agencies’ reserves. 
(Recommendation 5) The Assistant Secretary for PIH should clarify HUD’s rent-reform definition for the MTW demonstration as part of a framework for monitoring the effect of rent-reform, work-requirement, and time-limit policies on tenants. (Recommendation 6) The Assistant Secretary for PIH should set parameters for HUD’s definition of self-sufficiency for the demonstration, either by providing one definition or a range of options from which agencies could choose, as part of a framework for monitoring the effect of rent-reform, work- requirement, and time-limit policies on tenants. (Recommendation 7) The Assistant Secretary for PIH should revise HUD’s guidance to MTW agencies to make it clear which elements are required in impact analyses, annual reevaluations, and hardship policies and the information required for each element as part of a framework for monitoring the effect of rent-reform, work-requirement, and time-limit policies on tenants. (Recommendation 8) The Assistant Secretary for PIH should develop written guidance for existing MTW agencies that requires a hardship policy for public housing time limits and encourages an impact analysis, annual reevaluation, and hardship policy for work-requirement and time-limit policies for public housing and voucher programs as part of a framework for monitoring the effect of these policies on tenants. (Recommendation 9) The Assistant Secretary for PIH should require an impact analysis, annual reevaluation, and hardship policy for work-requirement and time- limit policies new MTW agencies adopt for their public housing and voucher programs as part of a framework for monitoring the effect of these policies on tenants. (Recommendation 10) The Assistant Secretary for PIH should develop and implement a plan for analyzing the information that agencies report on the effect of rent- reform, work-requirement, and time-limit policies on tenants as part of a framework for monitoring the effect of these policies on tenants. (Recommendation 11) Agency Comments and Our Evaluation We provided a draft of this report to HUD for comment. In written comments, which are summarized below and reproduced in appendix III, HUD disagreed with three of our recommendations and generally agreed with the remaining eight. In its general comments, HUD made the following points: HUD noted that our report did not identify any harmful effects on tenants as a result of MTW flexibilities. As discussed in the draft report, due to data limitations, we could not evaluate the effect of MTW flexibilities on tenants. Instead, we focused on the extent to which HUD monitored the effects of rent-reform, work-requirement, and time-limit policies on tenants. Furthermore, our analysis of available data showed that MTW agencies had lower public housing occupancy rates and voucher unit utilization rates and higher program expenses than comparable non- MTW agencies, which could affect the number of tenants served. HUD also stated that it seemed we reviewed MTW agencies through the lens of the traditional housing and voucher programs. HUD noted fundamental differences in MTW and non-MTW agency operations and stated it must consider the extensive MTW flexibilities and the locally- designed nature of each MTW agency’s program in administering the demonstration. HUD stated it did not agree with three of our recommendations (discussed below) that it noted would restrict an MTW agency’s ability to exercise MTW flexibility and respond to variations in local markets. 
As stated in the draft report, we recognize the challenges involved with monitoring the MTW demonstration, but maintain it is important for HUD to take steps to achieve and sustain a better balance between flexibility and prudent oversight. Furthermore, given that the demonstration’s ultimate goal is to identify successful approaches that can be applied to public housing agencies nationwide, we believe we looked objectively and with the appropriate rigor and contextual sophistication at MTW agencies. HUD disagreed with the draft report’s third recommendation to develop and implement a process to track how public housing and voucher funding is being used for other allowable activities, including local, nontraditional activities. HUD stated that funding fungibility and policy flexibility are the core tenets of the MTW demonstration. As a result, identifying and tracking expenses paid from a specific funding source are not necessary and should not be a requirement. We acknowledge the demonstration’s funding and policy flexibility and did not intend for our recommendation to be interpreted solely as a suggestion to track funding sources. We therefore clarified our recommendation to focus on tracking how MTW demonstration funds are being used for allowable activities, such as local, nontraditional activities. HUD stated that the revised HUD Form 50900 or Attachment B (expected to be published in early 2018) would require existing MTW agencies to estimate the cost of each planned activity. Although this would provide some cost information, it would be limited to planned activities only and would not capture actual costs. Therefore, we continue to believe that more comprehensively tracking data on uses of funding would allow HUD to better account for differences in outcomes—especially in relation to occupancy and voucher utilization rates and program expenses—that affect the number of tenants served. HUD disagreed with the fifth recommendation to develop and implement a process to monitor MTW agencies' reserves. HUD stated that there is no language in the 1996 Act that limits the reserves of MTW agencies to a certain level. Although our draft report noted that leading practices for managing reserve funds include considering establishing a maximum reserve level, we did not recommend that HUD set such a reserve level for MTW agencies because we recognized the demonstration’s funding flexibilities. Rather, we recommended that HUD develop a process to monitor MTW agencies’ plans for reserves. HUD also commented that by reviewing and granting approval for all MTW activities that the existing 39 agencies implemented, it already had a process to determine if spending of reserve funds was reasonable. However, as HUD noted in its comments on the draft report’s third recommendation, the agency does not currently require MTW agencies to include the cost of a planned activity when proposing the activity. An approval process that does not include a review of information on planned costs, including the extent to which reserves would be used to fund the activity, is not sufficient because HUD lacks data needed to determine that reserve expenditures are reasonable. Finally, HUD noted that PIH’s Financial Management Division currently tracks the public housing and voucher reserves of MTW agencies. However, this does not address our concern that HUD does not monitor existing MTW agencies’ plans for their reserves and whether the plans are reasonable given the amount of reserves. 
In order to provide reasonable assurance that MTW agencies have sound plans for expending their reserves, HUD still would have to develop a process to monitor MTW agencies’ reserves. Therefore, we maintain our recommendation. Similarly, HUD disagreed with our seventh recommendation to set parameters for its definition of self-sufficiency for the demonstration, either by providing one definition or a range of options from which agencies could choose. It noted that the MTW demonstration provides agencies with the ability to develop creative solutions to address local conditions, and a one-size-fits-all approach is not appropriate. HUD stated it intentionally has not developed a standard definition for self-sufficiency, because the definition could depend on local conditions such as employment opportunities and availability of supportive services. We recognized the need for flexibility in our recommendation by suggesting that HUD could develop a range of definitions from which MTW agencies could choose. This approach would provide the necessary flexibility while still allowing HUD to collect the consistent information needed to evaluate the effect of MTW rent-reform and occupancy policies on tenants. Therefore, we maintain our recommendation. HUD generally agreed with our remaining eight recommendations. For example, HUD agreed with the draft report’s first recommendation on workforce planning, but requested that due to the cross-cutting nature of MTW, we expand the recommendation to include other PIH offices. We acknowledge that the staff needed to manage the expanded demonstration may be found outside the MTW Office, and therefore we modified our recommendation. HUD also agreed with the second recommendation to more fully document the process for annually assessing compliance with the five demonstration requirements and said it will finalize internal written procedures in early 2018. In addition, in commenting on the fourth recommendation, HUD described plans to update its data system to capture information on households served through local, nontraditional MTW activities. Furthermore, in regard to the eighth recommendation, HUD noted that it plans to develop guidance for MTW agencies for the monitoring of high-impact activities such as rent reform, work requirements, and time limits. Finally, in commenting on the eleventh recommendation, HUD stated it will improve its process of analyzing the data MTW agencies provide on high-impact activities.

In commenting on our workforce planning finding, HUD made the following points: HUD stated that our finding that planning for the MTW expansion workforce structure has not been completed is not an accurate characterization. It noted that HUD completed a workforce analysis and hired five additional staff in 2016 in anticipation of the MTW expansion. In our draft report, we acknowledged steps that HUD took to increase the staffing levels of the MTW Office. However, we found that in its workforce analysis, HUD had not assessed the knowledge, skills, and abilities needed to implement an oversight structure for the MTW expansion demonstration. HUD acknowledged in its response to the recommendation that its workforce planning efforts will continue in 2018.
HUD said our draft report did not discuss two other factors (beyond insufficient staff) that affected oversight of the MTW demonstration: (1) 2013 was the first year HUD assessed each agency’s compliance with the five demonstration requirements, and (2) from 2013 to 2015, HUD was in protracted and complex negotiations with the existing MTW agencies to determine the terms of the extension of their MTW participation. Our draft report acknowledged both factors. Specifically, we noted that HUD developed a process for assessing compliance with the five demonstration requirements in response to a recommendation in our 2012 report and that the process was implemented in 2013. Our draft report also stated that HUD officials noted that in 2014 and 2015 existing staff in the MTW Office had to focus on other priorities, including renegotiating the standard agreement, and then in 2016 on implementing the expansion of the demonstration. HUD said that even with limited staff, MTW agency plans had been reviewed and approved within the required time frames. In commenting on our data collection finding, HUD made the following points: Related to our multivariate statistical analysis to examine any association between MTW flexibilities and program outcomes, HUD stated that HUD and MTW agencies historically found it difficult to establish comparison groups because MTW and non-MTW agencies implement significantly different interventions. We agree that comparisons of MTW and non-MTW agencies are difficult to make. We acknowledge that MTW agencies differ substantially from non-MTW agencies on factors such as size and market housing costs. Accordingly, we used statistical techniques to improve on simple comparisons between MTW and non-MTW agencies. These techniques enabled us to identify a group of comparison non-MTW agencies that were similar to MTW agencies on important factors such as geographic location, households served, and county median rents. We then compared outcomes between the two groups of agencies over a number of years (2009 through 2015). We did not compare a single MTW agency to a non-MTW comparison group, as HUD stated. For more detailed information on our analysis, see appendix II. HUD also stated that our finding that MTW agencies had higher tenant services expenses for the voucher program than non-MTW agencies was an expected outcome (because the demonstration encourages MTW agencies to engage in employment, self-sufficiency programming, and tenant services). In our draft report, we stated that the results of the analysis were consistent with MTW agencies having more flexibility to use funds to provide tenant services. Furthermore, HUD said that a comparison of voucher administrative expenses for MTW and non-MTW agencies was skewed and not a valid comparison because administrative expenses for MTW agencies included voucher administrative expenses and other administrative expenses not permitted under the traditional voucher program. Differences in financial and performance outcomes that only MTW flexibilities allow, such as a broader range of administrative expenses, represent the potential effects of the demonstration, not a source of bias. The purpose of our analysis was to determine any association between MTW flexibilities and program outcomes. Because MTW rules allow for additional administrative expenses, it was appropriate to include these expenses in our analysis. 
In addition, HUD stated that it had requested the list of the comparison group of non-MTW agencies matched to MTW agencies and suggested the list be included in our report. The agency noted that without this information, HUD was not able to validate our analysis. As noted previously, our analysis was not a simple comparison of MTW and non-MTW agencies. We developed a comparison group, applied algorithms based on certain assumptions, and conducted sensitivity analyses that tested these assumptions. Therefore, simply providing the list would not enable HUD to reproduce our analysis. Furthermore, we selected the variables for matching so that they would be similar across all agencies in each group (that is, across the full distributions), not for any particular pair of matched agencies. Consequently, we evaluated the quality of our comparison group using the distributions of these variables across all agencies in each group. We included those statistics in our report, rather than the identity of particular agencies, to encourage systematic evaluations of the matched comparison agencies using aggregate statistics, rather than anecdotal evaluations of particular matched pairs. Finally, we communicated with HUD throughout the review about our data analysis. For example, we met with HUD to discuss our methodology, provided initial results, and worked with HUD officials to ensure we were using appropriate data fields. HUD also provided technical comments, which we incorporated as appropriate. We considered one comment to be more than technical in nature. Specifically, in response to our finding that HUD does not require MTW agencies to submit the results of their annual reevaluations of the impact of rent-reform activities, HUD officials stated that they consider the annual report (and information therein) to be the annual reevaluation of rent-reform activities. However, Attachment B does not include a requirement that agencies report the results of their annual reevaluations. Furthermore, if the information currently required to be included in the annual report satisfied the annual reevaluation requirement, then there would be no need for HUD to update Attachment B to clarify that agencies’ annual reports must include the results of their annual reevaluations, as the agency plans to do. Therefore, we maintain our finding and made revisions to the report to clarify what is currently required in Attachment B. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of the Department of Housing and Urban Development, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
Appendix I: Objectives, Scope, and Methodology Our objectives were to examine (1) the Department of Housing and Urban Development’s (HUD) oversight of agencies participating in the Moving to Work (MTW) demonstration, including agency reporting and compliance with demonstration requirements; (2) any association between MTW flexibilities and program outcomes, including public housing occupancy rates and voucher unit utilization rates; and (3) the extent to which HUD monitored effects of rent-reform, work-requirement, and time-limit policies on tenants. For all our objectives, we interviewed officials from the following seven MTW agencies: Boulder Housing Partners (Boulder, Colorado); Chicago Housing Authority (Chicago, Illinois); Delaware State Housing Authority (Dover, Delaware); Lincoln Housing Authority (Lincoln, Nebraska); Louisville Metropolitan Housing Authority (Louisville, Kentucky); Housing Authority of the County of San Bernardino (San Bernardino, California); and San Diego Housing Commission (San Diego, California). In selecting these agencies, we focused on agencies that had implemented major rent-reform changes and work-requirement and time-limit policies based on information in a study conducted in January 2015 by the Center for Urban and Regional Studies at the University of North Carolina at Chapel Hill. We focused on these policies because, according to HUD, they have a great and direct impact on tenants. We also considered agency size, length of time in the demonstration, and geographic diversity. Although the results of the interviews cannot be generalized to all MTW agencies, they provide insight into the ways in which agencies implemented MTW flexibilities and report to HUD, among other things. In addition, we interviewed representatives of the following research groups to discuss their recent or ongoing work on the MTW demonstration: Abt Associates, the Center for Urban and Regional Studies at the University of North Carolina at Chapel Hill, HAI Group, Public and Affordable Housing Research Corporation, and the Urban Institute. We also interviewed representatives of affordable housing advocacy groups such as the Council of Large Public Housing Agencies; National Association of Housing and Redevelopment Officials; National Leased Housing Association; and Public Housing Authorities Directors Association. Finally, we interviewed resident advocacy organizations such as the Center on Budget and Policy Priorities, National Housing Law Project, and National Low-Income Housing Coalition. To select the groups to interview, we reviewed our 2012 report on MTW, identified organizations through our background literature review, and obtained recommendations from those we interviewed. To examine HUD’s oversight of MTW agencies, we reviewed our 2012 report, relevant HUD policies and procedures, and HUD documentation relating to compliance with the demonstration. Specifically, we reviewed the standard agreement that governs the participation of the existing 39 MTW agencies in the demonstration and HUD’s guidance on agency reporting and the five demonstration requirements. We also interviewed HUD officials about the processes HUD uses to review the agencies’ annual reports and assess compliance with the demonstration requirements. We also reviewed workforce analyses and interviewed HUD officials about their resource needs and plans to monitor the current MTW agencies and any agencies that may join the MTW demonstration through its expansion. 
We compared relevant internal control standards that apply to federal agencies and best practices we identified for workforce planning with HUD’s monitoring policies and procedures. To assess the extent to which HUD follows its processes, we reviewed HUD’s documentation of compliance assessments from 2013 through 2016, the only years for which HUD had completed such analysis. To identify and examine any association between MTW flexibilities and program outcomes, we obtained the following 2009–2015 data on MTW and non-MTW agencies: agency and tenant characteristics from the Public and Indian Housing Information Center (PIC) system, public housing occupancy rates from the Picture of Subsidized Households database, voucher unit utilization rates from the Voucher Management System (VMS), and expense data from the Financial Data Schedule (FDS). These were the most reliable and recent data available at the time of our analysis. We combined the HUD data with data from the American Community Survey (1-year estimates) conducted by the Census Bureau. To assess the reliability of these data, we reviewed relevant documentation on the information systems, conducted electronic testing, and interviewed officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purpose of identifying a comparison group and comparing the outcomes of certain measures for MTW and comparable non-MTW agencies. We used these data and multivariate statistical methods to compare MTW and non-MTW agencies to estimate any association between MTW flexibilities and public housing occupancy rates, voucher unit utilization rates, and various public housing and voucher expenses. We used statistical matching and modeling methods to identify a comparison group of non-MTW agencies that closely resembled MTW agencies on characteristics including number of households served, geographic location, and housing market characteristics. For more detailed information on our analysis, see appendix II. To determine the factors that could partially explain the results of our analysis, we reviewed Attachment C of the standard agreement to identify the funding flexibilities the MTW demonstration affords participating agencies. We also reviewed MTW agencies’ 2011–2016 annual plans to identify the MTW activities that were proposed under those funding flexibilities and interviewed officials from the seven selected agencies to learn how they used the funding flexibilities. We started with the 2011 annual plans because that was the first year in which all MTW agencies were required to include specific information when proposing rent-reform policies. We ended with 2016 annual plans because it was the most recent year for which annual plans were available for all MTW agencies at the time of our analysis. To illustrate how MTW agencies used their funding flexibility for public housing, we used FDS data to determine the amount of MTW funds that were transferred from the Housing Choice Voucher (voucher) program to the public housing program. To perform this analysis, we compared the MTW agencies’ 2015 public housing funding—the sum of FDS line items 70600 (HUD public housing agency operating grants) and 70610 (capital grants)—to the aggregate amount MTW agencies transferred into individual public housing project accounts. We selected 2015 because it was the most recent FDS data available at the time of our analysis. 
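For illustration only, the following is a minimal sketch of how the FDS funding comparison described above could be assembled in Python with pandas. The file and column names (fds_2015.csv, agency_id, fds_line, amount, mtw_transfers_2015.csv, transfer_amount) are hypothetical placeholders, not HUD's actual field names.

```python
import pandas as pd

# Hypothetical extract of 2015 Financial Data Schedule (FDS) records,
# one row per agency and FDS line item.
fds = pd.read_csv("fds_2015.csv")  # columns: agency_id, fds_line, amount

# Public housing funding = FDS line 70600 (HUD public housing agency
# operating grants) + line 70610 (capital grants).
public_housing_funding = (
    fds[fds["fds_line"].isin([70600, 70610])]
    .groupby("agency_id")["amount"]
    .sum()
    .rename("public_housing_funding")
)

# Aggregate amount each MTW agency transferred into individual public
# housing project accounts (assumed to be in a separate, hypothetical file).
transfers = pd.read_csv("mtw_transfers_2015.csv")  # agency_id, transfer_amount
transfers_in = (
    transfers.groupby("agency_id")["transfer_amount"].sum().rename("transfers_in")
)

# Compare transfers to total public housing funding, agency by agency.
comparison = pd.concat([public_housing_funding, transfers_in], axis=1).fillna(0)
comparison["transfers_as_share_of_funding"] = (
    comparison["transfers_in"] / comparison["public_housing_funding"]
)
print(comparison.sort_values("transfers_as_share_of_funding", ascending=False).head())
```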
We also reviewed 2009–2016 data from HUD on the number of households MTW agencies served through their local, nontraditional activities. We determined that HUD’s process for compiling this information was sufficiently reliable for our purposes of reporting on local nontraditional activities by tracing 2015 data in the spreadsheet to data in the agencies’ 2015 annual reports (the most recent reports available) and interviewing HUD staff. Finally, we analyzed program data that HUD prepared using information derived from the Central Accounting and Program System and VMS on unspent voucher funds as of December 31, 2016, for MTW agencies and the comparison group of non-MTW agencies. To determine the extent to which HUD monitors the effect on tenants of rent-reform, work-requirement, and time-limit policies, we reviewed HUD documents such as Attachment B of the standard agreement and HUD’s Table of Applicable Standard Metrics by Activity to determine how HUD defines these types of activities and the guidance HUD provides on monitoring and reporting their effects on tenants. As previously discussed, we compared HUD’s monitoring policies and procedures with relevant internal control standards. We reviewed MTW agencies’ 2015 annual reports to determine the extent to which agencies adopted rent-reform, work-requirement, and time-limit policies. We selected 2015 because it was the most recent year for which annual reports were available for all MTW agencies at the time of our analysis. We also reviewed agencies’ 2011–2016 annual plans and collected information from all MTW agencies on tools they use to monitor the effects of rent-reform policies on tenants. We reviewed information from all 39 MTW agencies on their hardship policies and data and their annual reevaluations of the impact of rent-reform activities. We also collected information from all MTW agencies on how they monitor the effect of work-requirement and time-limit policies on tenants. We interviewed officials from the seven selected agencies about their monitoring of rent-reform, work-requirement, and time-limit policies’ effects on tenants and associated hardship policies and to obtain their views about HUD guidance. We also conducted group interviews with tenants from five agencies to get their perspective on the effects of rent-reform, work-requirement, and time-limit policies the agencies had implemented and associated hardship policies. To select the tenants to invite to these group interviews, we focused on the populations (for example, those able to work) subject to these policies. To the extent the MTW agency had a resident advisory board or comparable resident association, we worked with the boards or associations to contact tenants. When appropriate, we asked the MTW agencies to post notices on their websites and throughout their properties and send mailings to tenants of interest to notify them about the meetings. Finally, we interviewed representatives from tenant advocacy organizations. The organizations represented tenants served by four of the agencies we visited as well as tenants served by two additional MTW agencies that were not part of the group of seven selected agencies but that also had implemented major rent-reform changes, work-requirement, or time-limit policies. We obtained information on the effect of these policies on tenants and the extent to which tenants were aware of the hardship policies associated with these policies. 
To select these groups, we generally relied on recommendations from a representative of the National Housing Law Project. For those areas for which a recommendation was not provided, we identified the local legal aid association through an Internet search. We conducted this performance audit from February 2016 to January 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Statistical Analysis of Program Outcomes in MTW and Non-MTW Agencies We analyzed associations between the Moving to Work (MTW) demonstration’s flexibilities and two types of outcomes: housing availability, measured by public housing occupancy and voucher unit utilization rates, and program expenses, measured by public housing operating expenses and voucher administrative, subsidy, tenant services expenses, and voucher reserves per household. These outcomes are broadly consistent with the goals of the demonstration’s authorizing statute. MTW was designed to provide flexibility to participating public housing agencies to design and test innovative strategies, while meeting certain statutory objectives and demonstration requirements, including reducing costs and achieving greater cost-effectiveness and assisting substantially the same number of eligible low-income households as would have been served absent the demonstration. In this appendix, we summarize the statistical methods we used to analyze a dataset we assembled from administrative databases maintained by the Department of Housing and Urban Development (HUD) and the American Community Survey (ACS), conducted by the Census Bureau, to compare MTW and non-MTW agencies on these outcomes. Our analysis did not seek to conduct a definitive evaluation of the MTW demonstration’s causal impacts. MTW agencies carry out varied and unique activities. The agencies also vary widely in size, location, housing market, and area and tenant demographics—both compared to non-MTW agencies and among themselves. A persuasive impact evaluation would need to assess the unique circumstances of each activity and outcome at each agency. In contrast, our analysis sought to improve on simple comparisons of outcomes between MTW and non-MTW agencies, by constructing a comparison group of non-MTW agencies that were similar to MTW agencies on variables broadly relevant to housing programs. Although this multivariate analysis reduced the risk that factors other than MTW participation may have biased the comparison, we did not seek to hold constant all factors uniquely relevant to each MTW agency and activity. As a result, our analysis cannot provide definitive estimates of causal impacts. Target Population and Scope of Analysis Developing and applying statistical “treatments” to MTW agencies is complex, due to demonstration rules that allow agencies to conduct various activities tailored to their unique needs. We considered the option of forming several groups of MTW agencies, defined by similar activities. For example, we might have identified all agencies reforming HUD’s rent calculation formula, and included those agencies in one level of a multilevel treatment variable. 
We ultimately rejected this approach due to limited sample sizes and the difficulty of developing homogeneous groups of activities. A multilevel approach would have limited the number of agencies in each level of the treatment. Small sample sizes would have limited our statistical power to identify differences between treatment groups, if they existed. In addition, the wide variety of MTW activities would have made it difficult to produce a sufficient number of homogenous groups, and would have required subjective judgment about what activities were sufficiently similar. Instead, we used a binary treatment measure identifying agencies that participated in MTW or operated under traditional public housing rules in a given year. The timing of MTW implementation limited our ability to account for changes in participation and outcomes over time. Agencies joined the MTW demonstration at various times between 1996 and 2012, and many joined before sufficient data became available. Only nine agencies entered the demonstration after 2009, when HUD’s Public and Indian Housing Information Center (PIC) system began to provide sufficiently complete and reliable data on the characteristics of housing agencies we needed to measure. All agencies that exited the demonstration did so before 2009. Comparisons within agencies over time can implicitly control for other factors that may not substantially change before and after implementation by using data collected before and after agencies joined the MTW demonstration. We might have been able to implicitly control for many factors that did not substantially change over short periods, such as land prices, or that changed in identical ways for MTW and non-MTW agencies, such as national economic cycles. However, the implementation of the MTW demonstration and available data limited our analysis to repeated cross-sectional comparisons of MTW and non-MTW agencies from 2009 through 2015. Measuring participation in the MTW demonstration at any one time was somewhat imprecise. The MTW demonstration was not implemented at uniform times across agencies, due to variation in the ratification dates of MTW agreements between HUD and the agency and variation in when each MTW agency began to implement activities under the demonstration. For our primary analysis, we classified an agency as participating in the MTW demonstration if it had ratified an MTW agreement with HUD at least 1 year before the year measured. In sensitivity analyses, described below, we assessed how classifying MTW participants according to different time lags affected our results. Table 6 lists the number of MTW and non-MTW agencies in our dataset, based on how MTW participation was defined in the analysis for housing agencies in the PIC database from 2009 through 2015. Outcomes We compared MTW and non-MTW agencies on several outcomes that are broad measures of housing availability and expenses. The outcomes were available in HUD data systems and were reliable for our purposes. However, they do not exhaust the potential outcomes that may be relevant under the MTW authorizing statute or the design of specific agency activities. For example, potential outcomes could measure the number of households that achieve self-sufficiency (as defined by a MTW agency) or move to a low-poverty neighborhood. Our specific outcome measures were the following: Public housing occupancy rate. Occupied units as a percentage of units available. Voucher unit utilization rate. 
Monthly rate of unit months leased divided by unit months available for the public housing agency. Public housing operating expenses per household. Total yearly operating expenses, divided by number of public housing households. Public housing central office cost center expenses per household. Total yearly central office cost center operating expenses, divided by number of public housing households. Voucher administrative expenses per household. Total yearly administrative expenses, divided by the number of voucher households. Voucher subsidy expenses per household. Total yearly expenses for housing assistance payments, divided by the number of voucher households. Voucher tenant services expenses per household. Total yearly expenses for tenant services, divided by the number of voucher households. Reserves per household (2016 only). Unspent voucher housing assistance funds as of December 31, 2016, divided by the number of voucher households. Following the Rubin Causal Model, our primary parameter of interest was the average (or median) treatment effect on the treated, E[Yij(1) − Yij(0) | agency i participated in MTW], where Yij(T) denotes the outcome for agency i at time j in (potentially counterfactual) treatment condition T. That is, we estimated the expected difference in outcomes that would exist due to MTW participation, among those agencies that actually participated in the demonstration. Estimating the average treatment effect on the treated is conservative and appropriate, given the varied and unique nature of MTW activities. Generalizing the effect of MTW participation from the treated agencies to the rest of the public housing agency population makes the implausible assumption that the untreated agencies would have implemented the same activities, in the same ways, as the treated agencies. Due to the discretion inherent to the MTW demonstration, the experiences of the treated agencies may not generalize to the whole population, as would be required for estimating the average treatment effect. We specify a parameter of interest (that is, a value to be estimated) for methodological completeness and to specify the population of inference (the target population of agencies). However, we do not interpret our results as robust causal impact estimates, due to the inability to measure the unique circumstances relevant for each MTW agency, demonstration activity, and outcome. Matched Comparison Group Our analysis measured and held constant conditions that could have otherwise explained differences in outcomes between MTW and non-MTW agencies. For each MTW and non-MTW agency, we measured the following agency-level covariates (with sources in parentheses): number of households (PIC); percent of households with a member over the age of 65 (PIC); percent of households with a member under the age of 18 (PIC); percent of households with a disabled member (PIC); whether an agency issues vouchers (VMS); county median household income (ACS); county median rent (ACS); county rental vacancy rate (ACS); county population density, measured as county population/land area (2010 Census); HUD region (HUD website); latitude (Picture of Subsidized Households); and longitude (Picture of Subsidized Households). We assessed the reliability of the ACS estimates by calculating the ratio of each estimate’s 95 percent margin of error to the estimate. For example, this ratio would equal 0.2 for an estimated rental vacancy rate of 10 percentage points, with a margin of error equal to plus or minus 2 percentage points. 
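As a minimal illustration, the sketch below shows the 1-year-lag treatment classification described earlier and the margin-of-error reliability screen just defined. The file and column names (agency_year_panel.csv, mtw_agreement_year, acs_county_estimates.csv, moe_95) are hypothetical.

```python
import pandas as pd

panel = pd.read_csv("agency_year_panel.csv")   # agency_id, year, mtw_agreement_year, ...
acs = pd.read_csv("acs_county_estimates.csv")  # variable, county, estimate, moe_95

# Treatment flag: an agency counts as an MTW participant in a given year only if
# its MTW agreement was ratified at least 1 year before the year measured.
panel["mtw"] = panel["mtw_agreement_year"].notna() & (
    panel["year"] >= panel["mtw_agreement_year"] + 1
)

# Reliability screen: ratio of the 95 percent margin of error to the estimate.
acs["moe_ratio"] = acs["moe_95"] / acs["estimate"]
share_within_threshold = (acs["moe_ratio"] <= 2.0).mean()
print(f"Share of ACS estimates with MOE ratio <= 2.0: {share_within_threshold:.2%}")
```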
Across all variables we used from ACS, we found that this ratio did not exceed 2.0 for 99 percent of agency-county observations. This level of reliability was acceptable for our purposes. When PIC showed that agencies spanned multiple counties, we aggregated the data to the agency level by either summing count variables across counties or calculating averages of ACS descriptive statistics, such as county mean incomes. We calculated unweighted averages because the Census Bureau does not release ACS microdata with the exact geographic locations needed to re-estimate the statistics of interest within public housing agency boundaries. Weighting by the total area population or number of households served by each public housing agency would have had unknown effects on the bias of the published ACS estimates, due to their complex weighting methods. Our aggregation methods should minimally influence our measurements, due to limited variation across counties within agencies. To quantify this variation, we calculated the coefficient of variation (CV) across counties served by each agency in our analysis, and these CVs of the ACS statistics did not exceed 0.99 for 50 percent of the agencies and 1.73 for 95 percent of the agencies. Matching Methods We used statistical matching methods to construct the comparison group of non-MTW agencies. The general iterative matching process involves 1. identifying some distance measure that quantifies how “close” units are to each other on the covariates of interest; 2. implementing a matching method that uses this distance measure to identify comparison units; and 3. assessing the quality of the matched samples and iterating between the first two steps, until the treatment and comparison groups become sufficiently close on the distance measure. We developed our specific matching approach using recent reviews of the statistical literature. Two established matching methods rely on propensity scores and Mahalanobis distance (MD). In the context of this analysis, propensity scores estimate the probability that an agency is an MTW or non-MTW agency, such as when Pr(MTW | X) = logit^-1(Xβ), where X is a matrix of covariates and β is a vector of coefficients. Propensity scores are calculated using the estimated coefficients and X to obtain a predicted probability that an agency participates in the MTW demonstration. MD is a multivariate sample statistic measuring the distance between agency i and j, similar to the number of standard deviations away from the sample mean vector of the covariates: MD(Xi, Xj) = √((Xi − Xj)′ S^-1 (Xi − Xj)), where Xi is the ith row vector of X and S is the sample covariance matrix. Propensity scores and MD measures can have several limitations in practice. Matching on known propensity scores is used to balance the covariate distributions between the treatment and comparison groups, and matching using MD tends to improve balance across all measured covariates. However, both approaches are optimal under assumptions of normally distributed data, and may worsen covariate balance if this assumption does not hold. Genetic matching methods seek to solve the problem of achieving sample balance in practice, using computer algorithms to search over the space of possible distance measures. Genetic matching generalizes MD by weighting covariates according to how they achieve balance in any particular sample, rather than by constants equal to the inverse of their sample covariance matrix, as in MD: GMD(Xi, Xj, W) = √((Xi − Xj)′ (S^-1/2)′ W (S^-1/2) (Xi − Xj)), where W is the covariate weighting matrix. 
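To make the two distance measures concrete, the following is a minimal sketch in Python using a placeholder covariate matrix; the actual analysis was carried out with the R package "Matching," so the data and names here are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder covariate matrix X (n agencies x p covariates) and treatment
# indicator t (1 = MTW, 0 = non-MTW). Real inputs would come from the PIC,
# VMS, ACS, and Census variables listed above.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
t = rng.integers(0, 2, size=200)

# Propensity score: estimated Pr(MTW | X) from a logistic model.
pscores = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

# Mahalanobis distance (MD) between two agencies' covariate vectors.
S_inv = np.linalg.inv(np.cov(X, rowvar=False))

def mahalanobis(x_i, x_j):
    d = x_i - x_j
    return float(np.sqrt(d @ S_inv @ d))

# Genetic-matching-style generalization: reweight the standardized covariates
# by a weight matrix W; W equal to the identity recovers MD.
L = np.linalg.cholesky(S_inv)  # L @ L.T = S_inv, so L.T plays the role of S^(-1/2)

def weighted_distance(x_i, x_j, W):
    u = L.T @ (x_i - x_j)
    return float(np.sqrt(u @ W @ u))

print(mahalanobis(X[0], X[1]))
print(weighted_distance(X[0], X[1], np.eye(X.shape[1])))  # same value as MD
```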
If desired, genetic matching can incorporate propensity scores by including them as a covariate, with the algorithm assigning as much weight to them as necessary to optimize balance. The genetic matching algorithm, as implemented by the R software package “Matching,” has the following steps: 1. Initialize covariate weights, W, at starting values. 2. Calculate the distance matrix between MTW and non-MTW agencies. 3. Specify the number of non-MTW agencies to be matched comparison agencies for each MTW agency. 4. Assess the balance between the sample distributions of the treatment and control groups, using p-values from matched t-tests of equal means for each covariate or Kolmogorov–Smirnov tests of equal distributions. 5. Apply a loss function to the vector of p-values to quantify overall sample balance. 6. If the loss function is not minimized, regenerate W using a genetic algorithm. 7. Repeat steps 2–6 until the loss function is optimized and covariate balance is maximized. In sum, the genetic matching algorithm searches for the best k matches, incorporating covariates and distance metrics as desired and minimizing the distance in a candidate matched set by weighting and reweighting the covariates and metrics, according to how they influence balance. In our primary analysis, we ultimately used one-to-one matching (k = 1), with one comparison agency selected for each MTW agency. Large imbalances in the number of households served by the MTW and non-MTW agencies substantially reduced the pool of similar comparison agencies, such that setting k > 1 substantially worsened the balance for some variables. In addition to the automated matching criteria above, we compared the sample distributions of the covariates before and after matching using descriptive statistics and nonparametric density estimates. We required exact matches on the year of measurement to ensure that observations were compared at roughly the same times. We also required exact matches on whether an agency issued vouchers and HUD region. Due to data limitations, we compared 2016 reserve spending between MTW and non-MTW agencies for the 2015 matched set. Figure 12 compares MTW agencies and non-MTW agencies on the covariates we identified, before constructing a matched sample of comparable non-MTW agencies. As the figure shows, there are some covariates for which there are significant differences between the group of MTW agencies and non-MTW agencies. After implementing the matching method described above, we identified a primary group of comparison agencies that were similar to the MTW agencies on most of the covariates, but differed on a few, as shown in table 7. Examples of matched agencies in our primary analysis include: Oakland Housing Authority (MTW) and Housing Authority of the County of Sacramento (non-MTW); San Antonio Housing Authority (MTW) and Housing Authority of New Orleans (non-MTW); and Housing Authority of the City of Pittsburgh (MTW) and Allegheny County Housing Authority (non-MTW). Imbalances between MTW and comparison agencies for the main analyses remained after our primary matching analysis for county median income, county median rental cost, number of households, percent of households with a disabled member, and county rental vacancy rate, as shown in table 7. Figure 13 shows the covariate density estimates for MTW and non-MTW agencies, after matching. As the figure shows, there are fewer differences between the MTW agencies and the matched non-MTW agencies after matching. 
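As an illustration of the balance checks in step 4 above, the sketch below applies a paired t-test and a Kolmogorov-Smirnov test to one covariate in a matched sample. The covariate values are synthetic placeholders, and the loss shown is only one possible choice, not necessarily the loss used in the analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical matched samples of one covariate (e.g., county median rent),
# aligned so that position k holds an MTW agency and its matched non-MTW agency.
rng = np.random.default_rng(1)
mtw_vals = rng.normal(900, 150, size=39)
matched_vals = mtw_vals + rng.normal(0, 60, size=39)

# Paired t-test of equal means and two-sample Kolmogorov-Smirnov test of equal
# distributions; higher p-values indicate better balance on this covariate.
t_p = stats.ttest_rel(mtw_vals, matched_vals).pvalue
ks_p = stats.ks_2samp(mtw_vals, matched_vals).pvalue
print(f"paired t-test p = {t_p:.3f}, KS test p = {ks_p:.3f}")

# One possible scalar loss over the vector of balance p-values: the negative of
# the smallest p-value, so improving the worst-balanced covariate lowers the loss.
p_values = np.array([t_p, ks_p])
loss = -p_values.min()
```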
MTW agencies had higher county median incomes and rent, lower percentages of disabled household members, and lower rental vacancy rates, as compared to the primary matched non-MTW agencies. These imbalances decreased when we allowed for matches across HUD region and required matches within calipers (1 standard deviation), as shown in table 8. However, allowing HUD region to vary potentially allowed other unmeasured factors within a HUD region to vary between the MTW and non-MTW groups. Applying caliper constraints failed to match a comparison agency for 91 of the 232 yearly observations for MTW agencies during 2009–2015, which changes the population for inference. We used these matched samples with improved balance for sensitivity checks, in our discussion of the results below. Statistical Estimation and Inference After constructing the primary matched analysis sample, we estimated outcome descriptive statistics for MTW and non-MTW agencies. We estimated differences in mean and median outcomes using paired t-tests and nonparametric Wilcoxon signed-rank tests, respectively, that account for correlations over time within and between matched groups of MTW and non-MTW agencies. We estimated differences in medians between groups using nonparametric Wilcoxon signed-rank tests to address potential outliers. For example, the tenant services cost distributions for MTW agencies (median = $37; 25th quantile = $2.80; 75th quantile = $110) and non-MTW agencies (median = $0; 25th quantile = $0; 75th quantile = $20) were highly skewed. The nonparametric test was not influenced by these skewed distributions and outliers. To complement this matched comparison, we used Generalized Linear Models to model outcomes in 2009–2015 using the matched sample of MTW and non-MTW agencies. The models had the form g(µij) = α + β·MTWij + γ′Yearj + δ′Xij, where i = 1, …, n indexes agencies; j = 2009, …, 2015 indexes years; MTWij indicates whether agency i participated in the MTW demonstration; µij is the mean outcome, conditional on the covariates; g is the Gaussian link function; Yearj is a vector of indicators for each year from 2010 through 2015 (excluding 2009), which accounts for common period effects across agencies, with coefficient vector γ; Xij is a vector of linear continuous (e.g., number of households) and categorical (e.g., HUD region) control variables that may confound the association between agency type and the outcome of interest (discussed above for the matched sample), with coefficient vector δ; α is an intercept; and β is the parameter of interest, estimating the association between MTW participation and the outcome. Repeated observations from 2009 through 2015 for MTW agencies and their corresponding matched non-MTW agencies can introduce autocorrelation within these clusters of observations, and the differences across matched clusters can introduce heteroscedasticity (that is, the variance in one cluster of agencies may not be consistent with the variance in another cluster). A conventional linear model does not account for these interdependencies and inconsistent variances in the data, leading to potential bias in the variance estimation for the parameters of interest (such as variances for β and γ) and any subsequent statistical inference on the association (and p-values) between the outcome and covariates. 
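For illustration only, the sketch below shows how the paired outcome comparisons and a Gaussian model with year indicators might be set up on a tidy matched panel. The column names (outcome, mtw, year, pair_id, n_households, hud_region) are hypothetical, and the exchangeable within-cluster covariance structure is a simplification; it stands in for, but is not, the weighting and prewhitening adjustments described in the next paragraph.

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Matched panel: one row per agency-year; pair_id links each MTW agency to its
# matched non-MTW agency; mtw is 0/1. Column names are hypothetical.
df = pd.read_csv("matched_panel.csv")

# Paired comparisons of the outcome between matched MTW and non-MTW agencies.
wide = df.pivot_table(index=["pair_id", "year"], columns="mtw", values="outcome")
t_res = stats.ttest_rel(wide[1], wide[0])   # paired t-test (difference in means)
w_res = stats.wilcoxon(wide[1], wide[0])    # Wilcoxon signed-rank (medians, robust to outliers)
print(t_res.pvalue, w_res.pvalue)

# Gaussian model with year indicators and controls, estimated as a GEE so that
# standard errors allow for correlation of observations within matched clusters.
model = smf.gee(
    "outcome ~ mtw + C(year) + n_households + C(hud_region)",
    groups="pair_id",
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.params["mtw"])  # estimated MTW association (the beta of interest)
```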
To account for the potential bias arising from heteroscedasticity and autocorrelation, the variance-covariance matrix used to generate the variances for the parameters incorporated weights that (1) decreased the influence of extreme observations, clusters, or both; (2) used an autoregressive approximation in which the correlation is strongest for observations closest in time and decays as the time between them lengthens; and (3) preprocessed (“prewhitened”) the variance-covariance matrix using an autoregressive function to reduce the temporal dependence in the data. These adjustments led to statistical inference on associations of interest that accounts for the interdependencies within agency clusters and the differences across clusters. In the sensitivity analyses described below, we also fit this model to the unmatched population of agencies. Primary Results In the matched sample, MTW agencies had lower median public housing occupancy rates and voucher unit utilization rates compared to non-MTW agencies, as shown in table 9. Compared to non-MTW agencies, MTW agencies had higher median public housing expenses per household (operating and central office cost center operating expenses) and higher median voucher administrative expenses per household, subsidy expenses per household, tenant services expenses per household, and reserves per household. These differences were significant at the 0.05 level for all variables using the nonparametric Wilcoxon signed-rank test. However, using the parametric t-tests and related t-tests from the regression models, there was not a significant difference in central office cost center operating expenses. This could arise from the presence of outliers skewing the distribution, leading to different results compared to the Wilcoxon test that does not make any distributional assumptions. Regardless of the particular method used, small sample sizes in each group, as well as repeated observations over time, may limit our statistical power to identify differences, if they existed. Reductions in sample size resulting from missing data also affect the degree to which comparable non-MTW agencies can be found, given the limited overlap in the covariate distributions between groups. Sensitivity Analyses We assessed the results above for sensitivity to various methodological assumptions. For the matching analysis, we assessed the impact of 1. measuring MTW status as of the agreement year, rather than as of 1 year following the agreement (the 1-year lag used in the primary analysis); 2. matching within 1 standard deviation calipers for each covariate; 3. allowing matches between HUD regions; 4. including county unemployment and poverty rates as covariates; 5. including estimated propensity scores, as a logistic function of the control variables described for the primary analysis, as a matching covariate; 6. increasing the number of comparison agencies for each MTW agency to k = {2, 3, 4} using the control variables described for the primary analysis; and 7. excluding clusters where the MTW and/or non-MTW agencies had an outlying value for an outcome of interest. For the regression model, we compared the results obtained from fitting the model to the matched and unmatched data. The sensitivity tests above showed no substantively meaningful differences in the results as compared to the primary analysis, with several exceptions. Adding the caliper constraint and dropping the HUD region constraint improved covariate balance. 
Dropping the HUD region constraint led to MTW agencies having a smaller difference in voucher subsidy expenses, compared to non-MTW agencies. In our primary analysis, MTW agencies had higher subsidy expenses. However, allowing matches between HUD regions may introduce unmeasured geographic characteristics into the comparison group of non-MTW agencies, which may limit the comparability of subsidy expenses and bias the estimated difference in outcomes. Appendix III: Comments from the Department of Housing and Urban Development Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Paige Smith (Assistant Director), Josephine Perez (Analyst in Charge), Enyinnaya David Aja, Bethany Benitez, Farrah Graham, Anar N. Jessani, Morgan Jones, Courtney LaFountain, Won Lee, Marc Molino, Anna Maria Ortiz, Barbara Roesmann, Shannon Smith, and Jeff Tessin made key contributions to this report.
Why GAO Did This Study The MTW demonstration gives 39 participating public housing agencies the flexibility to use funding for HUD-approved purposes other than housing assistance, such as developing affordable housing; change HUD's tenant rent calculation; and impose work requirements and time limits on tenants. In 2015, Congress authorized the expansion of MTW by adding 100 new agencies. GAO was asked to evaluate the MTW demonstration. GAO examined HUD oversight of MTW agencies, including its monitoring of demonstration effects on tenants. For this report, GAO reviewed HUD and MTW agency policies and documentation; interviewed officials at HUD and seven MTW agencies (selected based on type of policy changes, size, and geographic diversity); and interviewed tenants served by selected agencies. GAO also conducted a statistical analysis comparing data for MTW and non-MTW agencies on public housing occupancy rates, voucher utilization rates, and program expenses. What GAO Found The Department of Housing and Urban Development‘s (HUD) oversight of the Moving to Work (MTW) demonstration has been limited. Improving oversight—particularly for information collection and analysis—would help HUD assess what MTW agencies have done, including funding use. HUD took steps to improve oversight and reporting, but GAO found limitations in the following areas: Workforce planning. While HUD has taken steps to address staffing to oversee the current 39 MTW agencies, HUD has not finalized its workforce planning for 100 agencies to be added to the demonstration. According to a 2015 HUD analysis, a large number of additional staff would be needed for the expansion. HUD officials said field office staff might assume greater oversight responsibilities to fill this gap, but a joint (headquarters-field) oversight structure is not final and HUD's workforce analysis has not been updated to reflect this proposed oversight structure. Data collection. Due to limited data, HUD cannot fully determine the extent to which demonstration flexibilities affected the performance of MTW agencies, especially in relation to outcomes that affect the number of tenants served—occupancy and voucher utilization rates and program expenses. GAO found that MTW agencies had lower yearly median rates for public housing occupancy and Housing Choice Voucher (voucher) unit utilization and higher yearly median program expenses than comparable non-MTW agencies. The differences may be partly the result of demonstration funding flexibilities, such as the ability to use public housing and voucher funding for purposes such as gap financing for affordable housing (a nontraditional activity). But limitations in HUD data (such as not differentiating expenses for nontraditional activities) make it difficult to fully explain differences in outcomes GAO analyzed. Oversight of reserves. HUD has not implemented a process to monitor MTW reserves or agencies' plans for such reserves, which led to agencies accruing relatively large amounts of unused funds that could be used for vouchers. According to HUD data as of June 30, 2017, the 39 MTW agencies had more voucher reserves than the 2,166 non-MTW agencies that administer the voucher program combined ($808 million compared to $737 million). Without a monitoring process, HUD cannot provide reasonable assurance that MTW agencies have sound plans for expending reserves. Monitoring the effect of rent reform, work requirements, and time limits on tenants. 
HUD is limited in its ability to evaluate the effect of MTW policies on tenants. HUD does not have a framework—including clear guidance on reporting requirements and analysis plans—for monitoring the effect of rent-reform, work-requirement, and time-limit policies. HUD guidance instructs agencies to analyze the impact of their rent reform activities, describe how they will reevaluate them, and develop a tenant hardship policy for such policies (but not for time limits or work requirements). But the guidance does not describe what must be included in the analyses or policies, leading to wide variation in how agencies develop them. Also, HUD does not assess the results of agencies' analyses. What GAO Recommends GAO makes 11 recommendations to HUD, which include completing workforce planning, developing processes to track use of funds and monitor agencies' reserves, and developing a framework—including clear guidance on reporting requirements and analysis plans—to monitor effects on tenants. HUD generally agreed with eight of the recommendations and disagreed with three, citing the need for flexibility. GAO maintains the recommendations, as discussed further in the report.
Background Effective communication is vital to first responders’ ability to respond to emergencies and to ensure their safety. For example, first responders use public-safety communications systems to gather information, coordinate a response, and request additional resources and assistance from neighboring jurisdictions and the federal government. OEC has taken a number of steps aimed at supporting and promoting the ability of public-safety officials to communicate in emergencies and work toward operable and interoperable emergency communications nationwide. OEC develops policy and guidance supporting emergency communications across all levels of government and across various types of emerging technologies such as broadband, Wi-Fi, and NextGen 911, among others. OEC also provides technical assistance—including training, tools, and online and on-site assistance—for federal, state, local, and tribal first responders. First responders use different communications systems, such as land mobile radio (LMR), commercial wireless services, and FirstNet’s network. LMR: These systems are the primary means for first responders to use voice communications to gather and share information while conducting their daily operations and coordinating their emergency response efforts. LMR systems are intended to provide secure, reliable voice communications in a variety of environments, scenarios, and emergencies. Across the nation, there are thousands of separate LMR systems. Commercial wireless services: Public-safety entities often pay for commercial wireless services to send data transmissions such as location information, images, and video. Some jurisdictions also use commercial wireless services for voice communications. Nationwide dedicated-broadband network: Consistent with the law, FirstNet is working to establish a nationwide dedicated network for public-safety use that is intended to foster greater interoperability, support important data transmissions, and meet public-safety officials’ reliability needs. In creating FirstNet in 2012, Congress provided it with $7 billion in federal funds for the network’s initial build-out and valuable spectrum for the network to operate on. Unlike current LMR systems, the devices operating on FirstNet’s network will use the same radio frequency band nationwide. It is expected that these devices will be interoperable among first responders using the network because the devices will be built using the same open, non-proprietary, commercially available standards. Communications systems must work together, or be interoperable, even though the systems or equipment vendors may differ. The interoperability of emergency communications enables first responders and public-safety officials to use their radios and other equipment to communicate with each other across agencies and jurisdictions when needed and as authorized, as shown in figure 1. OEC is tasked with developing and implementing a comprehensive national approach to advance interoperable communications capabilities. For example, according to OEC, it supports and promotes communications used by emergency responders and government officials and leads the nation’s operable and interoperable public-safety and national security/emergency preparedness communications efforts. OEC notes that it plays a key role in ensuring federal, state, local, tribal, and territorial agencies have the necessary plans, resources, and training needed to support operable and interoperable emergency communications. 
To help in this effort, OEC instituted a coordination program that established regional coordinators across the nation. According to OEC, its coordinators work to build trusted relationships, enhance collaboration, and stimulate the sharing of best practices and information between all levels of government, critical infrastructure owners and operators, and key non-government organizations. OEC developed the National Emergency Communications Plan in 2008 and worked with federal, state, local, and tribal jurisdictions to update it in 2014 to reflect an evolving communications environment. The long-term vision of the plan—which OEC views as the nation’s current strategic plan for emergency communications—is to enable the nation’s emergency-response community to communicate and share information across all levels of government, jurisdictions, disciplines, and organizations for all threats and hazards, as needed and when authorized. To help it accomplish this mission, OEC works with three emergency communications advisory groups: SAFECOM, the Emergency Communications Preparedness Center (ECPC), and the National Council of Statewide Interoperability Coordinators (NCSWIC). These organizations promote the interoperability of emergency communications systems by focusing on technologies including, but not limited to, LMR and satellite technology. SAFECOM: According to the 2018 SAFECOM Strategic Plan, SAFECOM develops products and completes a range of activities each year in support of its vision and mission, including providing a national view of public-safety priorities and challenges, developing resources and tools aligned to the 2014 National Emergency Communications Plan, and collaborating with partner organizations to promote the interoperability of emergency communications. One of the products developed by SAFECOM each year is the Guidance on Emergency Communications Grants. SAFECOM consists of more than 50 members that represent local, tribal, and state governments; federal agencies; state emergency responders; and intergovernmental and national public-safety organizations. ECPC: The ECPC is an interagency collaborative group that provides a venue for coordinating federal emergency-communications efforts. The ECPC works to improve coordination and information sharing among federal emergency-communications programs. The ECPC does this by serving as the focal point for emergency communications issues across the federal agencies; supporting the coordination of federal programs, such as grant programs; and serving as a clearinghouse for emergency communications information, among other responsibilities. The ECPC has 14 member agencies that are responsible for setting its priorities. NCSWIC: This council consists of SWICs and their alternates from 50 states, 5 territories, and the District of Columbia. According to SAFECOM, NCSWIC develops products and services to assist the SWICs with leveraging their relationships, professional knowledge, and experience with public-safety partners involved in interoperable communications at all levels of government. Additionally, in 2013, FirstNet established the PSAC to provide advice to FirstNet. The committee is composed of members who represent local, tribal, and state public-safety organizations; federal agencies; and national public-safety organizations. FEMA is responsible for coordinating government-wide disaster response efforts, including on-the-ground emergency communications support and some technical assistance. 
For example, FEMA’s regional emergency-communications coordinator is responsible for providing emergency communications assistance on an as-needed basis and coordinating FEMA’s tactical communications support during a disaster or emergency. FEMA also provides a range of grant assistance to state, local, tribal, and territorial entities, including preparedness grants that can be used for emergency communications. As noted above, in November 2018, legislation was signed into law that reorganized and renamed NPPD and OEC. Previously, OEC was one of five divisions under the Office of Cybersecurity and Communications, which in turn was one of five divisions within NPPD. However, NPPD has been renamed the Cybersecurity and Infrastructure Security Agency, and OEC was renamed the Emergency Communications Division and was elevated to one of three direct reporting divisions within the new agency. See figure 2 for an illustration of changes made to OEC’s organizational placement. OEC’s and FEMA’s Joint Efforts for Emergency Communications Grants Generally Follow Key Features for Effective Interagency Collaboration OEC and FEMA have responsibilities for developing and implementing grant guidance for grantees using federal funds for interoperable emergency communications. Specifically, OEC and FEMA officials told us FEMA is responsible for administering the grants, and OEC coordinates emergency communications grant guidance annually through SAFECOM’s Guidance on Emergency Communications Grants. We reviewed OEC’s and FEMA’s collaborative efforts related to grant guidance and found that their efforts generally follow our previously identified leading practices for effective interagency collaboration, as described below. Written Guidance and Agreements. Agencies that formally document their agreements can strengthen their commitment to working collaboratively. OEC and FEMA formalized their coordination efforts for interoperable emergency communications grants in a memorandum of agreement in 2014. This memorandum assigned OEC and FEMA responsibilities and established a joint working group to develop standard operating procedures that govern coordination between the agencies; according to OEC, the procedures were drafted the following year but not formally approved by FEMA. We also reported that written agreements are most effective when the collaborators regularly monitor and update them. When we started our review, OEC and FEMA officials told us that they had not updated the memorandum of agreement, which included the draft standard operating procedures as an appendix. However, the agencies approved an updated memorandum of agreement and standard operating procedures, and OEC provided them to us in July 2018. Leadership. When buy-in is required from multiple agencies, involving leadership from each can convey the agencies’ support for the collaborative effort. According to OEC and FEMA officials, their grants coordination efforts include high-level leadership. Specifically, senior leaders from both agencies signed the 2014 and 2018 memorandums of agreement. Also, OEC officials told us that their leaders in the grants program office are responsible for overseeing the collaborative effort. Bridging Organizational Culture. Collaborating agencies should establish ways to operate across agency boundaries and address their different organizational cultures. OEC and FEMA operate across agency boundaries in several ways. 
First, both agencies told us that they participate in the ECPC Grants Focus Group, whose members coordinate across federal grant programs to support interoperable emergency communications. The group reviews SAFECOM guidance and, according to FEMA officials, meets on a quarterly basis. Second, OEC officials said the agencies foster open lines of direct communication via conference calls, e-mail correspondence, and in-person meetings. OEC and FEMA officials told us their communications include sharing and reviewing language in FEMA’s notices that announce grant opportunities and OEC’s SAFECOM guidance. Third, the agencies said that OEC officials conduct emergency-communications-related trainings and briefings for FEMA at least once a year. According to OEC officials, these trainings have included a discussion on the movement toward broadband and FirstNet. Finally, FEMA officials told us that their program analysts have attended conferences with OEC to speak to the SWICs about grant programs. They said the program analysts explained how the grant money can be leveraged to support projects within the individual states and answered questions about the grants. OEC officials said having FEMA attend conferences to discuss specific grant information is useful for public-safety stakeholders. Clarity of Roles and Responsibilities. Collaborating agencies can get clarity when they define and agree upon their respective roles and responsibilities. As part of the 2014 and 2018 memorandums of agreement, OEC and FEMA established clear responsibilities for how each agency will support the grants coordination effort. For example, both offices were responsible for assigning experienced program staff and contributing to the development of standard operating procedures by attending meetings and conducting research. Also, the standard operating procedures clarify how OEC and FEMA will share information, solicit input on grants guidance language, and review grant applications. Participants. Including relevant participants helps ensure individuals with the necessary knowledge, skills, and abilities will contribute to the collaborative effort. OEC and FEMA identify points of contact in their memorandums of agreement. According to OEC officials, they did not always work with the correct FEMA staff before the 2014 memorandum was developed. Also, FEMA officials told us that their grants program staff who participate in the coordination effort with OEC perform those specific responsibilities as a collateral duty on an as needed basis. According to OEC officials, OEC’s performance plans outline coordination with FEMA and areas related to the agencies’ memorandum of agreement for the staff who handle grant issues. OEC and FEMA officials said participants’ responsibilities include serving as technical subject matter experts and reviewing language for grants guidance and notices of funding opportunities. Resources. Collaborating agencies should identify the human, financial, and technological resources they need to initiate or sustain their efforts. OEC and FEMA staff their collaborative effort with employees from their grants offices to address their human resource needs. These employees perform work related to emergency communications grants as outlined in their performance plans or as a collateral duty. The agencies also provide OEC access to FEMA’s non-disaster grants system to share grantee information. 
According to OEC and FEMA officials, their collaboration efforts do not require either agency to obligate funds or use special technology, such as online information-sharing tools. Outcomes and Accountability. Collaborating agencies that create a means to monitor and evaluate their efforts can better identify areas for improvement. According to OEC and FEMA documentation, the primary goal of the draft standard operating procedures was to prevent grantees from improperly using federal funds, such as purchasing equipment that is not interoperable. OEC officials said the biggest gap in those standard operating procedures was that they did not include a monitoring program to ensure grantees were compliant with grant guidance, which includes requirements for interoperability. OEC’s and FEMA’s July 2018 standard operating procedures established a process to track and monitor grantee compliance. They also identified a process for assessing the information they collect and how it will be shared among OEC and FEMA, and when appropriate, other stakeholders. At the time of our review, OEC and FEMA officials told us they had not implemented the monitoring procedures because the grants for the 2018 grant cycle were not yet awarded. Accordingly, we could not evaluate the effectiveness of the new procedures to monitor and assess grantee compliance, and without conducting such an evaluation, we could not determine whether OEC’s and FEMA’s efforts align with the key practice in this area. Senior officials from both agencies said the monitoring procedures would be updated if they do not work as intended. OEC Incorporates FirstNet’s Network and Emerging Technologies into Its Plans and Offerings After being established in 2007, OEC initially focused on enhancing the interoperability and continuity of LMR systems. However, according to OEC officials, its programs, products, and services have adapted and evolved to incorporate new modes of communications and technologies. Additionally, OEC’s technical assistance offerings for emergency communications technology have evolved over time as new technologies have come into use. For example, OEC’s technical assistance catalog contains new or enhanced offerings on topics related to broadband issues such as FirstNet’s network, Next Generation 911, alerts and warnings, and incident management. In 2014, DHS released its second National Emergency Communications Plan, which identified the need to focus on broadband technologies, including FirstNet’s nationwide public-safety broadband network. One of the plan’s top priorities is “ensuring emergency responders and government officials plan and prepare for the adoption, integration, and use of broadband technologies, including the planning and deployment of the nationwide public-safety broadband network.” To meet this priority, OEC officials told us that they provide stakeholders with a wide range of products and services to help prepare for the adoption, integration, and use of broadband. For instance, officials said that they leverage OEC’s governance groups—SAFECOM, NCSWIC, and ECPC—to develop products and services and to identify specific challenges and requirements regarding broadband. Additionally, OEC officials told us that they coordinate regularly with FirstNet staff and invite FirstNet to meet and brief the stakeholder community on the latest deployment information.
However, OEC officials told us that FirstNet’s network is one option available to public-safety and government officials for broadband communications and information sharing and explained that OEC maintains a neutral position regarding all technologies and vendors. Accordingly, OEC is not responsible for promoting any vendor solutions, including FirstNet’s network, and there is no requirement for OEC to do so. Additionally, five of six OEC coordinators we interviewed told us that FirstNet’s network is only one of several emergency-communications technology options and that OEC should continue to provide information to public-safety stakeholders regarding other providers. For example, there are commercial carriers that provide wireless broadband services, and we have previously reported that these commercial carriers could choose to compete with FirstNet. According to OEC officials, prior to the start of each fiscal year, OEC engages with stakeholders to gather feedback on new or revised technical assistance offerings, as well as updates to existing plans and documents. OEC officials told us that they expect an increase in technical assistance requests that focus on issues related to mobile data use, broadband governance, standard operating procedures, and policies and procedures. According to OEC officials, OEC has delivered more than 2,000 technical-assistance-training courses and workshops since 2007, and OEC will continually update its technical assistance offerings to incorporate new modes of communications and technologies into training, exercises, and standard operating procedures for its stakeholders. The majority (7 of 10) of public-safety organizations that we interviewed told us that OEC sufficiently incorporates information regarding FirstNet’s network into its guidance and offerings. For example, officials from 6 of 10 organizations that we interviewed told us that OEC must strike a balance between FirstNet’s network and other emerging technologies, and that OEC has successfully accomplished this task. Additionally, the majority of SWICs responding to our survey said that it is at least moderately important for OEC to incorporate the FirstNet network and emerging technologies into its written guidance, technical assistance offerings, training opportunities, workshops, and grant guidance. Furthermore, in most cases, SWICs responded that OEC has incorporated FirstNet’s network and emerging technologies into these areas, as follows: FirstNet network. In our survey, the majority of SWICs responded that OEC has incorporated, to a large or moderate extent, FirstNet’s network into its written guidance (65 percent) and technical assistance offerings (59 percent), and half of SWICs said the same for OEC’s workshops. However, fewer SWICs reported that OEC incorporated FirstNet’s network, to a large or moderate extent, into its training opportunities (39 percent) and grant guidance (33 percent). Emerging technologies. The majority of SWICs reported that OEC has incorporated, to a large or moderate extent, emerging technologies into its written guidance (87 percent); technical assistance offerings (81 percent); training opportunities (74 percent); workshops (78 percent); and grant guidance (56 percent). See figure 3 for complete survey data regarding SWICs’ views on the extent that OEC has incorporated FirstNet’s network and emerging technologies into its offerings.
In surveying SWICs on the usefulness of OEC’s efforts to incorporate FirstNet’s network and emerging technologies into its offerings, we found the following: FirstNet network. The majority of SWICs reported that OEC’s efforts to incorporate FirstNet’s network into its written guidance (67 percent), technical assistance offerings (59 percent), and workshops (59 percent) have been very or moderately useful. However, less than a majority of SWICs reported that OEC’s efforts to incorporate FirstNet’s network into its training opportunities (46 percent) and grant guidance (40 percent) have been very or moderately useful. Emerging technologies. The majority of SWICs reported that OEC’s efforts to incorporate emerging technologies into its written guidance (93 percent), technical assistance offerings (85 percent), training opportunities (74 percent), workshops (85 percent), and grant guidance (72 percent) have been very or moderately useful. See figure 4 for complete survey data regarding SWICs’ views on the usefulness of OEC’s efforts to incorporate FirstNet’s network and emerging technologies into its offerings. Even following the implementation of FirstNet, public-safety stakeholders told us they expect OEC will play an important role in ensuring interoperable emergency communications, both regarding the FirstNet network and other technologies. For example, 45 of 54 (83 percent) of SWICs we surveyed reported that OEC will likely have a large or moderate role for ensuring interoperable emergency communications once FirstNet’s network is fully operational. Additionally, nearly all (9 of 10) of public-safety organizations we interviewed said that they believe OEC will continue to play an important role in ensuring interoperable emergency communications after the implementation of FirstNet’s network. OEC Has Not Assessed Its Methods for Communicating with External Stakeholders OEC is required to conduct extensive nationwide outreach to support and promote interoperable emergency-communications capabilities by state, regional, local, and tribal governments and public-safety agencies in the event of natural disasters and acts of terrorism and other man-made disasters. According to federal standards for internal control, management should externally communicate the necessary quality information to achieve the entity’s objectives. This includes communicating with external parties and using the appropriate methods of communication. The federal standards state that management should periodically assess the entity’s methods of communication so that the organization has the appropriate tools to communicate quality information throughout and outside of the entity on a timely basis. Most public-safety organizations we interviewed told us that OEC communicates with their organization frequently through committee meetings and other means. For example, 9 of the 10 organizations told us that a key form of communication between their organization and OEC is participation in emergency-communications advisory groups such as SAFECOM, NCSWIC, and PSAC. Furthermore, OEC officials reported that OEC’s guidance documents, plans, tools, and technical assistance offerings are formally provided to the public-safety community through the SAFECOM, NCSWIC, and ECPC distribution lists. Governing body representatives then distribute the information to their organizations and stakeholders. These documents are also available on DHS’s website. 
Furthermore, 4 of the 10 organizations told us that they regularly have direct communications with OEC staff. The large majority of SWICs responded that they are very or moderately satisfied with the communication efforts from both OEC headquarters (81 percent) and OEC coordinators (93 percent). However, some stakeholders identified communication challenges as well as opportunities for OEC to improve communication. For example, approximately one quarter (26 percent) of SWICs said that OEC does not communicate training well, and these SWICs reported that they are either unaware of OEC training opportunities related to FirstNet’s network and other emerging technologies, or that they mostly learn about OEC training opportunities from other sources. See figure 5 below for additional survey information regarding SWICs’ views on how well OEC communicates training opportunities related to FirstNet’s network and other emerging technologies. Also with respect to OEC’s communication efforts with stakeholders, four of six OEC coordinators and 3 of 10 public-safety organizations we interviewed, along with 26 of 54 (48 percent) of the SWICs we surveyed, identified the need for OEC to use additional tools or approaches for improving communication with SWICs and the public-safety community. For example, one coordinator said that there are public-safety stakeholders who are unaware of OEC. Similarly, representatives from a public-safety organization we interviewed told us that OEC should help public-safety stakeholders better understand what OEC does. Both the OEC coordinator and public-safety stakeholders in these examples identified the need for OEC to use social media to improve public-safety stakeholders’ understanding of OEC and its offerings. Additionally, an OEC coordinator told us that each region is different, and unless there is an OEC coordinator who is proactive about communicating information to the public-safety community, then important information does not get out to the appropriate people. The coordinator also said that it is difficult to communicate information to all of the needed stakeholders because he is solely responsible for communicating with many public-safety entities and jurisdictions within multiple states. Furthermore, a SWIC reported that other organizations use social media for communicating during disasters and for notifying interested parties about events and trainings, and that OEC should do the same. OEC officials told us that NPPD recently established a Twitter account that OEC has used to increase awareness of programs, products, and services. However, since the establishment of the account in February 2018 through September 2018, only 23 of NPPD’s 280 tweets and retweets (8.2 percent) made mention of OEC, 15 of which occurred in March 2018. In addition to social media, some public-safety organizations and SWICs identified additional tools or approaches that OEC could use to improve communication with the public-safety community. These tools and approaches include designating an intergovernmental specialist or liaison within OEC to coordinate with public-safety stakeholders, developing additional regional-focused meetings such as conferences and workshops, and creating online or distance-learning opportunities (e.g., online training, webinars, online chat or bulletin board services, etc.). 
Although OEC officials told us that they employ mechanisms to understand the effectiveness of OEC’s programs, products, and services, we found OEC has not specifically assessed its methods of communication. For example, OEC analyzes feedback forms provided at meetings and stakeholder engagements, gathers direct input from stakeholders through in-person and phone discussions and e-mail, tracks the open rate of e-mails and website and blog post traffic, and reviews social media analytics for specific event campaigns. At the time of our review, OEC officials told us that they were developing a formal performance-management program to measure the impact of OEC’s programs on the public-safety and national security/emergency preparedness communities. However, these broad efforts aimed at reviewing the overall programs are not designed for the specific purpose of assessing OEC’s methods of communication, and OEC does not have any plans in place for doing so. Lacking an assessment of its methods of communication, OEC may be missing opportunities to learn which tools and approaches are the most effective and to use those to deliver timely information to public-safety stakeholders. As noted above, this can result in public-safety officials missing trainings or not receiving other helpful information. Furthermore, not using additional methods of communication or tools could contribute to uncertainty among the public-safety community about OEC’s mission and its efforts to improve the interoperability of emergency communications. Conclusions OEC has multiple efforts supporting interoperable emergency communications that the public-safety community relies on to better respond to emergency situations. Although public-safety stakeholders we contacted were generally satisfied with OEC’s communications efforts, OEC could be missing opportunities to use additional tools and approaches, such as social media, to improve communication with public-safety officials. Absent an assessment of its methods of communication, OEC cannot ensure it is using the best methods to provide relevant and timely information on training opportunities, workshops, technical assistance offerings, and other emergency-communications information to the public-safety community. Recommendation for Executive Action OEC should assess its methods of communication to help ensure it has the appropriate tools and approaches to communicate quality information to public-safety stakeholders, and as appropriate, make adjustments to its communications strategy. (Recommendation 1) Agency Comments We provided a draft of this report to DHS for review and comment. In response, DHS provided written comments, which are reprinted in appendix III. DHS concurred with our recommendation and provided an attachment describing the actions it would take to implement the recommendation. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology This report examines (1) the Office of Emergency Communications’ (OEC) and the Federal Emergency Management Agency’s (FEMA) collaborative efforts to develop and implement guidance for grantees using federal grants for interoperable emergency communications; (2) how OEC incorporates FirstNet’s nationwide public-safety broadband network and other emerging technologies into its plans and offerings, and stakeholders’ views regarding those efforts; and (3) the extent to which OEC has assessed its methods of communication. To evaluate OEC’s and FEMA’s collaborative efforts to develop and implement grant guidance, we collected and reviewed documentation relevant to the collaborative effort, including memorandums of agreement, standard operating procedures, and meeting agendas. We assessed OEC’s and FEMA’s actions against the seven key considerations for interagency collaborations. We also interviewed OEC and FEMA Grant Programs Directorate officials who have responsibilities for Department of Homeland Security (DHS) grants. We asked them to discuss their approach to interagency collaboration, including the process to jointly develop grant guidance language. We asked agency officials questions that were based on the key considerations for implementing interagency collaborative mechanisms that we identified in a prior report. To determine how OEC has incorporated FirstNet’s network and other emerging technologies into its plans and offerings, we reviewed relevant OEC documentation, including fact sheets and technical assistance guides. We also reviewed the 2014 National Emergency Communications Plan (NECP) and OEC’s March 2017 biennial report to Congress on the progress toward meeting NECP goals. We interviewed OEC headquarters officials about the agency’s efforts to date, including how OEC develops its offerings and workshops and communicates this information to the public-safety community. We also interviewed 6 of 10 OEC coordinators using a semi-structured interview format to get on-the-ground perspectives from OEC staff who serve as points of contact for public-safety stakeholders. We selected OEC coordinators to achieve variety across geography, population density, tribal presence, and territory representation. We interviewed OEC coordinators to obtain their perspectives as subject matter experts, but their views should not be attributed to OEC’s official agency position. In addition, to obtain stakeholders’ views on OEC’s efforts to incorporate FirstNet’s network and other emerging technologies into plans and offerings, we surveyed all 54 statewide interoperability coordinators (SWIC) from 48 states, five territories, and the District of Columbia. We obtained a list of SWICs from DHS and confirmed additional contact information via e-mail. We conducted a web-based survey to learn SWICs’ perspectives on issues including the importance of incorporating FirstNet’s network and other emerging technologies into OEC’s plans and offerings, OEC’s communication with the public-safety community, and SWICs’ level of satisfaction with OEC’s efforts. To ensure the survey questions were clear and accurately addressed the relevant terms and concepts, we pretested the survey with SWICs from three states: Illinois, Massachusetts, and Texas. These SWICs were selected to get perspectives from officials who have served in the role for at least several years and SWICs who are new to the position.
We administered our survey from May 2018 to July 2018 and received 54 responses for a 100 percent response rate. We also used a semi-structured interview format to obtain views from representatives from 10 public-safety organizations who have expertise in public-safety and federal emergency-communications efforts (see table 1). To identify relevant organizations, we reviewed our prior report that identified 34 organizations that are members of both OEC’s SAFECOM advisory group and FirstNet’s Public Safety Advisory Committee (PSAC). We researched the members to help determine the extent to which each organization is involved in issues related to our review. We selected 10 public-safety organizations to interview on the basis of: (1) this research, (2) information from DHS, and (3) a literature review. Because one association declined our request for an interview, we contacted and interviewed another relevant organization from the original list of 34 member organizations. The views shared by the representatives we interviewed are not generalizable to all public-safety organizations that interact with OEC; however, we were able to secure the participation of organizations that focus on various public-safety issues across federal, state, local, and tribal jurisdictions and thus believe their views provide a balanced and informed perspective on the topics discussed. To evaluate the extent that OEC has assessed its methods of communication, we reviewed OEC’s documentation for collecting stakeholders’ feedback. We also reviewed the interview responses from OEC officials and the public-safety organizations listed in table 1 and the SWIC survey data pertaining to OEC’s communications efforts. We assessed OEC’s efforts against federal standards for internal control regarding external communications and periodic evaluation of its methods of communication. Appendix II: Survey of Statewide Interoperability Coordinators The questions we asked in our survey of statewide interoperability coordinators (SWIC) and the aggregate results of responses to the closed-ended questions are shown below. We do not provide results for the open-ended questions. We surveyed all SWICs from 48 states, five territories, and the District of Columbia. We administered our survey from May 2018 to July 2018 and received 54 responses for a 100 percent response rate. Due to rounding, the aggregated results for each closed- ended question may not add up to exactly 100 percent. For a more detailed discussion of our survey methodology see appendix I. Governance 1. What best describes the Statewide Interoperability Coordinator (SWIC) in your state? 1a. If you selected “Other,” please explain. (Written responses not included) 2. Does the SWIC also serve in the role of the FirstNet State Point of Contact (SPOC)? 0% 2a. If no, how often does the SWIC coordinate with the SPOC on FirstNet’s nationwide public safety broadband network? 2b. If you selected “rarely or never,” please explain. (Written responses not included) OEC Coordination Support - FirstNet’s Nationwide Public Safety Broadband Network The questions in this section ask your opinion about OEC’s efforts to help the public safety community improve interoperable emergency communications capabilities. This section will be about FirstNet’s nationwide public safety broadband network. 3. In your opinion, how important is it for OEC to incorporate FirstNet’s nationwide public safety broadband network into the following areas? Please specify the other area in the box below. 
(Written responses not included) 4. To what extent has OEC incorporated FirstNet’s nationwide public safety broadband network into the following areas? Please specify the other area in the box below. (Written responses not included) 5. In your opinion, how useful have OEC’s efforts to incorporate FirstNet’s nationwide public safety broadband network into the following areas been in helping your state address challenges with its emergency communications? Please specify the other area in the box below. (Written responses not included) 6. Please provide any additional comments you have on OEC’s efforts to address FirstNet’s nationwide public safety broadband network as part of interoperable emergency communications. (Written responses not included) 7. What, if anything, could OEC do to further address FirstNet’s nationwide public-safety broadband network in its interoperable emergency communications efforts? (Written responses not included) 8. In your opinion, to what extent will OEC have a role for ensuring interoperable emergency communications once FirstNet’s nationwide public-safety broadband network is fully operational? 8a. Please explain your response to question 8 in the box below. (Written responses not included) OEC Coordination Support - Emerging Technologies The questions in this section ask your opinion about OEC’s efforts to help the public safety community improve interoperable emergency- communications capabilities. This section will be about other emerging technologies. 9. Should OEC address the following emerging technologies in its interoperable emergency communications efforts? Wireless Local Area Networks (e.g., Wi-Fi) 9a. If you responded “Yes” to other, please specify in the box below. (Written responses not included) 10. In your opinion, how important is it for OEC to incorporate emerging technologies into the following areas? Please specify the other area in the box below. (Written responses not included) 11. To what extent has OEC incorporated emerging technologies into the following areas? Please specify the other area in the box below. (Written responses not included) 12. In your opinion, how useful have OEC’s efforts to incorporate emerging technologies into the following areas been in helping your state address challenges with its emergency communications? Please specify the other area in the box below. (Written responses not included) 13. Please provide any additional comments you have on the usefulness of OEC’s efforts to incorporate emerging technologies into interoperable emergency communications. (Written responses not included) 14. What, if anything, could OEC do to further incorporate emerging technologies into its interoperable emergency communications efforts? (Written responses not included) OEC Communication Efforts The following questions are about OEC’s communication efforts with SWICs and the public safety community. 15. In your opinion, how well does OEC communicate to SWICs training opportunities in the following areas? Emerging technologies (i.e., Wi-Fi, NextGen 911, etc.) 15a. If you responded to other, please specify in the box below. (Written responses not included) 16. How satisfied or dissatisfied are you with the communication efforts from the following OEC organizational levels? 16a. If you responded to other, please specify in the box below. (Written responses not included) 17. In your opinion, are there additional tools or approaches that OEC could use to improve communication with SWICs and the public-safety stakeholder community? 17a. 
Please identify and describe additional tools and approaches in the box below. (Written responses not included) 18. In your opinion, does OEC face any challenges that affect its ability to meet the needs of the public safety community? 18a. Please explain in the box below. (Written responses not included) SAFECOM Grant Guidance The following questions ask your opinion about SAFECOM grant guidance for interoperable emergency communications equipment. OEC develops annual SAFECOM guidance in an effort to provide current information on emergency communications policies, eligible costs, best practices, and technical standards for state, local, tribal, and territorial grantees investing federal funds in emergency communications projects. 19. In your opinion, how clear are the following aspects of the SAFECOM grant guidance for interoperable emergency communications equipment? 19a. If you responded to other, please specify in the box below. (Written responses not included) 20. In the past 2 years, has your state developed supplemental statewide guidance to clarify the SAFECOM grant guidance for interoperable emergency communications equipment? 20a. Please explain in the box below, why your state developed supplemental statewide guidance. (Written responses not included) 21. In your opinion, is there a need to improve the SAFECOM grant guidance for interoperable emergency communications equipment? 21a. If yes, please explain in the box below. (Written responses not included) Closing 22. If you would like to expand upon any of your responses to the questions above, or if you have any other comments about OEC’s interoperable emergency communications efforts, please write them in the box below. (Written responses not included) Appendix III: Comments from the Department of Homeland Security Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Sally Moino (Assistant Director); Ray Griffith (Analyst in Charge); Josh Ormond; Cheryl Peterson; Kelly Rubin; Andrew Stavisky; Sarah Veale; Michelle Weathers; and Ralanda Winborn made key contributions to this report.
Why GAO Did This Study Public-safety communications systems are used by thousands of federal, state, and local jurisdictions. It is vital that first responders have communications systems that allow them to connect with their counterparts in other agencies and jurisdictions. OEC offers written guidance, governance planning, and technical assistance to help ensure public-safety entities have the necessary plans, resources, and training to support emergency communications. FirstNet, an independent authority within the Department of Commerce, is establishing a public-safety network. GAO was asked to review OEC's efforts related to interoperable emergency communications. This report examines (1) OEC's and FEMA's collaborative efforts to develop grant guidance; (2) how OEC incorporates FirstNet's network and other emerging technologies into its plans and offerings; and (3) the extent to which OEC has assessed its methods of communication. GAO evaluated OEC's and FEMA's coordination against GAO's leading practices for interagency collaboration; surveyed all 54 state-designated SWICs; evaluated OEC's communications efforts against federal internal control standards; and interviewed officials that represented various areas of public safety. What GAO Found The Department of Homeland Security's (DHS) Office of Emergency Communications (OEC) and the Federal Emergency Management Agency (FEMA) collaborate on grant guidance to help public-safety stakeholders use federal funds for interoperable emergency communications. GAO found that OEC's and FEMA's efforts generally align with GAO's leading practices for effective interagency collaboration. For example, OEC's and FEMA's memorandum of agreement and standard operating procedures articulate their agreement in formal documents, define their respective responsibilities, and include relevant participants. During this review, the agencies established a process to monitor and assess grantees' compliance with the grant guidance. However, because the grants for 2018 were not yet awarded at the time of GAO's review, GAO was unable to assess the effectiveness of the new process. OEC incorporates the First Responder Network Authority's (FirstNet) nationwide public-safety broadband network and other emerging technologies into various offerings such as written guidance, governance planning, and technical assistance. Public-safety organizations GAO interviewed and statewide interoperability coordinators (SWIC) GAO surveyed were generally satisfied with OEC's communication efforts. OEC has not assessed its methods for communicating with external stakeholders. According to federal internal control standards, management should externally communicate the necessary quality information to achieve the entity's objectives and periodically assess its methods of communication so that the organization has the appropriate tools to communicate quality information on a timely basis. Some SWIC survey respondents and public-safety representatives identified an opportunity for OEC to improve its methods of communication. For example, 26 of the 54 SWICs responded that OEC could use additional tools or approaches, such as social media, for improving communication with its stakeholders. In addition, public-safety officials reported that they have missed training because they were unaware of opportunities. 
Because OEC has not assessed its methods of communication, OEC may not be using the best tools and approaches to provide timely information on training opportunities, workshops, and other emergency communications information to the public-safety community. What GAO Recommends OEC should assess its methods of communication to help ensure it is using the appropriate tools in communicating with external stakeholders. DHS concurred with the recommendation.
Background RRB administers retirement, survivor, disability, unemployment, and sickness benefits for railroad workers and their families under the provisions of the Railroad Retirement Act of 1974, as amended (RRA) and the Railroad Unemployment Insurance Act of 1938, as amended (RUIA). Individuals who work for railroads engaged in interstate commerce, for railroad associations, and for railroad labor organizations are among those covered by RRB’s benefits system instead of Social Security or the federal-state unemployment insurance system. During fiscal year 2016, RRB received approximately $12 billion in funding, about half of which came from payroll taxes levied on railroad workers and their employers. Almost all of the funding was used to pay benefits, including unemployment benefits. In addition, RRB administers Medicare coverage for railroad workers. The railroad retirement system and Social Security system are separate but linked with regard to benefit payments and taxes. Railroad workers and their employers pay the same payroll taxes as other workers covered by Social Security for comparable retirement, disability, unemployment insurance, and Medicare benefits. RRB also collects taxes to cover additional benefits. A financial interchange between the two systems allows funds to be transferred between RRB and Social Security accounts based on the amount of Social Security benefits that workers would have received if they were covered by Social Security, as well as the payroll taxes that would have been collected if the railroad workers were covered by Social Security instead of their own system. According to RRB data, the railroad retirement, survivor, and disability system is projected to remain solvent over at least the next 25 years. Railroad Retirement Act Disability Benefits Under the RRA, RRB provides two distinct disability benefits for railroad workers—total and permanent (T&P) disability and occupational disability. For T&P disability, RRB makes determinations about railroad workers’ disability claims independent of but using the same general criteria that the Social Security Administration (SSA) uses to administer its Disability Insurance program. That is, a worker must have a medically determinable physical or mental impairment that (1) has lasted (or is expected to last) at least 1 year or is expected to result in death, and (2) prevents them from engaging in substantial gainful activity, defined as work activity that involves significant physical or mental activities performed for pay or profit. In other words, these workers are essentially deemed unable to engage in any regular employment. According to RRB’s 2017 Annual Report, at the end of fiscal year 2016, the agency was paying 10,300 T&P disability beneficiaries an average of $1,911 each per month for a total of about $236 million annually. In fiscal year 2016, data provided by RRB indicate that the agency approved about 78 percent of the 843 applications for T&P disability benefits it received. Occupational disability is a unique benefit for railroad workers. RRB provides these benefits to workers who have physical or mental impairments that prevent them from performing their specific railroad job, even though they may be able to perform other kinds of work. For example, a railroad engineer who cannot frequently climb, bend, or reach, as required by the job, may be found to be occupationally disabled. 
To be eligible for occupational disability benefits, workers must meet certain labor- and management-negotiated disability criteria as well as certain age and service requirements. Railroad workers age 60 and older with at least 10 years of service are eligible to apply, as well as workers of any age with at least 20 years of service. Workers determined to be eligible for occupational disability benefits may be able to return to the workforce, but generally not to their original occupation. According to RRB, at the end of fiscal year 2016, the agency was paying 21,000 occupational disability beneficiaries an average of $3,053 each per month for a total of about $769 million annually. In fiscal year 2016, data provided by RRB indicate that the agency approved about 98 percent of the 984 applications for occupational disability benefits it received. Continuing Disability Reviews Federal law generally requires RRB to conduct CDRs to determine if beneficiaries continue to meet the disability requirements of the law. RRB conducts two overall types of CDRs: medical and earnings. In a medical CDR, disability examiners review a beneficiary’s medical records and may order additional examinations to determine whether the individual’s medical condition has improved to the point where it is no longer considered disabling. In an earnings CDR, disability examiners review earnings to determine whether beneficiaries are earning income that exceeds program limits, which could make them ineligible for benefits. If the agency, while conducting an earnings review, obtains information that indicates the beneficiary’s medical condition has improved, RRB can initiate a medical CDR as well. Similarly, if RRB discovers earnings above program limits while developing evidence for a medical review, the agency may initiate an earnings CDR. Medical Continuing Disability Reviews RRB generally conducts medical reviews with a frequency determined by the beneficiary’s likelihood of medical improvement, which may fall into one of three categories: medical improvement expected (MIE)—when a beneficiary’s impairment demonstrates medical improvement, when improvement is unpredictable, or when medical intervention may change the impairment’s severity, among other reasons; medical improvement possible (MIP)—when a beneficiary’s disability may improve, or the likelihood of medical improvement within 3 years is not probable; or medical improvement not expected (MINE)—when a beneficiary’s impairment meets certain listings such as blindness or hearing loss and generally when a beneficiary has attained 54 ½ years of age. If a beneficiary’s disability is classified as MIE, RRB generally reviews the beneficiary’s continuing eligibility for disability benefits at intervals from 6 months to 18 months. For MIP cases, RRB mails a questionnaire at least once every 3 years that asks a beneficiary to update medical and earnings information. If the self-reported information indicates medical improvement or a return to work, RRB may conduct a CDR. For MINE cases, RRB’s regulations state that it will not routinely review the beneficiary’s continuing eligibility. (See fig. 1.) According to RRB’s guidance, factors such as age and work experience may also affect how and when RRB classifies a beneficiary as MINE, or whether it should schedule (or “diary”) a CDR. RRB maintains a list of scheduled CDRs in its CDR Call-Up program. 
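To illustrate how these medical improvement categories drive the timing of medical reviews, the following is a minimal sketch in Python. It is an illustration only, not RRB’s system: the Beneficiary fields, category labels, and next_medical_review function are hypothetical, and the rules encoded are simply those described above (reviews at 6- to 18-month intervals for MIE, a questionnaire at least every 3 years for MIP, and no routine review for MINE, generally including beneficiaries who have attained age 54 1/2).

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical category labels; RRB's actual codes may differ.
MIE = "medical improvement expected"
MIP = "medical improvement possible"
MINE = "medical improvement not expected"


@dataclass
class Beneficiary:
    """Illustrative record; the field names are assumptions, not RRB's schema."""
    case_id: str
    age: float
    category: str


def next_medical_review(b: Beneficiary) -> Optional[str]:
    """Return a simplified description of the next scheduled review action.

    Encodes the general rules described in the text: MIE cases are reviewed
    at 6- to 18-month intervals, MIP cases receive a questionnaire at least
    once every 3 years, and MINE cases (generally including beneficiaries
    who have attained age 54 1/2) are not routinely reviewed.
    """
    if b.category == MINE or b.age >= 54.5:
        return None  # no routine review scheduled
    if b.category == MIE:
        return "schedule a medical CDR in 6 to 18 months"
    if b.category == MIP:
        return "mail a medical and earnings questionnaire within 3 years"
    return "no valid category on file; refer to a disability examiner"


if __name__ == "__main__":
    for case in [Beneficiary("A", 48, MIE),
                 Beneficiary("B", 50, MIP),
                 Beneficiary("C", 57, MIP)]:
        print(case.case_id, "->", next_medical_review(case))
```

In practice, RRB tracks these scheduled reviews in its CDR Call-Up program rather than in code, and examiners also weigh factors the sketch omits, such as work experience.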
It uses this program to both identify CDRs that require completion and to schedule CDRs based on the likelihood of medical improvement. RRB also started conducting CDRs in 2015 that target cases at high risk of potential fraud and which officials said could result in the termination of benefits. Earnings Continuing Disability Reviews In addition to medical reviews, RRB conducts earnings CDRs for beneficiaries detected with earnings that exceed disability program limits. Most earnings CDRs are triggered by unreported earnings detected through RRB’s policing operation. Policing for earnings involves an annual data match by SSA in which it uses RRB’s disability beneficiary database and Internal Revenue Service (IRS) earnings data to detect unreported earnings. (See fig. 2.) In this process, RRB provides SSA with a record of all disability beneficiaries, and SSA matches these beneficiaries against IRS earnings data. For those cases in which earnings are identified, RRB has an earnings reconciliation process to determine which beneficiaries may be excluded from an earnings CDR and which should receive one. For example, RRB considers whether a beneficiary has any disability-related work expenses, such as the cost of special transportation or medication, which are deducted from any earnings, or if the beneficiary has reached full retirement age. (See fig. 2.) In addition, a beneficiary who returns to work or has earnings from employment is required to report that information to RRB, and the agency may initiate a CDR depending on the circumstances. If a potential overpayment is identified as a result of a CDR, the Disability Benefits Division refers the case to another division within RRB to calculate the overpayment amount. Most Beneficiaries Are Not Subject to Medical CDRs, and Data Used to Identify Unreported Earnings and Potential Overpayments Are Outdated RRB Completed CDRs for Slightly More Than 1 Percent of Disability Beneficiaries in Fiscal Years 2014-2016 Over the 3 years for which RRB was able to provide us with complete data, the agency completed 427 CDRs. This number represents CDRs for slightly more than 1 percent of the railroad workers who received disability benefits during that period, an average of about 35,000, including both occupational and T&P beneficiaries. Most of the reviews it completed from fiscal years 2014 through 2016 were medical CDRs, but earnings CDRs identified most of the ineligible beneficiaries and overpayments. Of the 427 CDRs completed, 209 were medical CDRs and 163 were earnings CDRs. In 55 cases, both a medical and an earnings CDR were completed. Forty-three of the scheduled medical CDRs completed were based on medical improvement criteria. Another 166 of the medical CDRs completed were based on “high-risk” selection criteria that were developed after fraudulent activities came to light among Long Island Rail Road (LIRR) beneficiaries in the late 1990s through 2008. RRB uses the high-risk selection criteria to target occupational disability beneficiaries who share certain characteristics that are common to the employees who participated in the LIRR fraud scheme. Overall, RRB determined that about 86 percent of beneficiaries remained eligible for benefits as a result of all of the CDRs completed in fiscal years 2014-2016. (See fig. 3.) 
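The annual policing and reconciliation process described above is essentially a screening rule: match beneficiaries against reported earnings, deduct any disability-related work expenses, exclude beneficiaries who have reached full retirement age, and open an earnings CDR only when the remaining earnings exceed program limits. The sketch below illustrates that logic under stated assumptions; the field names and select_for_earnings_cdr function are hypothetical, and the earnings limit shown is a placeholder rather than an actual program threshold.

```python
from dataclasses import dataclass
from typing import List

# Placeholder annual earnings limit, used here for illustration only;
# it is not an actual disability program threshold.
ANNUAL_EARNINGS_LIMIT = 10_000.0


@dataclass
class MatchedBeneficiary:
    """Illustrative record from the annual data match; fields are assumed."""
    beneficiary_id: str
    matched_earnings: float          # earnings reported to IRS for the tax year
    work_expenses: float = 0.0       # disability-related work expenses
    reached_full_retirement_age: bool = False


def select_for_earnings_cdr(matches: List[MatchedBeneficiary]) -> List[str]:
    """Return the IDs of beneficiaries who should receive an earnings CDR.

    Mirrors the reconciliation steps described in the text: beneficiaries at
    full retirement age are excluded, disability-related work expenses are
    deducted from matched earnings, and only remaining earnings above the
    program limit trigger a review.
    """
    selected = []
    for m in matches:
        if m.reached_full_retirement_age:
            continue  # excluded from earnings CDRs
        countable_earnings = max(m.matched_earnings - m.work_expenses, 0.0)
        if countable_earnings > ANNUAL_EARNINGS_LIMIT:
            selected.append(m.beneficiary_id)
    return selected


if __name__ == "__main__":
    cases = [
        MatchedBeneficiary("001", matched_earnings=18_000, work_expenses=9_500),
        MatchedBeneficiary("002", matched_earnings=18_000),
        MatchedBeneficiary("003", matched_earnings=25_000,
                           reached_full_retirement_age=True),
    ]
    print(select_for_earnings_cdr(cases))  # prints ['002']
```

Because the matched earnings come from IRS data that can be as much as 2 years old, as discussed later in this report, a screen of this kind can flag an overpayment only well after it begins to accrue.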
Medical CDRs Identified Few Ineligible Beneficiaries and Overpayments During fiscal years 2014-2016, RRB completed a total of 43 medical CDRs for beneficiaries–about 0.1 percent of disability beneficiaries–that were scheduled based on beneficiaries’ medical improvement category. Our analysis of RRB’s data and policies suggests that RRB completes few medical CDRs relative to the total number of disability beneficiaries because it has a high percentage of older disability beneficiaries who may not be subject to a medical CDR. According to the data provided by RRB, about 90 percent of individuals who received a disability payment in fiscal year 2016 were age 55 or older (see fig. 4), and RRB’s Disability Claims Manual states that at age 54½, a combination of medical and vocational factors, such as medical condition, age, and work experience, may preclude a return to work. More specifically, the manual instructs disability examiners to classify beneficiaries over age 54½ as “medical improvement not expected” because of the remote likelihood that they will be able to engage in medium or heavy work. Scheduled medical CDRs resulted in few terminations and identified few overpayments. Data provided by RRB indicate that of the 43 medical CDRs completed during fiscal years 2014-2016, 3 ineligible beneficiaries were identified and 1 overpayment of $28,000 was identified and calculated. RRB determined that 40 of the 43 beneficiaries (93 percent) continued to meet the appropriate disability criteria for occupational or T&P disability, as applicable, and qualify for benefits (see sidebar). These results largely mirror RRB’s initial approval rates for disability benefits. In fiscal year 2016, 89 percent of all disability applicants were approved for benefits. RRB Developed Criteria for Conducting Additional Medical CDRs to Target Cases at Risk for Fraud, but These CDRs Identified No Ineligible Beneficiaries in Two Years In fiscal year 2015, RRB expanded the use of medical CDRs to include certain high-risk occupational disability cases that would previously only have been selected for a CDR if RRB received a report of medical recovery or identified earnings that could affect entitlement to benefits. As part of its Disability Program Improvement Plan, RRB developed selection criteria to target cases at high risk for potential fraud that could result in termination of benefits. According to RRB officials, the criteria for targeting these cases are based on characteristics common to the employees who participated in the LIRR fraud scheme. In order to fall within the high-risk group, a beneficiary must (1) have an occupational disability, (2) have an orthopedic or psychological impairment, (3) be under age 55, and (4) not have a disability freeze. Despite these targeted criteria, the 166 high-risk CDRs completed in fiscal years 2015 and 2016 identified no ineligible beneficiaries and no overpayments. According to a senior RRB official, the agency is not yet ready to abandon its high-risk CDR effort, and it continues to consider these reviews as potentially effective. However, high-risk CDR outcomes raise questions about the value and benefit of RRB dedicating resources to conduct these additional reviews. Earnings CDRs Identified the Greatest Number of Ineligible Beneficiaries and Overpayments, but Outdated Earnings Information Limits Their Effectiveness Earnings CDRs resulted in a higher percentage of terminations and identified more overpayments than scheduled and high-risk medical CDRs combined. 
During fiscal years 2014-2016, RRB completed 163 earnings CDRs. Most of these earnings CDRs (127) were initiated as a result of RRB’s annual earnings policing effort, in which RRB’s beneficiary database is matched against Internal Revenue Service earnings data. Other CDRs were initiated as a result of self-reported earnings information from beneficiaries. Over this 3-year period, earnings CDRs identified 47 ineligible beneficiaries and terminated their benefits. During this same period, at least $970,550 in overpayments had been calculated for earnings CDRs completed during fiscal years 2014-2016. However, earnings CDRs may have identified additional overpayments that are not reflected in this amount because RRB is slow to calculate them. We determined that the overpayment data RRB provided for CDRs completed during 2014-2016 were incomplete. For example, a case file review of six randomly selected earnings CDRs completed in fiscal year 2016 found that in three of the cases, the Retirement and Survivor Benefits Division (RSBD), the office responsible for calculating overpayments, had not calculated the overpayments identified by those 2016 reviews as of July 2017. RRB officials acknowledged delays of a year or more in calculating overpayments for disability beneficiaries identified by CDRs, and they acknowledged that RRB lacks a standard time frame for doing so. The officials attributed the delays to competing priorities and staffing shortages within RSBD. RRB has no plans to establish a standard time frame for processing overpayments identified through CDRs. Identifying and calculating overpayments in a timely manner are important to RRB’s long-term performance goal related to payment accuracy, as outlined in its strategic plan. Further, federal internal control standards state that transactions should be recorded promptly to maintain their relevance and value to management in controlling operations. In addition, although RRB’s annual earnings policing effort has identified numerous beneficiaries with earnings over program limits as well as overpayments, the data RRB uses for its policing effort can be up to 2 years old. The data RRB uses to identify unreported earnings and determine whether it should initiate a CDR are based on outdated IRS earnings information. For example, income earned in calendar year 2014 that is filed with the IRS in 2015 would not become available for earnings policing until 2016. Further, the earnings discovered during the course of a CDR may be even older than 2 years. Our review of the six earnings CDRs completed in fiscal year 2016 found the earned income in question ranged from 2011 through 2013. RRB officials acknowledged that the data RRB currently uses for its policing effort cause delays in identifying earnings. When overpayments are not identified in a timely manner, RRB’s ability to detect when a beneficiary is not eligible for benefits is hindered, thereby increasing the potential for lost federal dollars. In addition, the delay may also cause larger overpayments since undetected overpayments can accrue over several years. We previously recommended that RRB explore options to obtain more timely earnings data for use in making disability benefit eligibility determinations, which includes CDRs. In response, RRB officials said one step they have recently taken is to use The Work Number, which includes payroll data from over 5,500 employers nationwide, on a case-by-case basis for CDRs to obtain more recent earnings information from employers for a specific beneficiary.
In addition, RRB contacts employers directly to obtain earnings information needed for CDRs. However, according to an RRB official, IRS earnings data are currently the only source to which RRB has access for earnings policing that includes all of its disability beneficiaries. RRB has considered conducting earnings policing using the Department of Health and Human Services’ quarterly earnings data from the National Directory of New Hires, which includes the most recent eight quarters of wages reported from all states. In its budget submissions for fiscal years 2017-2019, RRB included a legislative proposal seeking access to these quarterly earnings data, since access is limited by statute. Several federal agencies, including the Departments of the Treasury, Education, and Housing and Urban Development, as well as the Social Security Administration, are currently authorized by law to use data from the National Directory of New Hires to verify program eligibility and detect and prevent overpayments. Providing RRB with similar access to more recent earnings data would enable it to identify potential overpayments sooner than is currently possible. SSA has legal authority to access quarterly wage data from the National Directory of New Hires for the purpose of making disability benefit eligibility determinations, which includes CDRs. In March 2017, SSA implemented the Quarterly Earnings Project in which it matched certain Social Security Disability Insurance beneficiaries against these earnings data with the goal of reducing overpayments. According to SSA officials, the project identified beneficiaries with substantial earnings, on average, 1 year earlier when using quarterly wage data instead of earnings data from the IRS—the data RRB currently uses to conduct its annual earnings match. SSA officials project that the Quarterly Earnings Project will achieve an estimated $10.3 million in savings and benefit terminations in 22 percent of the roughly 10,000 cases selected for review in fiscal year 2017. RRB’s Oversight of Its CDR Program Has Focused on High-Risk Reviews Rather Than Overall Program Data RRB Has Reviewed High-Risk CDRs, but Outcomes Call This Effort into Question RRB’s Program Evaluation and Management Services (PEMS), which is tasked with conducting reviews to ensure efficient program performance, has conducted two internal reviews of the high-risk medical CDRs since they were first implemented in 2015. PEMS concluded in its 2016 report that conducting high-risk CDRs based solely on the likelihood of medical improvement demonstrated no return on investment. PEMS officials recommended that the Disability Benefits Division focus its resources on investigating non-reported work and earnings rather than on developing medical evidence; however, RRB continues to dedicate resources to developing medical evidence for high-risk CDRs, and a senior RRB official said the agency is not ready to abandon its high-risk CDR effort. RRB officials said they plan to track certain annual measures for high-risk CDRs, such as the number of cases referred to the OIG for potential fraud, CDR outcomes (continuances, suspensions, and terminations), and any overpayments identified. Our findings and PEMS’s 2016 conclusions indicate that these high-risk medical CDRs have not been effective in identifying ineligible beneficiaries or potential fraud.
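For reference, the four criteria that define this high-risk group (an occupational disability, an orthopedic or psychological impairment, age under 55, and no disability freeze) amount to a simple case screen. The following is a minimal sketch of that screen; the record fields and is_high_risk function are hypothetical and do not represent RRB’s case-selection system.

```python
from dataclasses import dataclass


@dataclass
class DisabilityCase:
    """Illustrative case record; the field names are assumptions."""
    benefit_type: str        # "occupational" or "total and permanent"
    impairment_group: str    # e.g., "orthopedic", "psychological", "cardiac"
    age: float
    has_disability_freeze: bool


def is_high_risk(case: DisabilityCase) -> bool:
    """Apply the four high-risk selection criteria described in the text."""
    return (case.benefit_type == "occupational"
            and case.impairment_group in ("orthopedic", "psychological")
            and case.age < 55
            and not case.has_disability_freeze)


if __name__ == "__main__":
    sample = DisabilityCase("occupational", "orthopedic", 52, False)
    print(is_high_risk(sample))  # prints True
```

As the outcomes above indicate, cases selected on these characteristics alone overwhelmingly remained eligible for benefits.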
High initial approval rates for occupational disability benefits—over 96 percent for fiscal years 2008-2016—may be an indication that high-risk CDRs for occupational beneficiaries would result in most beneficiaries continuing to qualify for benefits, since the same disability criteria are used to evaluate medical condition for initial decisions and CDRs. By continuing to conduct high-risk CDRs, RRB may be expending resources that could be used for other purposes that are more effective in identifying ineligible beneficiaries.
RRB Does Not Routinely Compile and Analyze CDR Program Data for All CDRs
Aside from RRB's efforts to oversee its high-risk medical reviews, it does not routinely analyze program data for its CDR operations as a whole. The lack of routine data collection and analysis limits its ability to identify potential gaps in oversight and monitor program performance. RRB officials said compiling comprehensive information for the CDR program can be challenging because CDR data are housed in multiple systems, some of which use outdated software and are not compatible with each other. For example, information related to CDR overpayments is housed in at least three separate systems. Further, according to RRB officials, some case information is only available in paper files. RRB has taken some steps to improve its ability to access information, such as converting its paper files to electronic images, but the information in the images cannot easily be analyzed. RRB was able to compile data for fiscal years 2014-2016 for our review, which made it possible for us to analyze different aspects of the CDR program, such as the number of medical and earnings CDRs completed each year, the amount of overpayments identified as a result of CDRs, and CDR outcomes. However, RRB was unable to provide complete historical data for CDRs completed before fiscal year 2014. If RRB routinely compiled and analyzed these data for its own purposes, it could better monitor CDR program performance. This would be consistent with federal internal control standards, which state that management should use program data for effective program monitoring. Routinely compiling and analyzing CDR program data would also allow RRB to identify potential gaps in oversight. For example, our analysis of the data provided by RRB indicated that 10 percent of the 427 cases for which it completed a CDR during fiscal years 2014-2016 lacked a valid initial medical improvement category—medical improvement expected, possible, or not expected—which is assigned when beneficiaries are first awarded benefits. Because RRB schedules medical CDRs based on medical improvement category, the incomplete category data for 10 percent of the CDRs completed during fiscal years 2014-2016 raise questions as to whether RRB is scheduling and conducting medical CDRs for everyone it should be. RRB officials said the only way to verify a beneficiary's medical improvement category is to perform an individual query in the CDR Call-Up program or check the paper files, which could be very time-consuming and labor-intensive to do for all beneficiaries. RRB also lacks data on the total number of beneficiaries currently in each medical improvement category. Without these data, RRB cannot anticipate how many medical CDRs it should expect to conduct and when.
Federal internal control standards state that management should use quality information to make informed decisions, and that quality information is current, complete, and accurate. RRB's ability to monitor the performance of its CDR program is also limited because it does not track all costs or benefits of conducting CDRs. For example, RRB officials told us they do not analyze certain program data, such as administrative costs and recovered overpayments for CDRs. Analyzing these program data would enable RRB to compare any savings produced by CDRs against the cost of administering them. RRB's strategic plan states that the agency measures the efficiency of its agency-wide program integrity efforts by comparing any savings they produce against the cost of administering the activities. According to the plan, program integrity efforts that can identify savings include computer matching to prevent payments to deceased beneficiaries and referring suspected fraud to the OIG. In its fiscal year 2017 Performance and Accountability Report, RRB reported a return on investment of $4.18 for each dollar spent on combined program integrity efforts in fiscal year 2016. However, we do not know how CDRs specifically contributed to this return on investment or what savings resulted from CDRs alone.
Conclusions
As a steward of tax dollars, it is important that RRB take all necessary steps to operate and manage its disability program effectively and efficiently, while minimizing overpayments. RRB's continued reliance on outdated earnings information to identify beneficiaries who, at the time a CDR is conducted, may no longer be eligible for benefits increases the likelihood of making improper benefit payments and having to try to recover the money in the future. In addition, even for those overpayments that RRB identifies, it lacks a standard for processing them in a timely manner, which increases the potential loss of federal dollars. Furthermore, despite an RRB report finding that high-risk medical CDRs have not been effective, the agency expends resources on these reviews that could be used for other purposes that are more effective in identifying ineligible beneficiaries. RRB's lack of routine data collection and analysis hampers its ability to monitor program performance and determine what changes, if any, should be made to improve the CDR program, including determining the number of beneficiaries in each medical improvement category and the costs and benefits of conducting the various types of reviews. While paper files and disparate data systems present challenges to collecting and analyzing program data and may hinder oversight efforts, RRB could be doing more with the data it has to identify potential gaps in oversight.
Matter for Congressional Consideration
To improve RRB's ability to make accurate disability benefit eligibility determinations, including CDRs, and to decrease the potential for making improper payments, Congress should consider granting RRB access to the Department of Health and Human Services' quarterly earnings information from the National Directory of New Hires database. (Matter for Consideration 1)
Recommendations for Executive Action
To enhance RRB's ability to manage and oversee its CDR program, we are making the following three recommendations to the Railroad Retirement Board: RRB should develop a standard for the timely processing of disability program overpayments identified through CDRs.
(Recommendation 1) RRB should consider whether to reallocate resources used for high-risk CDRs to other CDR efforts that produce more effective outcomes. (Recommendation 2) RRB should routinely compile and analyze CDR program data, such as the number of cases selected for review, the number of beneficiaries in each medical improvement category, outcomes, and the costs and benefits of conducting CDRs, to improve program oversight. (Recommendation 3)
Agency Comments and Our Evaluation
We provided a draft of this report to RRB for review and comment. RRB provided written comments, which are reproduced in appendix II. The agency also provided additional technical comments, which have been incorporated as appropriate. RRB agreed with all three of the recommendations and noted that it has already taken initial steps to implement them. We are sending copies of this report to the appropriate congressional committees, the Railroad Retirement Board, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or curdae@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Railroad Retirement Board Funding and Expenditures, Fiscal Years 2007 Through 2016
Railroad Retirement Board (RRB) Funding
In fiscal years 2007 through 2016, RRB received, on average, approximately $11.6 billion annually from multiple sources to fund its programs. RRB's budget in fiscal year 2016, the most recent year for which data are available, was $12.4 billion. (See table 1.) Railroad retirement, survivor, disability, unemployment, and sickness benefit payroll taxes are the primary funding source for RRB and totaled $5.9 billion in fiscal year 2016. In fiscal years 2007-2016, these taxes averaged $5.2 billion annually. Railroad employers and employees pay Tier I taxes, which are the same as the taxes levied on Social Security-covered employers and workers. These taxes fund benefits similar to Social Security retirement and disability benefits. Employers and employees also pay Tier II taxes to finance additional railroad retirement benefits. According to RRB data, Tier I and Tier II taxes for fiscal year 2016 amounted to $2.8 billion and $3.1 billion, respectively. Railroad employers also paid approximately $117.2 million in unemployment insurance taxes in fiscal year 2016. The second major source of RRB funding consists of transfers from the Social Security trust funds under a financial interchange between the two systems. The financial interchange is intended to place the Social Security Old-Age and Survivors Insurance Trust Fund and the federal Hospital Insurance Trust Fund in the same condition they would have been in had railroad employees been covered by the Social Security Act and Federal Insurance Contributions Act, and had the Railroad Retirement Act not been enacted. The financial interchange calculation involves computing the amount of Social Security taxes that would have been collected on railroad employment and computing the amount of benefits that Social Security would have paid to railroad retirement beneficiaries during the same fiscal year.
When benefit reimbursements exceed payroll taxes, the difference, with an allowance for interest and administrative expenses, is transferred from the Social Security Trust Funds to RRB. If taxes exceed benefit reimbursements, which has not happened since 1951, a transfer would be made in favor of the Social Security Trust Funds. According to RRB data, the net financial interchange transfer to the Social Security Equivalent Benefit Account during fiscal year 2016 amounted to about $4.1 billion; in fiscal years 2007-2016, these transfers averaged $4.0 billion annually. The third major source of RRB funding is transfers from the National Railroad Retirement Investment Trust, the trust fund that holds assets to help pay a portion of RRB benefits. The Trust was established pursuant to Section 105 of the Railroad Retirement and Survivors' Improvement Act of 2001 and is the vehicle for investing RRB retirement benefit assets in non-government securities. Under the Trust's investment guidelines, assets are invested in both government securities and private equities, unlike the Social Security Trust Funds, which are only invested in government securities. The 2001 act also provided for the transfer of excess RRB retirement, survivor, and disability benefit payroll taxes that are not needed to pay benefits to the Trust for investment, and for transfers from the Trust to the Treasury to assist the RRB in meeting its benefit obligations. The Trust has not received transfers from the RRB since the end of fiscal year 2004. During fiscal year 2016, however, the Trust transferred a total of $1.4 billion to the Treasury for payment of RRB benefit obligations; for fiscal years 2007-2016, these transfers averaged $1.6 billion annually. According to RRB data, the value of Trust-managed assets at the end of fiscal year 2016 was $25.1 billion. The fourth major source of RRB funding is appropriations. According to RRB officials, most of these appropriations are derived from the taxation of railroad retiree benefits and primarily fund benefit payments. These appropriations also fund specific efforts such as administrative costs. In fiscal year 2016, RRB received $790.6 million in federal appropriations; for fiscal years 2007-2016, RRB's annual appropriation averaged $655.4 million.
Railroad Retirement Board Expenditures
In fiscal years 2007-2016, RRB expended, on average, approximately $11.6 billion annually to fund its programs. (See table 2.) RRB's expenditures in fiscal year 2016, the most recent year for which data are available, were $12.8 billion, which included approximately $12.5 billion for benefit payments, $156.0 million for salaries and expenses, and $98.0 million for interest expenses due to borrowing from Treasury for the financial interchange.
Railroad Retirement System Solvency
By law, RRB is required to prepare an annual report to the President and Congress containing a 5-year projection of revenues to and payments from the Railroad Retirement Account (RRA). In its June 2017 report, RRB projected that cash flow problems would not occur during the 25-year projection period (calendar years 2017-2041). The report also recommended no change in employer and employee tax rates and no diversion of taxes from the RRA to the Railroad Unemployment Insurance Account (RUIA).
Appendix II: Comments from the Railroad Retirement Board
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Mark Glickman (Assistant Director), Arthur T. Merriam Jr.
(Analyst-in-Charge), Meredith Moore, and Jill Yost made significant contributions to this report. Also contributing to this report were Daniel Concepcion, Erin Godtland, Joel Green, Nicole Jarvis, David Lehrer, Emei Li, Olivia Lopez, Sheila McCoy, Phillip McIntyre, Jean McSween, Mimi Nguyen, James Rebbe, Anjali Tekchandani, Frank Todisco, and Kathleen van Gelder.
Related GAO Products
Social Security Disability: SSA Could Increase Savings by Refining Its Selection of Cases for Disability Review. GAO-16-250. Washington, D.C.: February 11, 2016.
Railroad Retirement Board: Actions Needed to Reduce Continued Risk of Fraud and Improper Payments. GAO-15-535T. Washington, D.C.: May 1, 2015.
Railroad Retirement Board: Total and Permanent Disability Program at Risk of Improper Payments. GAO-14-418. Washington, D.C.: June 26, 2014.
Use of the Railroad Retirement Board Occupational Disability Program across the Rail Industry. GAO-10-351R. Washington, D.C.: February 4, 2010.
Railroad Retirement Board: Review of Commuter Railroad Occupational Disability Claims Reveals Potential Program Vulnerabilities. GAO-09-821R. Washington, D.C.: September 9, 2009.
Why GAO Did This Study
RRB is an independent agency that administers disability benefits for railroad workers. In fiscal year 2016, about 31,000 railroad workers with disabilities received $1.1 billion in disability benefits. RRB is generally required to periodically assess beneficiaries' medical condition or earnings through continuing disability reviews (CDRs) to verify that they remain eligible for disability benefits. This report examines the extent to which RRB (1) conducts medical and earnings CDRs to ensure the continued eligibility of disability beneficiaries, and (2) oversees the CDR program. GAO analyzed data provided by RRB for CDRs completed in fiscal years 2014-2016, the only years for which complete data were available. GAO also reviewed RRB's policies and procedures, a nongeneralizable random sample of 14 CDR cases that were completed in fiscal year 2016, and relevant federal laws and regulations; and interviewed RRB officials.
What GAO Found
In fiscal years 2014-2016, the Railroad Retirement Board (RRB) completed continuing disability reviews (CDRs) of various types for 427 beneficiaries (see figure below), covering slightly more than 1 percent of the railroad workers who received disability benefits during that period. These reviews included:
Scheduled Medical Reviews – These are scheduled at different intervals depending on the likelihood of medical improvement. RRB data suggest that most beneficiaries are not subject to these CDRs because they are older than 54½, which RRB defines as the age at which they are unlikely to return to work. Of 43 medical CDRs that were scheduled, RRB identified 3 ineligible beneficiaries and 1 overpayment of about $28,000.
High-Risk Reviews – In fiscal year 2015, RRB began conducting medical CDRs on cases it considered to be at high risk for fraud. It completed 166 of these reviews in fiscal years 2015 and 2016, but none identified any ineligible beneficiaries or overpayments.
Earnings Reviews – During fiscal years 2014-2016, 163 earnings CDRs identified 47 ineligible beneficiaries and at least $970,550 in overpayments. However, RRB uses earnings information that can be up to 2 years old, thereby delaying the detection of ineligible beneficiaries and increasing the potential for lost federal dollars. Other federal agencies have access to a national federal database with more recent earnings data. Providing RRB access to these data would enable it to identify overpayments sooner.
Medical + Earnings Reviews – In some cases, RRB conducts both a medical and an earnings CDR. RRB's data do not allow GAO to attribute the outcome to either type of CDR.
RRB oversight has primarily been limited to conducting two internal reviews of high-risk medical CDRs, one of which concluded, consistent with the above results, that these CDRs demonstrated no return on investment. Nevertheless, RRB continues to do them. RRB does not routinely compile and analyze data for all of the CDRs it conducts, which limits its ability to identify potential gaps in oversight and to monitor program performance. For example, RRB lacks data that would help it determine how many medical CDRs it should expect to conduct. RRB officials said compiling data can be challenging because it uses multiple data systems. However, by more efficiently collecting and compiling key CDR data, RRB could enhance its capability to routinely assess program performance.
What GAO Recommends
Congress should consider giving RRB access to the National Directory of New Hires, a national database of wage and employment information that would enable it to identify potential overpayments sooner. GAO is also making three recommendations to RRB, including that it reconsider the purpose and value of high-risk CDRs and routinely compile and analyze CDR data to improve oversight. RRB agreed with these recommendations.
Observations on Grants Management Challenges
Streamlined Grants Management Is Critical to Effective Use of Federal Funds
Our work has shown that when grants management requirements are duplicative, unnecessarily burdensome, and conflicting, agencies must direct resources toward meeting them—which can make the agency's programs and services less cost effective and increase burden for grant recipients. For example, in 2016, we reviewed administrative requirements for federal research grants. Officials from universities and stakeholder organizations we interviewed identified common factors that added to their administrative workload and costs for complying with selected requirements. These factors included: variation in agencies' implementation of requirements, pre-award requirements for applicants to develop and submit detailed documentation for grant proposals, and increased prescriptiveness of certain requirements. We have also reported on a number of initiatives intended to address the challenges grantees encounter throughout the grants lifecycle. These initiatives include consolidating and revising grants management circulars, simplifying the pre-award phase, promoting shared information technology solutions for grants management, and improving the timeliness of grant closeout and reducing undisbursed balances. Our work includes reviews of efforts to submit the Consolidated Federal Financial Report through a single system and to standardize notices of award to reduce reporting burden. In addition, the Digital Accountability and Transparency Act of 2014 (DATA Act) required the Office of Management and Budget (OMB) to establish a pilot program to develop recommendations for reducing reporting burden for recipients of federal awards. In 2016 and 2017, we reported on the design and implementation of the OMB pilot program, known as the Section 5 Pilot, aimed at developing recommendations for reducing reporting burden for grant recipients and contractors. We made a number of recommendations to improve the design of the Section 5 Pilot to ensure its consistency with leading practices for pilot design, which OMB has implemented. We continue to monitor implementation of the Section 5 Pilot through ongoing work and look forward to keeping the subcommittee informed about our findings.
Transparency of Grant Spending Can Inform Decision Making
To provide increased transparency to agencies, Congress, and the public, the DATA Act required OMB, the Department of the Treasury (Treasury), and other federal agencies to increase the types of information available on the more than $3.7 trillion in annual federal spending, including federal spending on grants. The law requires OMB and Treasury to establish data standards to enable the reporting and tracking of agency spending at multiple points in the spending lifecycle. Since enactment, OMB, Treasury, and federal agencies have addressed many of the policy and technical challenges presented by the act's requirements, including standardizing data elements across the federal government, linking data contained in agencies' financial and award systems, and expanding the types of data reported. However, in a 2017 report, we found inconsistencies in key award data elements and issues with the completeness and quality of the information reported. We made a number of recommendations to OMB and Treasury to clarify guidance to help agencies fully comply with DATA Act requirements and report accurate data and to disclose known data quality issues.
OMB and Treasury generally agreed with our recommendations. Once the accuracy of these data is improved, federal managers should be better able to make data-driven decisions to address ongoing government management challenges and improve the effectiveness and efficiency of government programs.
Effective Grants Management Benefits from Collaboration and Consultation
The process of distributing federal assistance through grants is complicated and involves many different parties—both public and private—with different organizational structures, sizes, and missions. A lack of collaboration among and between federal agencies, state and local governments, and nongovernmental grant participants presents a challenge to effective grants implementation. Given the complexity of managing intergovernmental grants, collaboration among the grant participants, particularly with regard to information sharing, is an important factor in effective grants management. For example, one of the lessons learned in our work on the American Recovery and Reinvestment Act of 2009 (Recovery Act) is that increased accountability requirements and aggressive timelines require coordination—both vertically among levels of government and horizontally within the same level of government—to share information and work toward common goals during implementation. Intra- and intergovernmental networks facilitated efforts to spend grant funds efficiently and effectively and to achieve the purposes of the act. Our work on interagency grants management reform initiatives also found that inadequate ongoing communication with grantees sometimes resulted in poor implementation and prioritization of initiatives. Our 2014 work on the Recovery Act illustrated how agencies can effectively approach ongoing communication. For example, the developers of Recovery.gov used input from user forums, focus groups, and usability testing with interested citizens to collect feedback and recommendations. This information then informed the development of the website from its initial stages. More recently, in our 2014 work on the DATA Act, we noted OMB and Treasury efforts to allow the public to share their views and comment on the development of federal data standards.
Identifying Fragmentation, Overlap, and Duplication Could Result in Greater Efficiencies
Our prior work has shown that numerous federal grant programs created over time without coordinated purposes and scope can result in grants management challenges. Addressing these challenges may achieve cost savings and result in greater efficiencies in grant programs. Our work has underscored the importance of identifying fragmentation, overlap, or duplication in a number of federal programs, including grants management practices. For example, in January 2017, we found that the National Park Service, Fish and Wildlife Service, Food and Nutrition Service, and Centers for Disease Control and Prevention had not established guidance and formal processes to avoid duplication and overlap among grants in their agencies before awarding grants. We recommended that these agencies do so, and they agreed. In response, these agencies have taken a number of actions to address the recommendation. For example, the Department of the Interior provided us documentation showing that the Fish and Wildlife Service now requires that discretionary grant applicants provide a statement that addresses whether there is any overlap or duplication of proposed projects or activities to be funded by the grant.
The Fish and Wildlife Service also updated its guidance to grant-awarding offices, instructing them to perform a potential overlap and duplication review of all selected applicants prior to making grant awards.
Strong Internal Controls and Oversight Facilitate Effective Use of Grant Funds
Our prior work has shown that when awarding and managing federal grants, effective oversight and internal control are important to provide reasonable assurance to federal managers and taxpayers that grants are awarded properly, recipients are eligible, and federal grant funds are used as intended and in accordance with applicable laws and regulations. Internal control comprises the plans, methods, and procedures agencies use to be reasonably assured that their missions, goals, and objectives can be met. In numerous reviews, we and agency inspectors general have identified weaknesses in agencies' internal controls for managing and overseeing grants. Specifically, we found that when such controls are weak, federal grant-making agencies face challenges in achieving grant program goals and assuring the proper and effective use of federal funds to help avoid improper payments. Our work has identified grants oversight and accountability issues that span the government, including undisbursed grant award balances, late single audit submissions, and significant levels of improper payments in grant programs. Key grants management challenges related to internal controls and oversight that we have identified include:
Timeliness of grant closeouts. Federal grant-making agencies must close out grants when the grantee's period of performance has ended in order to ensure that grantees have met all financial requirements and provided final reports as required. Closing out grants also allows agencies to identify and redirect unused funds to other projects and priorities as authorized or to return unspent balances to the Treasury. Expired grant accounts, and, in some cases, the undisbursed balances associated with them, have persisted as an issue for agencies, as we reported in 2008, 2012, and 2016. In January 2016, the Grants Oversight and New Efficiency Act (GONE Act) was signed into law. The act, passed in part in response to our work, required government-wide reporting of undisbursed balances in expired grant accounts. The GONE Act requires that agencies report on the grants for which the grantee's period of performance has been expired for more than 2 years, including those with undisbursed balances and those with zero dollar balances remaining in the accounts. In the fall of 2017, many agencies included in their annual Agency Financial Reports an appendix providing information required by the GONE Act. For example, the Department of Health and Human Services (HHS) reported almost $2 billion in undisbursed funds remaining in 16,603 grant accounts that were 2 years or more past their periods of performance, as well as 6,512 grant accounts that had no funds remaining in them. HHS grant officials told us that they intend to close as many of these grant accounts as possible during this fiscal year.
Timely submission of single audits. As we have previously reported, one key way that federal agencies oversee nonfederal grantees is through an audit of their expenditures of federal awards, referred to as a single audit. The single audit is an audit of the award recipient's expenditure of federal awards and of its financial statements.
A single audit can identify deficiencies in the award recipient's compliance with the provisions of laws, regulations, contracts, or grant agreements and in its financial management and internal control systems. Correcting such deficiencies can help reasonably assure the effective use of federal funds and reduce federal improper payments. In 2017, we reported that of the five departments we reviewed—the Departments of Agriculture, Education, HHS, Housing and Urban Development, and Transportation—some of the departments' subagencies did not effectively design policies and procedures to reasonably assure the timely submission of single audit reports by award recipients. In this report, we made 21 recommendations to these departments. Some action has been taken to date in response to these recommendations.
Avoiding improper payments of federal grants. As we have previously reported, improper payments—payments that should not have been made or that were made in an incorrect amount—have consistently been a government-wide issue. Since fiscal year 2003—when certain agencies were required by statute to begin reporting estimated improper payments for certain programs and activities—cumulative improper payment estimates have totaled about $1.4 trillion. Our reviews of Medicaid, a joint federal-state health care program and significant source of federal grant funding to state governments, have shown that the program is particularly vulnerable to improper payments, given its size, diversity, and complexity. For example, Medicaid accounted for more than 26 percent ($36.7 billion) of the nearly $141 billion government-wide improper payment estimate in fiscal year 2017. We have also reported that federal spending for Medicaid is expected to significantly increase, so it is especially critical that appropriate measures be taken to reduce improper payments in this program.
Opportunities to Effectively Advance Current Grants Management Initiatives
Recent and proposed legislative- and executive-sponsored initiatives aimed at grants management reform present opportunities to improve the efficiency, effectiveness, and transparency of federal grants. Our work on the design and implementation of merit-based grant award selection and initiatives to manage for results across the federal government has highlighted a number of key features necessary to effectively implement such crosscutting initiatives. Those features include:
Establishing implementation goals and tracking progress. Our work highlighted the importance of establishing an implementation schedule and tracking progress toward priorities to help pinpoint performance shortfalls and suggest midcourse corrections, including any needed adjustments to future priorities and milestones.
Identifying and agreeing on leadership roles and responsibilities. Our work has shown that when interagency councils clarify who will do what, identify how to organize their joint and individual efforts, and articulate steps for decision making, they enhance their ability to work together and achieve results.
Developing an effective communication strategy. We reported on the importance of two-way communication that allows for feedback from relevant stakeholders. For example, our work showed that grantees felt that a lack of opportunities to provide timely feedback resulted in poor implementation and prioritization of streamlining initiatives and limited grantees' use and understanding of new functionality of electronic systems.
In addition, given the number and diversity of grantor agencies and grantmaking programs, we believe it is important that any grant reform initiative integrate with other government-wide reform efforts on related issues. One such reform initiative is the President's Management Agenda (PMA), which lays out a long-term vision for modernizing the federal government and improving the ability of agencies to achieve results. The PMA identified a set of cross-agency priority (CAP) goals to target areas where multiple agencies must collaborate to effect change and report progress in a manner the public can easily track. According to the PMA, one of the goals included in the agenda—the Results-Oriented Accountability for Grants CAP goal—is intended to maximize the value of grant funding by applying a risk-based, data-driven framework that balances compliance requirements with demonstrating successful results for taxpayers. The PMA further states that this CAP goal seeks to standardize grant reporting data and improve data collection in ways that will increase efficiency, promote evaluation, and reduce reporting burden. Effectively advancing results-oriented accountability for grants will require that implementation of this CAP goal move forward in tandem with related efforts to implement the DATA Act and advance the use of evidence to inform grant policy, highlighted below:
DATA Act implementation. As our work has shown, the DATA Act will continue to be a critical driver of grants management change and reform. When fully implemented, the act will improve the accountability and transparency of federal spending data by (1) establishing government-wide financial data standards so that data are comparable across agencies and (2) holding federal agencies more accountable for the quality of the information disclosed. Such increased transparency provides opportunities for improving the efficiency and effectiveness of federal spending; increasing the accessibility of data to benefit the public and the business community; and improving oversight to prevent and detect fraud, waste, and abuse of federal funds. As efforts to implement the DATA Act move forward, we will continue to monitor implementation efforts and coordinate our efforts with agency inspectors general.
Evidence-based policy. To better integrate evidence and rigorous evaluation in budget, management, operational, and policy decisions, OMB has encouraged federal agencies to expand or improve the use of grant program designs that focus federal dollars on effective practices while encouraging innovation in service delivery. For example, OMB's efforts to foster a culture of evidence-based policy resulted in several federal agencies' implementation of tiered evidence grant programs. Under this approach, agencies establish tiers of grant funding based on the level of evidence of effectiveness provided for a grantee's service model. Agencies award smaller amounts to promising service models with a smaller evidence base, while providing larger amounts to those with more supporting evidence. In our 2016 report, we recommended that OMB establish a formal means for federal agencies to collaborate on tiered evidence grants. In response, in 2017, OMB launched the Tiered Evidence Grants Working Group to collaborate and share lessons learned, for example, on the use and dissemination of evaluation results. These efforts should complement each other. A lack of integration could result in duplication of effort or run the risk of working at cross-purposes.
For example, the integration of the Results-Oriented Accountability for Grants CAP goal with ongoing DATA Act implementation and efforts to advance evidence-based approaches to federal grant funding and administration presents a complex governance challenge. In conclusion, designing and implementing grants management policies that strike an appropriate balance between ensuring accountability for the proper use of federal funds and avoiding increased complexity and cost of grants administration for agencies and grantees is a longstanding governance challenge. As the initiatives above demonstrate, meeting this challenge and successfully implementing grants management reforms will require intragovernmental coordination at the federal level, intergovernmental collaboration with state and local governments and other partners, and ongoing integration to ensure that grants management reforms and related DATA Act and evidence-based policy implementation efforts are complementary and do not exist in separate silos. We look forward to continuing our ongoing work to review implementation of the CAP goals, the DATA Act, and the infusion of evidence-based policy in federal grant programs. We also look forward to working with this and other committees as we assist Congress in identifying additional opportunities to advance grants management reform through reviews of individual grant programs and crosscutting analysis of grant implementation and grants management reform efforts. Chairman Palmer, Ranking Member Raskin, and members of the Subcommittee, this concludes my prepared remarks. I look forward to answering any questions you may have.
GAO Contact and Staff Acknowledgments
For questions about this statement, please contact me at (202) 512-6806 or sagerm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony were Brenda Rabinowitz and Tom James, Assistant Directors; and Alexandra Edwards, Julie Miller, Andrew J. Stephens, and Walter Vance.
Related GAO Products
The Nation's Fiscal Health: Action Is Needed to Address the Federal Government's Fiscal Future. GAO-18-299SP. Washington, D.C.: June 21, 2018.
Improper Payments: Actions and Guidance Could Help Address Issues and Inconsistencies in Estimation Processes. GAO-18-377. Washington, D.C.: May 31, 2018.
2018 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-18-371SP. Washington, D.C.: Apr. 26, 2018.
DATA Act: OMB, Treasury, and Agencies Need to Improve Completeness and Accuracy of Spending Data and Disclose Limitations. GAO-18-138. Washington, D.C.: Nov. 8, 2017.
Managing for Results: Further Progress Made in Implementing the GPRA Modernization Act, but Additional Actions Needed to Address Pressing Governance Challenges. GAO-17-775. Washington, D.C.: Sept. 29, 2017.
Single Audits: Improvements Needed in Selected Agencies' Oversight of Federal Awards. GAO-17-159. Washington, D.C.: Feb. 16, 2017.
High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: Feb. 15, 2017.
Grants Management: Selected Agencies Should Clarify Merit-Based Award Criteria and Provide Guidance for Reviewing Potentially Duplicative Awards. GAO-17-113. Washington, D.C.: Jan. 12, 2017.
Tiered Evidence Grants: Opportunities Exist to Share Lessons from Early Implementation and Inform Future Federal Efforts. GAO-16-818. Washington, D.C.: Sept. 21, 2016.
Federal Research Grants: Opportunities Remain for Agencies to Streamline Administrative Requirements. GAO-16-573. Washington, D.C.: June 22, 2016.
Managing for Results: OMB Improved Implementation of Cross-Agency Priority Goals, But Could Be More Transparent about Measuring Progress. GAO-16-509. Washington, D.C.: May 20, 2016.
DATA Act: Section 5 Pilot Design Issues Need to Be Addressed to Meet Goal of Reducing Recipient Reporting Burden. GAO-16-438. Washington, D.C.: Apr. 19, 2016.
Grants Management: Actions Needed to Address Persistent Grant Closeout Timeliness and Undisbursed Balance Issues. GAO-16-362. Washington, D.C.: Apr. 14, 2016.
Federal Data Transparency: Effective Implementation of the DATA Act Would Help Address Government-wide Management Challenges and Improve Oversight. GAO-15-241T. Washington, D.C.: Dec. 3, 2014.
Managing for Results: Implementation Approaches Used to Enhance Collaboration in Interagency Groups. GAO-14-220. Washington, D.C.: Feb. 14, 2014.
Recovery Act: Grant Implementation Experiences Offer Lessons for Accountability and Transparency. GAO-14-219. Washington, D.C.: Jan. 24, 2014.
Grant Workforce: Agency Training Practices Should Inform Future Government-wide Efforts. GAO-13-591. Washington, D.C.: June 28, 2013.
Grants Management: Oversight of Selected States' Disbursement of Federal Funds Addresses Timeliness and Administrative Allowances. GAO-13-392. Washington, D.C.: Apr. 16, 2013.
Grants Management: Improved Planning, Coordination, and Communication Needed to Strengthen Reform Efforts. GAO-13-383. Washington, D.C.: May 23, 2013.
Grants to State and Local Governments: An Overview of Federal Funding Levels and Selected Challenges. GAO-12-1016. Washington, D.C.: Sept. 25, 2012.
Grants Management: Action Needed to Improve the Timeliness of Grant Closeouts by Federal Agencies. GAO-12-360. Washington, D.C.: Apr. 16, 2012.
Grants Management: Attention Needed to Address Undisbursed Balances in Expired Grant Accounts. GAO-08-432. Washington, D.C.: Aug. 29, 2008.
Grants Management: Grantees' Concerns with Efforts to Streamline and Simplify Processes. GAO-06-566. Washington, D.C.: July 28, 2006.
Grants Management: Additional Actions Needed to Streamline and Simplify Processes. GAO-05-335. Washington, D.C.: Apr. 18, 2005.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
Federal outlays for grants to state and local governments totaled more than $674 billion in fiscal year 2017, equivalent to 3.5 percent of the gross domestic product in that year. GAO's previous work has found that growth in both the number of grant programs and the level of funding has increased the diversity of federal grants to state and local governments. GAO's work has also found that designing and implementing grants management policies that strike an appropriate balance between ensuring accountability for the proper use of federal funds and avoiding increased complexity and cost of grants administration for agencies and grantees presents a governance challenge. At the same time, several government-wide initiatives hold promise for advancing the transparency, efficiency, and effectiveness of federal grants. This statement is based on GAO's prior reports on federal grants management and crosscutting issues related to managing for results across the federal government issued between 2005 and 2018. It addresses: (1) GAO's observations on long-standing challenges for federal grants management, and (2) opportunities to effectively advance current grant modernization initiatives.
What GAO Found
GAO has identified challenges to federal grants management in its work spanning several decades. These challenges include:
Streamlining: Grants management requirements that are duplicative, unnecessarily burdensome, and conflicting require agencies to direct resources toward meeting them and can burden recipients of federal grants. GAO has reported on initiatives to streamline these requirements and address challenges grantees encounter throughout the grants lifecycle.
Transparency: The Digital Accountability and Transparency Act of 2014 (DATA Act) required the Office of Management and Budget, the Department of the Treasury, and other federal agencies to increase the types of information available on federal spending, including grants. GAO has reported on progress in standardizing and expanding reported data, but has found inconsistencies with the completeness and quality of the reported information.
Collaboration and consultation: Collaboration, particularly information sharing, is an important factor in effective grants management. GAO's work on interagency grants management reform initiatives found that inadequate ongoing communication with grantees sometimes resulted in poor implementation and prioritization of initiatives.
Duplication, overlap, and fragmentation: Agencies' grants management practices, such as requirements to avoid duplication and overlap among grants before awarding them, can help agencies achieve cost savings and result in greater efficiencies in grant programs.
Internal controls and oversight: GAO's work has identified weaknesses in grants oversight and accountability. For example, GAO has identified opportunities for agencies to more consistently close out grants when the grantee's period of performance has ended to ensure that grantees have met all requirements and to identify opportunities to redirect or return unused funds.
Recent and proposed initiatives aimed at grants management reform present opportunities to improve the efficiency, effectiveness, and transparency of federal grants.
GAO's work on federal grants management and managing for results has highlighted a number of key features for effectively implementing such crosscutting initiatives, which include: (1) establishing implementation goals and tracking progress, (2) identifying and agreeing on leadership roles and responsibilities, and (3) developing an effective communication strategy. Further, given the number and diversity of grantor agencies and grant programs, it is important that any grant reform initiative integrate with other government-wide reform efforts on related issues across government, such as the grants-related Cross-Agency Priority goal, implementation of the DATA Act, and initiatives related to evidence-based policy. These efforts can be effective if they complement each other rather than run the risk of operating independently and potentially duplicating effort or working at cross-purposes.
Background
IHS Health Care System and Tribal Health Care
The Indian Health Service (IHS) was established within the Public Health Service in 1955 to provide certain health services to members of federally recognized American Indian and Alaska Native (AI/AN) tribes, primarily in rural areas on or near reservations. IHS provides services directly through a network of hospitals, clinics, and health stations operated by IHS, and also funds services provided at tribally operated facilities. As of October 2017, IHS, tribes, and tribal organizations operated 168 service units, 48 hospitals, and 560 ambulatory care centers—including health centers, school health centers, health stations, and Alaska village clinics. See table 1. According to IHS officials, the agency provides services almost exclusively in locations designated as Health Professional Shortage Areas, with most locations identified as extreme shortage areas. In addition, IHS data indicate that about 35 percent of certain IHS facilities, including four hospitals, were identified as isolated hardship posts in 2016. IHS oversees its health care facilities through a decentralized system of 12 area offices, which are led by area directors; 10 of these 12 IHS areas have federally operated IHS facilities. IHS's headquarters office is responsible for setting health care policy, helping to ensure the delivery of quality comprehensive health services, and advocating for the health needs and concerns of AI/AN people. The IHS area offices are responsible for distributing funds to the facilities in their areas, monitoring their operation, and providing guidance and technical assistance. IHS's estimated budget authority for fiscal year 2018 is over $5.6 billion, an increase of almost $580 million from its enacted budget authority of just over $5 billion in fiscal year 2017. IHS has agreements with tribes and tribal organizations by which it transfers a substantial portion of its budget authority to them. For example, in 2017, the agency transferred approximately 54 percent of its total budget authority to tribes and tribal organizations to operate part or all of their own health care programs through self-determination contracts and self-governance compacts.
Self-determination contracts: IHS had 373 self-determination contracts in place with 220 tribes in 2017.
Self-governance compacts: IHS had 98 self-governance compacts in place—including 124 funding agreements—with 360 tribes in 2017.
See figure 1 for the percentage of IHS's total budget authority transferred to tribes in fiscal year 2017. According to IHS officials, over the last few years an increasing number of tribes have sought to enter into contracts and compacts with IHS to assume responsibility for some or all of their health care programs, and thereby receive funding from IHS.
Federal Budget Environment
Unless otherwise specified in law, funding included in annual appropriation acts is available for obligation during a single fiscal year, after which it expires. For this reason, the continuation of normal government operations depends upon the enactment each fiscal year of a new appropriations act. Any lapse in appropriations—a funding gap—causes most government functions to shut down. To avert a government shutdown, Congress may enact one or more continuing resolutions (CR). CRs are spending bills that provide funds to allow agencies to operate during a specified period of time while Congress works to pass an annual appropriations act. Relevant aspects of the federal budget environment include the following.
Frequency of CRs and shutdowns.
In all but 4 of the last 40 fiscal years—including fiscal year 2018—Congress has enacted CRs. Since fiscal year 1999, CRs have varied greatly in their number and duration—the number of CRs enacted each year has ranged from 2 to 21, and the duration of CRs has ranged from 1 to 187 days. Regarding lapses in appropriations that resulted in government shutdowns, in January 2018 the government partially shut down for 3 calendar days after the CR in place expired. Other shutdowns have lasted longer—16 calendar days in October 2013 and 21 calendar days in December 1995 through January 1996. We have previously reported on the effects of CRs and shutdowns for federal agencies.
Budget authority during a CR. CRs provide "such amounts as may be necessary" to maintain operations consistent with the prior fiscal year's appropriations and authorities. To control spending in this manner, CRs generally prohibit agencies from initiating new activities and projects for which appropriations, funds, or other authorities were not available in the prior fiscal year. They also require agencies to take the most limited funding actions necessary to maintain operations at the prior fiscal year's level.
Budget authority during a funding gap. Certain federal health care programs have various budget authorities that can allow for continued operations during a funding gap. For example, the Department of Veterans Affairs' (VA) advance appropriation authority for its health care programs allows operations to continue after one appropriation expires, using the previously enacted budget for the next year. Although IHS does not have this authority, Congress has enacted longer periods of availability for certain IHS appropriations that would allow the activities they support to continue during a funding gap, assuming the appropriation has not run out. For example, IHS's appropriation for Indian health facilities remains available until expended, in contrast to its appropriation for Indian health services, which is generally available for a single fiscal year. In this regard, funds for Indian health services that IHS transfers to tribes and tribal organizations during the 1-year period of availability are deemed to be obligated at the time of the award and thereafter remain available to the tribes to operate their own health care programs without fiscal year limitation. Thus, to the extent sufficient funding remained available from federal or other sources during a lapse in appropriations, a tribe could continue to operate its own health care programs during a shutdown. To operate IHS's health care system on an emergency basis during a funding gap, IHS would need to determine what programs and activities qualify for an emergency exception under the law.
Contingency planning for government shutdowns. Federal agencies must determine what activities and programs they are permitted or required to continue prior to a potential shutdown. This includes designating certain employees as "excepted" employees who would be expected to continue to work during the shutdown and who would be paid upon the enactment of an appropriation. Employees who are not "excepted" would be subject to furlough.
Interest in Advance Appropriation Authority for IHS
Citing funding uncertainty associated with continued use of CRs, AI/AN advocacy groups such as the National Indian Health Board have requested that Congress grant IHS advance appropriation authority; legislation to provide IHS this authority has been introduced more than once. The most recent such legislation, H.R.
235, introduced in January 2017 (not enacted), would have provided IHS with 2-fiscal-year budget authority for its Indian health services and Indian health facilities accounts, similar to the authority that VA currently has for its health care appropriation accounts. HHS, on behalf of IHS, has not requested that IHS be granted advance appropriation authority in its annual budget submissions to Congress.
VA's Advance Appropriation Authority for Health Care
VA, through the Veterans Health Administration (VHA), operates one of the nation's largest health care systems, with 171 VA medical centers, more than 1,000 outpatient facilities, and total health care budget authority of about $69 billion in fiscal year 2017. VA provided health care services to about 6.8 million veterans in fiscal year 2017, and the agency forecasts that demand for its services will grow in the coming years. VA was granted advance appropriation authority for specified VHA medical care accounts in 2009. Currently, VA's annual appropriations for health care include advance appropriations that become available in the fiscal year after the fiscal year for which the appropriations act was enacted. Under this authority, VA receives advance appropriations for VHA's Medical Services, Medical Support and Compliance, Medical Facilities, and Medical Community Care appropriations accounts and is required to provide Congress with detailed estimates of funds needed to provide its health care services for the fiscal year for which advance appropriations are to be provided. According to VA officials, veterans service organizations were the primary advocates who sought advance appropriation authority for VA's health care program. In its health care budget proposal each year, VA submits a request for the upcoming fiscal year, as well as an advance appropriation request for the following year. In early 2018, for example, VA submitted a request for fiscal year 2019, as well as a fiscal year 2020 advance appropriation request. According to VA, more than 90 percent of its budget request is developed using an actuarial model that is based in part on VA's actual health care utilization data from prior years; for example, the 2020 advance appropriation request used fiscal year 2016 data. VHA officials said that the agency calculates its advance appropriation request to fund needed care as estimated by its actuarial model, with less funding requested for other expenses (such as non-recurring maintenance), and they told us this approach is consistent with direction provided by OMB. OMB officials told us that the amount provided in the advance appropriation is intended to provide VA with some assurance that it will be able to continue health care operations seamlessly across fiscal years. In the subsequent year (the year during which the advance appropriation can be used), VA may request an adjustment to the amount previously provided through advance appropriations—referred to by agency officials as a "second bite"—an arrangement that is intended to help respond to more recent policy changes or significant events. For example, for fiscal year 2018, VA requested a "second bite" increase of $2.65 billion above the $66.4 billion initially provided to its VHA accounts through its advance appropriation. Both OMB and VHA officials said this "second bite" provides an opportunity to adjust VA's advance appropriation using updated utilization data.
VHA officials told us that changes in policy (such as determining which veterans or what health benefits can be covered) sometimes drive changes from the initial budget request. For example, policy changes can include adding an additional presumptive condition—such as health conditions associated with Agent Orange exposure—resulting in a new health benefit, or a costly new drug treatment, as in the case of the addition to the drug formulary of a new Hepatitis C drug treatment. Despite having advance appropriation authority, VA has faced challenges in budget formulation, in addition to the general management and oversight challenges we cited in adding VA to our High-Risk List in 2015. Specifically, we reported in our 2017 update to the High-Risk List that VA faces challenges regarding the reliability, transparency, and consistency of its budget estimates for medical services, as well as weaknesses in tracking obligations for medical services and estimating budgetary needs for future years. These challenges were evident in June 2015, when VA requested authority from Congress to move funds from another appropriation account because agency officials projected a fiscal year 2015 funding gap of about $3 billion in its medical services appropriation account. Budget Uncertainty Effects on the Provision of IHS- Funded Health Care That Were Cited by Stakeholders IHS officials, tribal representatives, and other stakeholders we spoke with described how budget uncertainty resulting from CRs and government shutdowns can have a variety of effects on the provision of IHS-funded health care services for AI/ANs. The following summarizes these effects, along with the views of IHS officials, tribal representatives, and other stakeholders on how advance appropriation authority could mitigate them, and VA’s related experiences: Provision of health care services. IHS officials said that, in general, most health care services would be expected to continue at IHS-operated facilities during a shutdown, as health care providers would be deemed “excepted” personnel under the agency’s contingency plan. However, officials noted some health care procedures could be delayed, as determined on a case-by-case basis at the local level. IHS officials also acknowledged that tribal health care programs may not have access to furloughed IHS staff who do not work during a shutdown, such as support staff at local IHS area offices, who may carry out administrative duties on their behalf. For example, tribal representatives told us that during a previous government shutdown, finance employees from the local IHS area offices were furloughed (and thus not permitted to work), which created challenges for tribal health care operations that depended on these IHS employees to process payments and agreements. IHS officials stated they believe advance appropriations could help ensure continuity of health care services through certainty of funding. IHS officials also said that while lapses in appropriations do not halt patient care, they do create complications—such as the determination of excepted personnel as described above—that could be eliminated by funding provided through advance appropriations. Tribal representatives said the certainty of funding that would come with IHS having advance appropriations would create a sense of stability in tribal health care programs as well. 
VA VISN officials we spoke to said having advance appropriations has improved their ability to manage resources for continuity of services and allowed them to avoid the substantial additional planning that occurs before a potential government shutdown when agencies are determining which providers and staff would be deemed excepted. According to the VISN officials, knowing that funding is coming—as opposed to having less certainty—would allow an agency to plan and prioritize its services more efficiently. Health care program planning. Tribal representatives said operating health care programs with short-term funding provided through a series of CRs—and facing potential government shutdowns—rather than a full year’s apportionment hinders their ability to plan for new programs and for improvements that need to be carried out across budget years or that require large up-front investments, such as an electronic medical records system or other significant information technology purchases. Tribal representatives said there are often plans that they have to set aside because they don’t have enough funds to start a project during a CR, and—if there are multiple CRs—there is not enough time left in the budget year to start bigger projects once an annual appropriation is passed. Tribal representatives also told us that they believe that advance appropriations would help tribal health care programs plan for current and future needs. For example, one tribal official told us advance appropriations would allow tribes to plan for long-term health initiatives. The official’s specific tribe has a gestational diabetes program in conjunction with a local university that the tribe could plan to take full responsibility for if they had more funding stability. VA VISN officials we interviewed provided several examples of how they believe advance appropriations facilitate their planning. For example, VISN officials told us advance appropriations allow them to plan strategically for equipment purchases: if they need to buy a CT scanner, they would plan to do site preparation in one year—for example, reconfiguring the space for the new equipment by moving walls, electrical rewiring, etc.—and buy the scanner in the next year. With advance appropriations, they know they are going to have funds for an expensive equipment purchase available the next year; without an advance appropriation, they would not be sure, and could spend funds on preparation and then ultimately not have the funds to make the equipment purchase. These officials also said having advance appropriations gave them confidence in making current plans to provide the new shingles vaccine for their over-50 population in 2019, including the ability to secure an adequate supply of the vaccine from the manufacturer. Provider recruitment and retention. IHS officials and tribal representatives said existing challenges related to their recruitment and retention of health care providers—many of which are related to the rural and remote locations of many of IHS’s facilities—are exacerbated by funding uncertainty resulting from CRs or potential government shutdowns. IHS officials said CRs and government shutdowns can disrupt recruitment activities such as IHS marketing efforts, job advertisements, application review, interviews, and candidate site visits. 
Additionally, when recruiting health care providers, IHS officials said CRs and potential government shutdowns create doubt about the stability of employment at IHS amongst potential candidates, which may result in reduced numbers of candidates or withdrawals from candidates during the pre-employment process. IHS officials said that many providers in rural and remote locations are the sole source of income for their families, and the potential for delays in pay resulting from a government shutdown can serve as a disincentive for employees considering public service in critical shortage areas that do not offer adequate spousal employment opportunities. Tribal representatives said CRs create challenges for tribes in funding planned pay increases—such as cost-of-living adjustments— for health care staff at their facilities, and they may, as a result, defer increases. IHS officials and tribal representatives stated they believe advance appropriations could mitigate these challenges. For example, IHS officials said that with advance appropriations, recruitment and outreach activities could continue without disruption, and selected candidates could be brought on board as scheduled. One tribal representative stated that advance appropriations could help with recruitment by providing perceived job stability that is similar to VA or the private sector. According to VA VISN officials, the agency’s experience with advance appropriation authority suggests that advance appropriations can facilitate physician recruitment, including hiring. If, for example, they were far along in the hiring process at the end of a fiscal year, but could not finalize the hire before the end of the year, having advance appropriations for the next fiscal year provides the certainty that they will be able to make the hire in the new fiscal year. Commercial contracts and vendor negotiations. IHS officials and tribal representatives said budget uncertainty can lead to vendor reluctance to provide services to IHS and tribally operated facilities. IHS officials said they have heard from vendors—who are typically Indian- or veteran- owned small businesses in the communities being served by IHS—that they lose trust in IHS and federally-funded tribal health care programs when they are affected by budget uncertainty. One tribal organization told us delays in receiving full funding because of CRs has inhibited its ability to pay invoices for pharmaceuticals in a timely manner, which has harmed its relationship with its vendors. VISN officials told us that advance appropriations can provide an element of stability to agency funding that may serve to reassure potential vendors. According to VISN officials, vendors can be hard to find in remote and rural areas, and their perception of funding certainty can play a role in encouraging their participation as government contractors. As contracting with the federal government can be burdensome, particularly for smaller vendors, VISN officials said, any measures—such as advance appropriations—that could enhance the stability of agency contracting could make these vendors more likely to participate in government contracting. Administrative burden and costs. IHS officials and tribal representatives said the agency and tribes incur additional administrative burden and costs when the government is funded through multiple CRs, due to the high proportion of IHS funding that is transferred to tribes through contracts and compacts. 
Specifically, IHS officials said there is an additional administrative burden generated by each CR that results in the distribution of funds to tribes. For each CR period, IHS headquarters staff generate proportional funding allotments, which they provide to individual area offices, which then also conduct processing activities to generate payments from these allotments to the tribes in their areas. As part of this process, IHS officials said they modify hundreds of tribal contracts and make amendments to funding agreements associated with tribal compacts, and those efforts represent a significant administrative burden for IHS staff. Tribal representatives also described administrative burden associated with CRs. As one representative of a group representing several tribes told us, each CR requires the same processing and manpower for each partial payment as for a full apportionment, and moreover, CRs require tracking and reconciliation that is not necessary for a single, full apportionment. IHS officials and tribal representatives noted that time and money spent on these additional administrative activities detract from other priorities, including patient care. IHS officials said that advance appropriations would reduce this administrative burden, and added that having advance appropriations would allow for more efficiency in processing payments to tribes. IHS officials suggested that the agency would have to do less administrative work overall, because currently, under a single year appropriation (with recurrent CRs), they may modify or amend agreements 7 or 8 times within a fiscal year. Although acknowledging that advance appropriation authority would entail the additional burden of preparing budget requests for more than one fiscal year, they expect this administrative burden to be less than those under repeated CRs. Financial effects on tribes. According to tribal representatives we spoke with, funding uncertainty from recurring CRs and from government shutdowns has led to particular adverse financial effects on tribes that operate their own health care programs with funding from IHS. For example, according to tribal representatives, Funding uncertainty surrounding a CR results in more expensive commercial loans (with higher interest rates) to finance construction of new health care facilities. Specifically, a tribal representative said the uncertainty of the availability of funds due to a CR resulted in a downgrading of the tribe’s credit rating, and hence higher interest rates, as it was planning a clinic expansion. During a government shutdown, some tribes must redistribute funds from other budget categories to replace health care funding from IHS in order to continue providing health care services. Some tribes have economic development activities that provide additional funding and facilitate this redistribution, but others do not. For example, one tribal organization said that during the 2013 government shutdown, it had to take out loans and maintain a line of credit in order to pay for services and make payroll. Subsequently, that tribal organization had to pay interest on those loans, causing greater financial hardship. Tribes attempt to mitigate the challenge of not knowing their final annual payment from IHS under recurrent CRs by keeping extra funds in reserve for emergencies, which limits the remaining funds available for providing health care services. 
Short-term funding under CRs or delayed funding after a lapse in appropriations can limit the ability of tribes and tribal organizations to invest funds from IHS and generate interest that can be reinvested in tribal health care programs. CRs have affected the ability of tribes to reduce costs by planning for bulk purchases at favorable rates. For example, some tribes in Alaska prefer to make bulk purchases of heating oil during "barge season"—when waterways are still navigable and not frozen. If they do not have enough money for a bulk purchase because of a CR's limited funding, they must purchase fuel in smaller quantities, which is ultimately significantly more expensive. Tribal representatives told us one beneficial financial effect of advance appropriations for tribes could be providing opportunities for longer-term contracts with vendors, which could result in cost savings that could be used for tribal health care programs. Considerations for Policymakers Related to Providing Advance Appropriation Authority to IHS We identified three types of considerations for policymakers related to providing advance appropriation authority to IHS—operational, congressional flexibility, and agency capacity and leadership considerations. We identified these considerations based on a review of our 2009 testimony that examined considerations for granting VA advance appropriation authority, in which we identified key questions that would be applicable to any agency being granted such authority, and our interviews with VA, IHS, and other officials. In our 2009 testimony, we noted that proposals to change the availability of the appropriations for VA deserved careful scrutiny, given the challenges the agency faces in formulating its health care budget and the changing nature of health care. Similar considerations would apply to IHS. Operational considerations. If Congress were to grant IHS advance appropriation authority, it would need to make operational decisions regarding what amount of IHS funding would be provided in advance appropriations, with input from OMB and IHS as appropriate. Specifically, Congress could consider the following questions: (1) What proportion of IHS's estimated budget would be provided in the advance appropriation—the full amount, or less (as is the case for VA)? Which appropriations accounts would be included? Further, would funds intended for transfer to tribes be handled differently? (2) Under what conditions, if any, would there be changes to funding provided through advance appropriations during the next budget cycle? For example, would Congress expect to adjust the advance appropriation amount through a "second bite," as is the case with VA? Congressional flexibility considerations. We reported in 2009 that consideration of any proposal to change the availability of the appropriations VA receives for health care should take into account the impact of any change on congressional flexibility and oversight. These same considerations hold merit regarding potential changes to the appropriation status of any federal agency, including IHS. Specifically, advance appropriation authority reduces flexibility for congressional appropriators, because it reduces what is left for the overall budget for the rest of the government—meaning the total available for appropriations for a budget year is reduced by the amount of advance appropriations for that year, when budgets have caps. Agency capacity and leadership considerations.
IHS officials told us they believe the agency’s current budget planning processes would be adequate for estimating advance appropriation budget requests, because IHS begins planning for its budget request 3 years in advance. Officials added that IHS plans its budget so far in advance to have sufficient time to work with tribes in formulating recommendations for its budget request. IHS officials said that a downside to planning so far in advance is that they do not necessarily have the most current information while formulating the budget request. In addition, we noted prior to VA receiving advance appropriation authority that advance appropriation authority could potentially exacerbate existing challenges when developing or managing a budget, generally, due in part to the higher risk of uncertainty when developing estimates that are an additional 12 months out from the actual budget year (e.g., 30 months out instead of 18 months). We raised certain capacity and leadership concerns based on our previous work when we added IHS to our High-Risk List in 2017. Further, in June 2018, we found that while IHS had taken some actions to partially address these concerns, additional progress was needed to fully address these management weaknesses. For example, IHS still does not have permanent leadership—including a Director of IHS—which is necessary for the agency to demonstrate its commitment to improvement. Additionally, while the agency has made some progress in demonstrating it has the capacity and resources necessary to address the program risks we identified in our reports, there are still vacancies in several key positions, including in the Office of Finance and Accounting. While not directly related to consideration of advance appropriations, IHS’s high-risk designation and continuing challenges in mitigating the deficiencies in its program point to questions about the agency’s capacity to implement such a change to its budget formulation process. Agency Comments and Third-Party Views We provided a draft of this report to HHS and VA for review and comment. HHS did not have any comments. We received general comments from VA that are reprinted in appendix I. We also provided relevant draft portions of this report to NIHB, which represents tribal and AI/AN interests. NIHB provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretaries of the Department of Health and Human Services and the Department of Veterans Affairs, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or farbj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Veterans Affairs Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Kathleen M. King (Director), Karen Doran (Assistant Director), Julie T. Stewart (Analyst-in-Charge), Kristen J. Anderson, and Leonard S. Brown made key contributions to this report. Also contributing were Sam Amrhein, George Bogart, Christine Davis, and Vikki Porter.
Why GAO Did This Study IHS, an agency within the Department of Health and Human Services (HHS), receives an annual appropriation from Congress to provide health care services to over 2 million American Indians and Alaska Natives (AI/AN) who are members of 573 tribes. IHS generally provides services through direct care at facilities such as hospitals and health centers. Some tribes receive IHS funding to operate their own health care facilities. Tribal representatives have sought legislative approval to provide IHS advance appropriation authority stating that it would facilitate planning and more efficient spending. Experts have reported that agencies can use the authority to prevent funding gaps, and avoid uncertainties associated with receiving funds through CRs. House Report 114-632 included a provision for GAO to review the use of advance appropriations authority and applications to IHS. Among other things, this report (1) describes advance appropriation authority considerations identified by stakeholders for providing IHS-funded health care services, and (2) identifies other considerations for policymakers related to providing the authority to IHS. GAO reviewed its prior reports related to IHS, VA, government shutdowns, and CRs, and interviewed officials from IHS, several tribes and other organizations representing AI/AN interests, the Office of Management and Budget, VA and other experts. GAO provided a draft of this report to HHS, which had no comments; to VA, which provided general comments; and to tribal representatives, which provided technical comments that were incorporated as appropriate. What GAO Found The Indian Health Service (IHS), like most federal agencies, must use appropriations in the year for which they are enacted. However, there has been interest in providing IHS with advance appropriation authority, which would give the agency authority to spend a specific amount 1 or more fiscal years after the fiscal year for which the appropriation providing it is enacted. Currently, the Department of Veterans Affairs (VA) is the only federal provider of health care services to have such authority. Stakeholders interviewed by GAO, including IHS officials and tribal representatives, identified effects of budget uncertainty on the provision of IHS-funded health care as considerations for providing IHS with advance appropriation authority. Budget uncertainty arises during continuing resolutions (CR)—temporary funding periods during which the federal government has not passed a budget—and during government shutdowns. Officials said that advance appropriation authority could mitigate the effects of this uncertainty. IHS officials and tribal representatives specifically described several effects of budget uncertainty on their health care programs and operations, including the following: Provider recruitment and retention. Existing challenges related to the recruitment and retention of health care providers—such as difficulty recruiting providers in rural locations—are exacerbated by funding uncertainty. For example, CRs and government shutdowns can disrupt recruitment activities like application reviews and interviews. Administrative burden and costs. Both IHS and tribes incur additional administrative burden and costs as IHS staff calculate proportional allocations for each tribally operated health care program and modify hundreds of tribal contracts each time a new CR is enacted by Congress to conform to limits on available funding. Financial effects on tribes. 
Funding uncertainty resulting from recurring CRs and from government shutdowns has led to adverse financial effects on tribes and their health care programs. For instance, one tribe incurred higher interest on loans when the uncertainty of the availability of federal funds led to a downgraded credit rating, as it was financing construction of a health care facility. GAO identified various considerations for policymakers to take into account for any proposal to change the availability of the appropriations that IHS receives. These considerations include operational considerations, such as what proportion of the agency's budget would be provided in the advance appropriation and under what conditions changes to the funding provided through advance appropriations would be permitted in the following year. Additionally, congressional flexibility considerations arise because advance appropriation authority reduces what is left for the overall budget for the rest of the government. Another consideration is agency capacity and leadership, including whether IHS has the processes in place to develop and manage an advance appropriation. GAO has reported that proposals to change the availability of appropriations deserve careful scrutiny, an issue underscored by concerns raised when GAO added IHS to its High-Risk List in 2017.
Background Since 2001, DOD’s total workforce has changed in size and composition. DOD’s military, civilian, and contractor workforces peaked around 2011 and have since decreased in size, as shown in figure 1. Several factors have contributed to changes in the size of the workforces including varying levels of U.S. involvement in the conflicts in Iraq and Afghanistan, military to civilian and contractor conversions, contractor insourcing, and the growth in certain workforces such as acquisition and cyber. DOD’s management of its workforce is governed by several workforce management statutes, including sections 129, 129a, and 2463 of Title 10 of the United States Code. Section 129 directs that DOD civilian personnel be managed each fiscal year on the basis of, and consistent with, total-force management policies and procedures established under section 129a, the workload required to carry out the functions and activities of the department, and the funds made available to the department each fiscal year. Section 129a directs the Secretary of Defense to establish policies and procedures for determining the most appropriate and cost-efficient mix of military, civilian, and contracted services to perform the mission of the department. Finally, Section 2463 directs the Under Secretary of Defense for Personnel and Readiness to devise and implement guidelines and procedures to ensure that consideration is given to using, on a regular basis, DOD civilian employees to perform new functions and functions performed by contractors that could be performed by DOD civilian employees. DOD Instruction 1100.22, Policy and Procedures for Determining Workforce Mix (April 12, 2010) (Change 1, Dec. 1, 2017) establishes policy, assigns responsibilities, and prescribes procedures for determining the appropriate workforce mix of the military, civilian, and contracted services. The instruction provides criteria for workforce-mix decisions and directs DOD components to conduct a cost comparison to determine the low-cost provider for all new or expanding mission requirements and for functions that have been contracted but could be performed by DOD civilian employees. In addition, over the past 10 years DOD has taken steps to better understand the costs associated with its workforces. For example, we found in September 2013 that DOD had improved its methodology for estimating and comparing the full cost of work performed by military and civilian personnel and contractor support, but the methodology continued to have certain limitations, such as the lack of guidance for certain cost elements related to overhead. We made five recommendations, including for DOD to assess the advantages and disadvantages of allowing the continued use of different cost-estimation tools across the department or directing department-wide application of one tool, and revise its guidance in accordance with the findings of its assessment. DOD implemented this recommendation but has not yet implemented the other four recommendations although it concurred or generally concurred with them. DOD’s Cost- Comparison Report Addressed Most Elements in Senate Report 114-49 DOD’s Cost-Comparison Report addressed three elements and partially addressed one element concerning the accounting for the fully-burdened, or full, cost of federal civilian and service contractor personnel performing functions at the selected installations, as shown in table 1. 
DOD concluded that for the 21,000 federal civilians and service contractors compared, neither federal civilians nor service contractors were predominately more or less expensive, with the costs being dependent upon the function being performed, location, and level of expertise. DOD noted that the results were not generalizable across the department. Each of the elements and our assessment are discussed below. DOD Developed a Methodology to Assess Performance of Functions Being Performed by Federal Civilian and Service Contractor Personnel on Military Installations We believe that DOD addressed the reporting element to assess performance of functions performed by civilian and contractor personnel by developing a methodology to assess performance of functions performed by federal civilians and service contractors at organizations within nine geographic regions including two locations outside the continental United States. Organizations included in DOD's methodology include the following:
Fort Belvoir Community Hospital
Defense Threat Reduction Agency
US Army Intelligence and Security Command
Aviation and Missile Research, Development, and Engineering Center
Naval Medical Center San Diego
Space and Naval Warfare Systems Command
Ogden Air Logistics Complex
75th Air Base Wing
Naval Facilities Engineering Command
Tripler Army Medical Center
DOD's methodology included the following:
Selecting installations and organizations: DOD used data from the Defense Civilian Personnel Data System to identify military installations with large reported numbers of federal civilians. According to DOD officials, they eliminated from consideration those installations that had no reported contractors. From this subset of installations, DOD selected organizations to represent all three military departments and diverse geographical locations, to include two locations outside the continental United States.
Assessing the functions performed by civilians and contractors to identify federal civilians and service contractors performing similar functions: DOD assessed the performance of functions at these selected locations to identify federal civilians and service contractors performing similar functions as there is no direct mapping or perfect match between existing taxonomies used to quantify federal civilian positions and contracted services. Further, DOD reported that the day-to-day functions performed by federal civilian employees do not always directly correlate to the designated occupational series or the job title for their position. For example, an individual with an occupational series assigned as an accountant may actually perform work more consistent with that of a financial analyst. According to DOD's Cost-Comparison Report, DOD did not rely on occupational series names or job titles alone to determine the actual work being performed by federal civilians. Specifically, DOD conducted site visits with each organization and relied on local managers' direct knowledge of the actual tasks that their federal civilians and service contractor personnel performed. According to DOD's Cost-Comparison Report, DOD determined that personnel need to perform at least 80 percent common tasks to be able to make a comparison. For the organizations selected, DOD compared the costs of all federal civilians and service contractors identified as performing similar functions.
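DOD's report does not spell out the mechanics of that 80 percent screen, so the following Python sketch shows one plausible way such a threshold could be applied; the helper functions and the task lists are hypothetical illustrations, not DOD's actual method or data.

# Hypothetical sketch of a comparability screen like the 80 percent
# common-task threshold described above (not DOD's actual method).
def share_of_common_tasks(civilian_tasks, contractor_tasks):
    """Fraction of the civilian position's tasks also performed under the contract."""
    civilian_tasks, contractor_tasks = set(civilian_tasks), set(contractor_tasks)
    if not civilian_tasks:
        return 0.0
    return len(civilian_tasks & contractor_tasks) / len(civilian_tasks)

def comparable(civilian_tasks, contractor_tasks, threshold=0.80):
    """Treat the two workforces as performing a similar function only if at
    least `threshold` of the tasks are common to both."""
    return share_of_common_tasks(civilian_tasks, contractor_tasks) >= threshold

# Illustrative task lists (assumed for this example).
civilian = ["patient scheduling", "records coding", "billing review", "data entry", "reporting"]
contractor = ["patient scheduling", "records coding", "billing review", "data entry", "supply ordering"]
print(comparable(civilian, contractor))  # True: 4 of 5 tasks (80 percent) are common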
The challenges DOD identified in DOD's Cost-Comparison Report on determining the functions performed by contractor personnel are similar to those we encountered in our prior work on DOD's efforts to compile and review an inventory of contracted services. Section 2330a of Title 10 of the U.S. Code directs the Secretary of Defense to annually prepare an inventory of activities performed during the preceding fiscal year pursuant to staff augmentation contracts. Section 2330a also directs the secretary of each military department and head of each defense agency responsible for activities in the inventory to, within 90 days after the Secretary of Defense submits the inventory, review the contracts and activities in the inventory for which that secretary or agency head is responsible, in part to identify activities that should be considered for conversion. Our prior work has identified, among other issues, that the absence of a complete and accurate inventory of contracted services hinders DOD's management of these services. According to DOD officials, the Office of the Under Secretary of Defense (Personnel and Readiness) has also recognized the challenges associated with the various taxonomies and lexicons used in articulating the size and composition of the federal civilian, military, and contracted services workforces, and has efforts underway with the goal of better aligning those taxonomies to enable more holistic total force management of all sources of labor. According to DOD officials, by improving available workforce data, DOD can support better-informed leadership decisions, improve the accuracy of analyses, and provide consistent explanations of the department's workforce resources. DOD officials told us that this effort has an estimated completion date of December 2018. DOD Accounted for Labor Costs but Excluded Some Costs That Encompass Full Costs of Personnel We believe that DOD partially addressed the reporting element to account for the full cost of civilian and contractor personnel by providing an accounting of the labor costs of selected federal civilian and service contractor full-time equivalents for personnel performing similar functions at government-owned facilities during calendar year 2015, but excluding certain non-labor costs from its cost calculations. According to DOD officials, 2015 was the last year for which complete data were available. DOD Accounted for Federal Civilian and Contractor Labor Costs DOD developed a methodology for identifying labor costs associated with federal civilian and service contractor full-time equivalents during calendar year 2015 at government-owned facilities for its cost comparisons. Based on reviews of applicable guidance and consultations with the Office of the Under Secretary of Defense – Comptroller, DOD included numerous federal civilian costs collected from several sources in DOD's Cost-Comparison Report, as shown in table 2. In addition, to assure data quality, DOD officials told us that they took steps to identify errors in the data collected, including missing data fields and data entries that might indicate data errors. For example, DOD officials told us that they verified that they had pay records for every pay period in calendar year 2015 by identifying potential errors and outliers and sharing these with the Defense Finance and Accounting Service and the selected DOD organizations for review.
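As a rough illustration of the kind of completeness check described above, the Python sketch below flags employees with missing 2015 pay periods; the data layout and the assumption of 26 biweekly pay periods are ours and are not drawn from Defense Finance and Accounting Service records.

# Illustrative completeness check: flag records missing pay periods in 2015.
# The 26 biweekly periods and the sample data are assumptions for this sketch.
EXPECTED_PERIODS = set(range(1, 27))

def missing_pay_periods(pay_records):
    """pay_records maps an employee ID to the 2015 pay-period numbers found for
    that employee; returns any periods that are absent."""
    return {emp: sorted(EXPECTED_PERIODS - set(periods))
            for emp, periods in pay_records.items()
            if EXPECTED_PERIODS - set(periods)}

sample = {"E001": list(range(1, 27)),   # complete year
          "E002": list(range(1, 25))}   # two periods missing
print(missing_pay_periods(sample))      # {'E002': [25, 26]}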
Officials also stated that DOD sent its complete calculated data sets to each organization for review against their own pay records and that all errors were corrected or outliers were explained. Additionally, according to our analysis, DOD excluded overtime from its costs related to federal civilians in accordance with Office of Management and Budget Circular A-11. However, DOD included overtime pay in its report separately for context and noted that overtime pay is a significant part of civilian compensation for some organizations. Officials noted that organizations funded via working capital fund arrangements, such as depots, use overtime to handle surges in demand throughout the year. DOD noted in its report that it selected service contracts at government facilities and developed three methodologies to identify labor costs associated with service contractor full-time equivalents during calendar year 2015, as shown in table 3. DOD stated in its report that identifying service contractor full-time equivalents is a significant challenge because the level of detail available in each contract varied such that DOD could not employ a single methodology, and unlike federal civilian pay data, there is no centralized database on service contractor pay. DOD reported that contracts are rarely written to address the cost per contractor full-time equivalent, and some contracts do not differentiate between labor and non-labor costs. DOD noted in its report that the negotiated price of a contract includes direct costs, including labor and non-labor costs, and indirect costs such as overhead. Further, the contract costs include service contractor profit. Based on our review of DOD's Cost-Comparison Report, DOD used non-excludable contract costs as a basis in two of its methodologies. These costs to DOD are associated with labor, and include pay and benefits provided to service contractor personnel, contractor profit, and overhead the contractor included in the cost of the contract. When the number of service contractor full-time equivalents and the full costs for a contract were known, DOD used the first method, dividing contract costs by the number of service contractor full-time equivalents to arrive at the cost per service contractor full-time equivalent. When the number of billable hours was known, DOD used the second method, multiplying the ratio of contract costs divided by billable hours by a standard number of annual billable hours. For contracts in which the labor rate was known but costs could not be disaggregated, DOD multiplied the labor rate by a standard 1,880 annual billable hours, unless a contract specified a different number of annual billable hours. For example, Defense Logistics Agency contractor-labor rates for wage grade equivalent contractor full-time equivalents are based on 2,080 annual labor hours. DOD Excluded Certain Costs that Comprise Full Costs We assessed DOD's report as partially addressing the reporting element to account for the full cost of federal civilian and contractor personnel because DOD excluded certain non-labor costs—(1) direct non-labor costs for government-owned facilities and government-provided supplies, (2) indirect costs for general and administrative expenses and overhead for civilians, and (3) costs to manage contracts—from its cost calculations.
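The three methods just described reduce to simple arithmetic. The Python sketch below restates them; every input figure is assumed for illustration and none is drawn from DOD's contract data or the tables in its report.

# Sketch of the three cost-per-contractor-FTE methods described above.
# All input figures are assumed for illustration.
STANDARD_ANNUAL_HOURS = 1880  # standard annual billable hours cited in the report

def cost_per_fte_method1(nonexcludable_contract_cost, known_ftes):
    """Method 1: contract cost divided by a known number of contractor FTEs."""
    return nonexcludable_contract_cost / known_ftes

def cost_per_fte_method2(nonexcludable_contract_cost, billable_hours,
                         annual_hours=STANDARD_ANNUAL_HOURS):
    """Method 2: (contract cost / billable hours) scaled to a standard year."""
    return (nonexcludable_contract_cost / billable_hours) * annual_hours

def cost_per_fte_method3(labor_rate_per_hour, annual_hours=STANDARD_ANNUAL_HOURS):
    """Method 3: labor rate times standard annual billable hours (a contract may
    specify different annual hours, e.g., 2,080 for wage grade equivalents)."""
    return labor_rate_per_hour * annual_hours

print(cost_per_fte_method1(1_000_000, 8))       # 125000.0
print(cost_per_fte_method2(1_000_000, 15_040))  # about 125,000 (15,040 hours is 8 FTEs at 1,880 hours)
print(cost_per_fte_method3(66.49))              # about 125,001, assuming a $66.49 hourly labor rate

In each case the result is a cost per service contractor full-time equivalent that can then be set against the calculated federal civilian cost for the same function.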
Senate Report 114-49 stated that DOD is to include an accounting of the full cost of DOD federal civilian and service contractor personnel performing similar functions, including facility overhead. DOD stated in its report that the methodology utilized to compare the costs of federal civilian and service contractor full-time equivalents was consistent with DOD Instruction (DODI) 7041.04, Estimating and Comparing the Full Costs of Civilian and Active Duty Military Manpower and Contract Support (July 3, 2013), hereafter referred to as DODI 7041.04. However, DODI 7041.04 states that the full cost of personnel should include direct and indirect non-labor costs, such as those referenced previously. DOD officials stated that they considered including non-labor costs in their calculations but did not because they believe these costs would add approximately the same amount to both federal civilian and service contractor costs. DODI 7041.04 instructs that if a function is performed on government property, the costs of goods, services, and benefits that are common costs may be excluded provided the number of government and contractor personnel is equivalent. DODI 7041.04 further instructs that when the number of government and contractor personnel differs, adjustments must be made to the cost estimates to account for the difference in the number of government and contractor personnel. While there were some instances in which DOD's cost estimates involved an equal number of civilian and contractor personnel performing functions on government property, there were many instances where the personnel numbers differed and common costs should not have been excluded. For example, in DOD's comparisons of federal civilians and service contractors at Fort Belvoir Community Hospital, DOD conducted comparisons of 19 functions where 2 functions had equal numbers of federal civilian and service contractors and 17 functions had differing numbers of federal civilian and service contractors. In one comparison, the number of contractors was over three times the number of civilians. DOD officials also stated that they believe their methodology is in accordance with DODI 7041.04 because DODI 7041.04 states that the cost elements in the instruction can be modified or augmented in each specific case as necessary, but DOD components should be prepared to support such decisions with sufficient justification. We acknowledge that DODI 7041.04 states that the cost elements can be modified, but by excluding non-labor costs in its cost comparisons, DOD did not account for the full cost of federal civilians and service contractors as requested in the mandate. DOD Compared Its Calculated Costs of Performance of Selected Functions by Federal Civilians and Service Contractor Personnel at Selected Installations We believe that DOD addressed the reporting element to compare costs by comparing its calculated costs of selected federal civilians and service contractors performing similar functions at selected installations. DOD reported that its results represent selected personnel performing functions within selected organizations and are not generalizable across the department. DOD concluded that for the federal civilian and contractor full-time equivalents included in the study, the costs varied by organization, location, and function being performed. DOD presented comparisons of federal civilian and service contractor full-time equivalent costs and expressed these results as a cost ratio.
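To illustrate, with entirely hypothetical figures, how such a ratio is formed and why the excluded common costs are unequal in the aggregate when headcounts differ, consider the sketch below; none of these numbers comes from DOD's report or the Fort Belvoir comparisons.

# Hypothetical figures only (not from DOD's report or the Fort Belvoir data).
def cost_ratio(civilian_cost_per_fte, contractor_cost_per_fte):
    """A ratio above 1.0 means the civilian FTE is costlier; below 1.0, cheaper."""
    return civilian_cost_per_fte / contractor_cost_per_fte

print(round(cost_ratio(100_000, 110_000), 2))   # 0.91, using assumed labor costs

# When headcounts differ, the common non-labor costs left out of such a
# comparison are not equal in the aggregate: with an assumed common cost of
# $15,000 per person, a function staffed by 10 civilians and 30 contractors
# omits $150,000 on the civilian side but $450,000 on the contractor side.
common_cost_per_person = 15_000
print(10 * common_cost_per_person, 30 * common_cost_per_person)   # 150000 450000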
However, it is not clear how the results would have differed had all costs that encompass the full cost of personnel been included in DOD's comparisons. See tables 4 and 5 below for examples of greater costs for the performance of functions by federal civilians or service contractors at Fort Belvoir Community Hospital in Fort Belvoir, Virginia. DOD Assessed Flexible Hiring Authorities Available for Employment and Retention of DOD Civilian Employees We believe that DOD addressed the reporting element by assessing the flexible employment authorities for the employment and retention of federal civilian employees at the same 17 organizations used for the cost comparison. Specifically, DOD sent questionnaires to DOD hiring officials and human resource professionals to collect information on flexible employment authorities. DOD included a broad spectrum of organizational missions in its query of management and human resource officials regarding the use and availability of flexible hiring authorities. Noting that this assessment is more subjective than the others in DOD's Cost-Comparison Report, DOD queried senior leaders, middle managers, front-line supervisors, and human resource professionals regarding which authorities are being used and the effectiveness of each. According to DOD's report, in this way DOD was able to gauge the extent to which each type of authority was used, as well as the satisfaction with and effectiveness of each. DOD's Cost-Comparison Report made several conclusions regarding flexible hiring authorities and made one recommendation. The findings included that there was variance in the authorities used between organizations, management unfamiliarity with all available authorities, and a belief among managers that expanded use of some authorities is needed to produce more quality hires. DOD's Cost-Comparison Report recommended that DOD and the Office of Personnel Management (OPM) explore opportunities to refine, consolidate, or reduce unused, inefficient, or cumbersome hiring authorities. Agency Comments and Our Evaluation We provided a draft of this report to DOD for review and comment. In written comments, DOD non-concurred with our assessment that DOD partially addressed the mandated reporting element to provide an accounting of the fully-burdened cost of federal civilian and service contractor personnel performing functions at the selected installations to include training, benefits, reimbursable costs, and facility overhead. DOD's comments are reproduced in their entirety in appendix I. DOD also provided technical comments, which we incorporated as appropriate. DOD stated that we presented the three reporting elements identified in the congressional mandate absent the full context and congressional intent. Specifically, DOD stated that the list of elements to be included in the report is not a stand-alone list in the congressional mandate, whereas we presented the elements as a stand-alone list. DOD further stated that the list of elements in the mandate is preceded by a paragraph that we did not reproduce in our report, but which provides context and congressional intent for the reporting elements.
We do not believe that the language omitted from our report changes the meaning of the reporting elements to be included in DOD’s cost comparison report because the paragraph omitted clearly states that the purpose of the report is to provide the results of a study that includes a comparison of the fully-burdened cost of the performance of functions by DOD civilian personnel with the fully-burdened cost of the performance by DOD contractors. The paragraph preceding the reporting elements and the elements reads as follows: The committee directs the Secretary of Defense to submit to the Committees on Armed Services of the Senate and the House of Representatives, and to the Comptroller General of the United States, a report setting forth the results of a study, conducted by the Secretary for the purposes of the report, of a comparison of the fully-burdened cost of performance of functions by Department of Defense (DOD) civilian personnel with the fully-burdened cost of the performance of functions by DOD contractors by no later the February 1, 2016. The study shall include: (1) An assessment of performance of such functions at six DOD installations selected by the Secretary for purposes of the study from among DOD installations at which functions are performed by an appropriate mix of civilian personnel and contractors, with four such installations to be located in the continental United States and two such installations to be located outside the continental United States; (2) An accounting of the fully-burdened cost of DOD civilian personnel and contractors performing functions for DOD (including costs associated with training, benefits, reimbursable costs under chapter 43 of title 41, United States Code, and facility overhead) in order to permit a direct comparison between the cost of performance of functions by DOD civilian personnel and the cost of the performance of functions by contractors; (3) A comparison of the cost of performance of the full range of functions, required expertise, and managerial qualities required to adequately perform the function to be compared, including: a. Secretarial, clerical, or administrative duties, including data entry; b. Mid-level managers and other personnel possessing special expertise or professional qualifications; c. Managers and other leadership; and d. Personnel responsible for producing congressionally-directed reports. The committee recommends that, in conducting the study, the Secretary should take into account the policy that inherently governmental functions vital to the national security of the United States may not be performed by contractor personnel. The report required shall include an assessment of the flexible employment authorities available to the Secretary for the employment and retention of civilian employees of the DOD, including an identification of such additional flexible employment authorities as the Secretary considers appropriate to shape the civilian personnel workforce of the DOD. Not later than 120 days after receipt of such report, the Comptroller General shall submit to Congress a report that includes an assessment of the adequacy and sufficiency of the report submitted by the Secretary, including any recommendations for policy or statutory change as the Comptroller deems appropriate. As we reported, DOD noted in its cost comparison report that it identified labor costs used in its comparisons. 
However, DOD did not include direct and indirect non-labor costs and DODI 7041.04 states that the full cost of personnel should include these non-labor costs as we discussed earlier in the report. Therefore, DOD only partially addressed the reporting provision. In addition, DOD stated that we omit relevant language related to congressional intent for the second reporting element (i.e., an accounting of the fully-burdened cost of DOD civilian personnel and contractors). DOD stated that the text, “. . . in order to permit a direct comparison between the cost of performance of functions by DOD civilian personnel and the cost of the performance of the function by contractors,” conveys the congressional intent that the study is for comparison and our exclusion of the text in our restatement of the element omitted language indicating relevant Congressional intent. We do not believe that the language omitted in our report changed the meaning of the reporting element, which is that DOD was to include an accounting of the fully- burdened costs of federal civilians and service contractors in its cost comparisons. DOD further stated that we did not assess the second reporting element (i.e., an accounting of the fully-burdened cost of DOD civilian personnel and contractors) as it is directly stated but rather that we assessed the element by redefining it and then asserting that DOD partially addressed it. DOD noted that the direct language of the second reporting element is for DOD to include an “accounting” of the fully burdened cost of DOD civilian personnel and contractors. DOD asserted that we misinterpreted the meaning of “accounting” when we determined that DOD partially addressed the mandate because it did not “calculate” certain non-labor costs. We disagree. As we discuss in our report, DOD did account for the labor costs associated with federal civilian and service contractors by gathering labor cost data from several sources, but it did not include non- labor costs in its cost calculations. In order to account for the fully burdened costs of federal civilians and service contractors, as directed to do so by the preamble to the reporting elements, as well as the second reporting element, DOD should have included all labor and non-labor costs in the cost calculations. DOD also stated that our assessment incorrectly implies that to “account” for costs is equivalent to “calculating” costs as evidenced by the following quote from our draft report, "We acknowledge that DODI 7041.04 states that the cost elements can be modified, but by excluding non-labor costs in its cost comparisons, DOD did not account for the full cost of federal civilians and service contractors as requested in the mandate." DOD stated that although DOD did not “calculate” some non-labor costs, they did “account” for them in accordance with DODI 7041.04 and as directed in the congressional mandate. DOD asserted that in multiple places, DODI 7041.04 states that common costs "are excluded" and "may be excluded" from cost comparisons. DOD provided facility costs as an example of non-labor costs accounted for but not calculated in its cost comparisons. DOD stated that in its report, all of the civilian positions and contractor functions are performed at government-owned facilities. Thus, facility costs are common costs and may be excluded. 
DOD stated that its report accounted for facility costs by recognizing that such costs exist and are common costs; thus, DOD maintained, it properly excluded such costs in accordance with DODI 7041.04, and its report satisfied the Congressional mandate. We disagree. As mentioned above, the preamble to the mandated reporting elements and the second reporting element specifically directed that DOD account for the fully-burdened cost of DOD civilian and contractor personnel. Because there are multiple costs associated with civilian and contractor personnel, calculations are necessary in order to account for the full cost of these workforces. DODI 7041.04 instructs that if a function is performed on government property, the costs of goods, services, and benefits that are common costs may be excluded provided the number of government and contractor personnel is equivalent. While there were some instances in which DOD's cost estimates involved an equal number of civilian and contractor personnel performing functions on government property, there were many instances where the personnel numbers differed and common costs should not have been excluded. For example, in DOD's comparisons of federal civilians and service contractors at Fort Belvoir Community Hospital, DOD conducted comparisons of 19 functions where 2 functions had equal numbers of federal civilian and service contractors and 17 functions had differing numbers of federal civilian and service contractors. In one comparison, the number of contractors was over three times the number of civilians. DODI 7041.04 further instructs that when the number of government and contractor personnel differs, adjustments must be made to the cost estimates to account for the difference in the number of government and contractor personnel. DOD did not make these adjustments in its calculations, and as a result non-labor costs should not have been excluded; therefore, DOD did not account for the fully-burdened costs, as directed by Congress. We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Secretary of Defense, the Director of the Office of Cost Assessment and Program Evaluation, and other interested parties. This report will also be available at no charge on our Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact Brenda S. Farrell at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Defense Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Vincent Balloon, Assistant Director; Timothy Carr, Felicia Lopez, Clarice Ransom, Michael Silver, and Norris "Traye" Smith made key contributions to this report.
Related GAO Products Department of Defense: Actions Needed to Address Five Key Mission Challenges, GAO-17-369 (Washington, D.C.: June 13, 2017) DOD Civilian and Contractor Workforces: Additional Cost Savings Data and Efficiencies Plan Are Needed, GAO-17-128 (Washington, D.C.: October 12, 2016) Federal Hiring: OPM Needs to Improve Management and Oversight of Hiring Authorities, GAO-16-521 (Washington, D.C.: August 2, 2016) DOD Service Acquisition: Improved Use of Available Data Needed to Better Manage and Forecast Service Contract Requirements, GAO-16-119 (Washington, D.C.: February 18, 2016) Civilian and Contractor Workforces: Complete Information Needed to Assess DOD’s Progress for Reductions and Associated Savings, GAO-16-172 (Washington, D.C.: December 23, 2015) DOD Inventory of Contracted Services: Actions Needed to Help Ensure Inventory Data Are Complete and Accurate, GAO-16-46 (Washington, D.C.: November 18, 2015) Sequestration: Comprehensive and Updated Cost Savings Would Better Inform DOD Decision Makers if Future Civilian Furloughs Occur, GAO-14-529 (Washington, D.C.: June 17, 2014) Human Capital: Opportunities Exist to Further Improve DOD’s Methodology for Estimating the Costs of Its Workforces, GAO-13-792 (Washington, D.C.: September 25, 2013) Human Capital: Additional Steps Needed to Help Determine the Right Size and Composition of DOD’s Total Workforce, GAO-13-470 (Washington, D.C.: May 29, 2013) Defense Outsourcing: Better Data Needed to Support Overhead Rates for A-76 Studies, GAO/NSIAD-98-62 (Washington, D.C.: Feb. 27, 1998)
Why GAO Did This Study In addition to more than 2.2 million active duty and reserve personnel, DOD employs about 760,000 federal civilians and more than 560,000 contractors. Senate Report 114-49, accompanying a bill for the National Defense Authorization Act for Fiscal Year 2016, included a provision for DOD to issue a report (1) assessing functions performed by federal civilian and service contractor personnel, (2) accounting for the full costs of federal civilian and service contractor personnel performing these functions, (3) comparing these costs, and (4) assessing available hiring and retention authorities for federal civilians. The Senate report also included a provision for GAO to assess DOD's report, which DOD submitted to Congress in April 2017. This report examines the extent to which DOD's report addressed the prescribed congressional elements. GAO reviewed DOD's report and compared it to the prescribed elements, examined documents relevant to DOD's cost estimating and comparison methodology, and interviewed DOD officials, including those in its Office of Cost Assessment and Program Evaluation responsible for the calculations in DOD's report.

What GAO Found In response to congressional direction, the Department of Defense (DOD) issued a report in April 2017 comparing the costs of federal civilian and service contractor personnel at select installations. The report addressed three of the four provision elements and partially addressed one, as discussed below. DOD concluded that neither federal civilians nor service contractors were predominately more or less expensive, with costs being dependent on position, location, and level of seniority. DOD noted that it used a non-probability based sample of personnel for its report, and the results are not generalizable.

An assessment of the functions being performed by federal civilian and service contractor personnel at six military installations, with four in the continental United States and two outside the continental United States. GAO believes that DOD addressed this requirement because it developed a methodology to assess the performance of functions performed by federal civilians and service contractors at 17 organizations within nine geographic regions, including two locations outside the continental United States. DOD used data from the Defense Civilian Personnel Data System to identify military installations with large reported numbers of federal civilians. DOD determined that personnel need to perform at least 80 percent common tasks for a comparison to be made.

An accounting of the fully-burdened, or full, cost of federal civilian and service contractor personnel performing functions at the selected installations, including training, benefits, reimbursable costs, and facility overhead. GAO believes that DOD partially addressed this requirement because, while it calculated the labor costs of selected federal civilian and service contractor full-time equivalents performing similar functions for organizations at government-owned facilities, it excluded certain non-labor costs from its calculations.

A comparison of the costs of performance of these functions by federal civilians and service contractor personnel at the selected installations. GAO believes that DOD addressed this requirement because it compared calculated costs for selected federal civilians and service contractors performing similar functions at selected installations and included those comparisons in its report.
An assessment of the flexible employment authorities for the employment and retention of federal civilian employees. GAO believes that DOD addressed this requirement because it sent questionnaires to DOD hiring officials and human resource professionals to collect information on flexible employment authorities and conducted interviews with these hiring officials and human resource professionals at the same 17 organizations used for the cost comparison. Based on an analysis of the information collected, DOD's report included several conclusions regarding flexible hiring authorities and made one recommendation.

What GAO Recommends GAO is not making any recommendations; however, DOD non-concurred with GAO's assessment that DOD partially addressed the element to account for the full cost of personnel. GAO believes the assessment is correct, as discussed in the report.
Background Child Abuse Prevention and Treatment Act CAPTA, originally enacted in 1974, provides formula grants to states to improve child protective service systems. ACF administers the CAPTA state grant program and provides guidance and oversight to states. In fiscal year 2017, Congress provided about $25 million for the program. As part of the CAPTA state grant program, states are required to submit to the Secretary of HHS plans outlining how they intend to use CAPTA funds to improve their child protective service systems, among other things. State plans remain in effect for the duration of states’ participation in the grant program; if modifications are needed, these must be submitted. In addition to state plans, states are required to submit to HHS an annual data report providing information on agency decisions made in response to referrals of child abuse and neglect, as well as preventive services provided to families, among other things. CAPTA requires state governors to provide a series of assurances in their state plans. Since 2003, governors have had to provide an assurance that states have in effect and are enforcing a state law or program that includes policies and procedures to address the needs of infants affected by prenatal substance abuse or displaying withdrawal symptoms at birth. Under states’ policies and procedures, health care providers are required to notify CPS of such infants. Governors must also assure that a plan of safe care is developed for these infants. Although CAPTA does not define “plans of safe care,” for the purposes of this report we define them as plans to ensure the safety and well-being of infants who are born substance-affected. The Comprehensive Addiction and Recovery Act of 2016 (CARA) amended certain provisions of CAPTA that relate to substance-affected infants (see table 1). In addition to provisions related to substance-affected infants, CAPTA also requires governors to provide an assurance to the Secretary of HHS that they have provisions or procedures for certain individuals to report known and suspected instances of child abuse and neglect, which are generally referred to as mandated reporter laws. All states have statutes identifying persons who are required to report suspected child maltreatment to an appropriate agency, such as child protective services, a law enforcement agency, or a state’s toll-free child abuse reporting hotline, according to a 2016 HHS report. Mandatory reporters often include social workers; teachers, principals, and other school personnel; physicians, nurses, and other health care workers; and counselors, therapists, and other mental health professionals. The circumstances under which a mandatory reporter must make a report vary from state to state, according to HHS. Typically, a report must be made when the reporter, in his or her official capacity, suspects or has reason to believe that a child has been abused or neglected. State laws require mandatory reporters to report the facts and circumstances that led them to suspect that a child has been abused or neglected; they do not have the burden of providing proof that abuse or neglect has occurred. CPS Notification and Screening Process CPS, a division within state and local social services, is generally the agency that conducts an initial assessment or investigation of reports of child abuse and neglect. It also offers services to families and children where maltreatment has occurred or is likely to occur. 
Typically, when CPS agencies receive a notification about suspected child abuse, including a substance-affected infant, social workers review the referral to determine whether it should be accepted for investigation. During an investigation, social workers determine, among other things, the nature, extent, and cause of abuse or neglect, and identify the person responsible for the maltreatment. An investigation may include the following: a visit to the hospital and/or infant's home; observation of the infant; risk and safety assessments; evaluation of the home environment; background checks, including criminal record checks of adults who reside with the family; and mental health evaluations. If social workers determine that there is enough evidence to suggest that an infant is at risk for harm or neglect, or that abuse or neglect occurred, the case is substantiated. Once a case is substantiated, CPS develops a case plan with the family outlining objectives and tasks for the family. Among other things, CPS may refer the family to services in the community, such as early intervention services, parenting classes, and substance abuse treatment. Generally, CPS attempts to strengthen the family and alleviate the problems that led to maltreatment. If the case is not substantiated, but there is genuine concern about the child's situation and the family may benefit from services in the community, the case may be closed and/or the family may be referred for voluntary services (see figure 1).

Neonatal Abstinence Syndrome and Prenatal Drug Use Prenatal maternal opioid use has increased considerably in recent years. This increase has contributed to a significant rise in the rate of NAS. According to a recent study, the rate of NAS increased from 1.2 per 1,000 hospital births in 2000 to 5.8 per 1,000 hospital births in 2012, reaching a total of 21,732 infants diagnosed with NAS. NAS occurs with considerable variability. According to a recent HHS report, various studies indicate that anywhere from 55 to 94 percent of infants exposed to opioids in utero exhibit some degree of symptoms. Typically, infants with NAS develop symptoms within 72 hours of birth, but may develop symptoms within the first 2 weeks of life, including after hospital discharge. For the purpose of this report, infants exposed to opioids ingested by mothers in utero are considered substance-exposed, and those born negatively affected by exposure or experiencing withdrawal symptoms are considered substance-affected. According to experts, NAS is considered an expected and treatable result of women's prenatal opioid use. Opioid exposure during pregnancy may occur for the following reasons:

Women receiving pain medication with a prescription under the care of a physician. Medications can include fentanyl and oxycodone.

Women under the care of a physician and undergoing treatment for an opioid use disorder with medications, such as methadone or buprenorphine. This type of treatment is generally referred to as medication-assisted treatment (MAT).

Women misusing opioid pain medications with or without a prescription (such as using without a prescription, using a different dosage than prescribed, or continuing to use a drug when no longer needed for pain).

Women using or abusing illicit opioids, such as heroin.
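As a rough illustration of the notification and screening sequence summarized above (and depicted in figure 1), the sketch below encodes the decision points in simplified form. The inputs and outcomes are paraphrased from the narrative; the field names are illustrative placeholders and do not reflect any state's actual screening criteria or tools.

```python
# Minimal sketch of the CPS screening/investigation flow described above.
# The steps mirror the narrative (notification -> screening -> investigation ->
# substantiation or referral to voluntary services); the criteria names are
# illustrative placeholders, not any state's actual policy.

from dataclasses import dataclass

@dataclass
class Notification:
    accepted_for_investigation: bool   # screener's decision after reviewing the referral
    evidence_of_risk_or_abuse: bool    # investigation finding
    family_may_benefit_from_services: bool

def process_notification(n: Notification) -> str:
    if not n.accepted_for_investigation:
        return "screened out"
    if n.evidence_of_risk_or_abuse:
        # Substantiated: CPS develops a case plan and may refer the family to
        # community services such as parenting classes or substance abuse treatment.
        return "substantiated: case plan developed"
    if n.family_may_benefit_from_services:
        # Not substantiated, but genuine concern remains.
        return "closed: family referred to voluntary services"
    return "closed"

print(process_notification(Notification(True, True, False)))
print(process_notification(Notification(True, False, True)))
```

A real screening decision, as the report notes, also weighs family history, safety and risk assessments, and other case-specific factors that this sketch does not capture.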
Most States Reported Having Policies About Notification and Investigation of Substance-Affected Infants State Policies Generally Require or Encourage Health Care Providers to Notify Child Protective Services of Substance- Affected Infants In response to our survey, 42 states reported that state policies and procedures require health care providers to notify CPS about substance- affected infants. Some states reported that they explicitly require health care providers to notify CPS of substance-affected infants. For example, Wisconsin reported that under its state law if tests indicate that infants have controlled substances or controlled substance analogs in their bodily fluids, the health care provider shall report the occurrence of that condition to CPS. Others reported that the requirement is met by their states’ mandated reporter law—whereby people in certain positions, including health care providers, are required to notify CPS about substance-affected infants, similar to the manner in which other mandatory reporters, like school teachers, day care personnel, and social workers are required to report other instances of child abuse and neglect. For example, Kentucky statute requires that “any person who knows or has reasonable cause to believe that a child is dependent, neglected, or abused shall immediately” make a report to the police or CPS. The statutory definition for an abused or neglected child in Kentucky includes situations where a child’s health or welfare is harmed or threatened with harm because of parental incapacity due to alcohol and other drug abuse. Of the 42 states that require health care providers to notify CPS of substance-affected infants, 21 reported that notification is required for infants affected by both illegal and legal use of opioids. For example, in Massachusetts health care providers are required to notify CPS orally and, in writing within 48 hours, about substance-affected infants physically dependent on drugs, even if the drugs were legally obtained and the mother is under the care of a prescribing medical professional. Sixteen of the 42 states reported that health care providers are required to notify CPS of infants affected only by the illegal use of opioids, and five of the 42 states reported that they did not know if health care providers were required to notify CPS of infants affected by the illegal and legal use of opioids. The other eight states reported that although they did not have policies and procedures that require health care providers to notify CPS about substance-affected infants, they have laws or policies that encourage notification. Specifically, in written responses to our survey: Two states reported that under their state mandated reporter laws health care providers are encouraged, but not required, to notify CPS about substance-affected infants. Four states reported that they are working to amend their states’ policies and procedures to require that health care providers refer substance-affected infants to CPS. Another state reported that it encourages the notification from health care providers, but has not sought legislation to require health care providers to report substance-affected infants to CPS because of concerns that any laws that criminalize prenatal substance use would further deter substance-using pregnant women from seeking prenatal care. The state’s law requires all hospital personnel who suspect abuse and neglect or observe conditions that are likely to result in abuse or neglect to notify CPS. 
One state reported that all persons, including health care providers, are required to report child abuse and neglect, but reporting depends on whether a hospital’s policy indicates substance abuse is child abuse or neglect. Further, the state CPS director reported collaboration with the health care community on reporting substance exposed infants to its child abuse hotline. Although one state reported in our survey that it does not require or encourage health care providers to notify CPS about substance-affected infants, in an interview, state officials explained that its policy requires that health care providers notify CPS if, through an assessment, they conclude that infants are at risk for abuse and neglect. Under the state’s law, health care providers in each county are required to assess the needs of mothers and substance-affected infants using a protocol established by county health departments, CPS agencies, and hospitals. State officials told us that under the state’s law, the birth of a substance- affected infant is not in and of itself a sufficient basis for reporting child abuse or neglect. In addition to having policies and procedures regarding the reporting of substance-affected infants, in written responses to our survey some states reported providing training and guidance to support the efforts of health care providers to notify CPS about these infants. Three states reported that they offer mandatory reporter training to inform health care providers that they are obligated to notify CPS about substance-affected infants. Another state reported that its Department of Human Services developed a guide for mandated reporters that discusses what needs to be reported and where to make reports. Also, one state reported that it sent a formal letter to its state hospital association about how to report substance-affected infants to CPS. This state also sent a memo to its CPS county directors instructing them to contact their local health care providers on the importance of reporting substance-affected infants to CPS and the process for doing so. In addition, during our Massachusetts site visit, officials shared with us a memo that was sent to mandated reporters, community partners, and other stakeholders that offered guidance on when to file a report about substance-exposed infants. Further, local CPS staff at one Massachusetts field office told us that upon request they provide mandated reporter training to health care providers. Despite these policies, procedures, and guidance, in written responses to our survey, a few states reported concerns about requiring health care providers to notify CPS about substance-affected infants and the definition of substance-affected. All of the hospitals that we visited have policies consistent with their state’s law that require that health care providers, primarily hospital social workers, to notify CPS about substance-affected infants. However, one state reported that some medical personnel have been reluctant to report some infants that are positive for illegal and legal substances due to fears of mothers being arrested. Another state reported that stakeholders are concerned that having to notify CPS about substance-affected infants will have a chilling effect on the willingness of pregnant women who use substances to be honest with providers and seek the help and support they need and deserve. 
According to one state, there is often an inherent resistance to contacting CPS in these cases as health care providers tend to view child welfare involvement as punitive rather than a potential resource for the family. In addition, three states reported in written responses to our survey challenges understanding how to define terms, such as substance- affected, under CAPTA. For example, the Pennsylvania CPS director expressed concerns during our site visit, suggesting that CAPTA raises many unanswered questions, such as (1) if “affected by substances” means at-risk of being or physically affected by substances, (2) what policies relating to substance-affected infants should look like and include, and (3) whether “affected by substances” should include women who are under the care of health care or treatment providers and taking their medications as prescribed. A Kentucky public health official told us that a drug test, or whether the infant is affected by legal or illegal substances, should not be the sole factor in determining CPS’ involvement with a family. Rather, a holistic view of the family, whether the substance prohibits the mother’s ability to care for her child, and any risk factors present that places the infant at risk should also be considered. According to officials, an infant that is exposed to substances, but has not been affected by the substance, can still be at risk for child abuse and neglect. States Reported Having Policies That Guide Decisions About Investigating Substance- Affected Infants and Their Families In response to our survey, 46 states reported that they have policies and procedures for deciding which notifications about substance-affected infants are accepted for investigation. Seventeen of those states reported that all notifications of substance-affected infants are accepted for investigation, regardless of the circumstances. The remaining 29 states reported that they apply specific criteria to determine if children who present as substance-affected are accepted for investigation by CPS. Several states reported in written responses to our survey that they base their criteria for accepting notifications on the infant’s safety. For these states, drug exposure does not by itself indicate that an infant’s safety is at risk. For example, one state explained that in determining a child’s safety risk, staff evaluate a number of factors including the history of the family; the family’s presentation at the birthing hospital (appearance of chaotic behavior, suspected intoxication of adults, lack of appropriate concern or bonding with the infant); the presentation of the infant’s physical condition; the results of any testing of parent or child (blood, urine, etc.); discrepancies identified in the parent’s representation of their substance use or substance use treatment; and any other concerns noted by the reporting source. Other states reported that their criteria for accepting notifications for investigation are based on the degree or type of drug exposure in question. For example, one state reported that its policy directs CPS agencies to accept notifications for investigation when a parent has used illegal substances or non-medical use of prescribed medication during the last trimester of pregnancy. Another state reported that it will accept notifications for investigation if the infant is born with a positive toxicology or is experiencing drug withdrawal, or if the mother tests positive for substances. 
A few states reported using both risk to the safety of infants as well as degree or type of drug as their criteria for accepting notifications. For example, one state reported that it considers factors, such as the type of drug, the parent’s ability to care for the child, addiction history, and the parent’s readiness and preparation to care for the infant. In follow-up correspondences with states that reported that they do not have policies and procedures to decide whether to accept for investigation notices about substance-affected infants, one state reported that decisions are made on a case-by-case basis. A few states reported that after receiving notifications about substance- affected infants, CPS agencies may decide to opt out of investigating some families, referred to as “screening out” families. For example, in Massachusetts, CPS can “screen out” referrals of mothers if the only substance affecting the infants was used by the mothers as prescribed by their physician. In these instances, when CPS in Massachusetts is notified by the hospital about an infant, the screener gathers information from the caller and consults with a supervisor to determine whether the referral should be accepted for investigation or screened out. If the mother is on methadone, for example, but is involved with services and is in a treatment plan, CPS verifies with medical or other qualified providers that the mother used the drug as part of substance abuse or medical treatment as authorized. Additionally, CPS confirms that there are no other concerns of child abuse and/or neglect. If CPS officials in Massachusetts are unable to collect all the information that they need to screen out families, for example when a mother does not sign a release allowing CPS officials to speak with her health care providers, notifications about substance-affected infants are accepted for investigation. Most States Reported Having Requirements to Develop Plans of Safe Care, but Officials We Interviewed Reported Challenges Meeting the Needs of All Families States Reported That CPS Agencies Develop Plans to Primarily Address Infants’ Immediate Safety and Medical Needs and Caregivers’ Substance Use In response to our survey, 49 states reported that their CPS agency has policies to develop a plan to ensure the safety and well-being of substance-affected infants who meet the state’s criteria for investigation. Two states reported that CPS staff are not required to develop such a plan, even if a notification is accepted for an investigation or an assessment. For purposes of this report, we are defining a plan of safe care as a plan to ensure the safety and well-being of the infant. States’ approaches to identifying children and families who will receive a plan of safe care generally fall into two categories: 38 states reported that CPS is required to develop a plan of safe care for all notifications of substance-affected infants that are accepted for investigation, including those that are not substantiated. 11 states reported that CPS staff are required to develop a plan of safe care only in those instances where an investigation substantiates the notification or uncovers an unmet need or present or emerging danger. For example, local Pennsylvania CPS officials told us that they only develop plans when there is a safety threat or other concern about the infant. 
Most states reported that after a notification of a substance-affected infant is accepted for investigation, CPS always conducts a needs assessment for the infant and caregivers. For example, one local CPS office that we visited told us that social workers assess risk to and safety of infants, their function (development, age appropriate behavior, etc.), and environment. In addition, workers assess the caregiver’s ability to parent and employment status, as well as housing. The assessments conducted as part of the investigation inform the development of plans of safe care, as well as decisions about the removal of infants from the home. Among the 49 states that reported that plans of safe care are developed for all or some substance-affected infants, 47 reported that these plans either always or sometimes address infants’ safety needs. Plans also address other needs, such as infants’ immediate medical and longer-term developmental needs, as well as caregiver’s substance use treatment needs. See figure 2 for the number of states whose plans of safe care address various issues facing the infant and parent. In written responses to our survey and during our site visits, officials reported that plans of safe care and referrals for services included in the plans are individualized based on the infant and family’s needs. For example, Massachusetts state CPS officials told us that plans of safe care are developed for each family based on the information that staff collect from the safety, risk, and family assessments, as well as information collected from individuals who may have knowledge that would inform the family assessments, such as medical and treatment providers, and family members. Kentucky state CPS officials told us that the local organizations and service providers that they collaborate with to develop the plan of safe care also vary based on the family’s needs. For example, Kentucky will only collaborate with substance use treatment providers to develop the plan of safe care when families have substance use disorders. Similarly, during our site visits, officials from two states told us that the decision to place an infant in foster care is based on the individualized needs of the infant and caregiver. For example, Massachusetts state officials told us that their decision to remove a baby from the home depends on a myriad of factors and is determined on a case-by-case basis. Officials explained that if a mother is discharged from the hospital and begins using drugs again and does not have adequate supports in place to care for her baby, CPS may decide to place the infant in foster care. However, if a mother has existing support systems in place to mitigate safety risks, CPS may decide to keep the baby in the home. In our survey, all 51 states reported that their agencies either always or sometimes refer parents or caregivers to substance use treatment programs, and most states reported that they always or sometimes refer parents or caregivers to parenting classes or programs (49), and other supportive services (49). CPS officials in each of the three states that we visited told us that their plans of safe care include referrals to address not only the immediate needs of the infants, but also the needs of the parent or caregiver. For example, officials from a local Kentucky CPS agency told us that staff refer mothers of substance-exposed infants to a program called Sobriety Treatment and Recovery Team (START). 
START is composed of a social worker and a peer support mentor who has at least 3 years of sobriety, has had previous involvement with CPS, and successfully regained or kept custody of her own children. According to officials, the START program has been able to provide participants with quick access to substance use disorder treatment. Officials from a Massachusetts local CPS agency told us that one of the services that they provide to parents of substance-affected infants is a parent aide who can help monitor how the parent is caring for the infant, such as administering the infant's medications appropriately and ensuring the parent is not abusing the infant's drugs. In addition, a parent aide can provide emotional support and help parents adjust after the infant is discharged from the hospital. Kentucky officials noted the effect that a healthy caregiver has on the outcome of the infant and emphasized that a baby cannot be healthy if the mother is not. Kentucky CPS officials said that they have found that the earlier caregivers enter treatment, the better the outcomes are for mothers and babies. According to Kentucky officials, parents who participate in the START program are less likely to have their child placed in foster care.

CPS Officials Reported Challenges Involving Caseloads, Developing Plans, and Confidentiality Restrictions Officials from the states that we visited told us that developing and monitoring plans of safe care under CAPTA's new requirements for infants affected by their mothers' legal use of prescribed medications, as well as plans for these infants' caregivers, present challenges. Specifically, officials reported concerns about increased caseloads (particularly if they are required to provide plans and services for infants at low risk of abuse or neglect), the content of plans, and confidentiality restrictions.

Increased Caseloads Thirty-one of 50 states reported in our survey that staffing or resource limitations were very or extremely challenging, and CPS officials across the 3 states we visited said that the opioid epidemic has directly contributed to increased caseloads. According to a local Kentucky CPS office, the number of babies that met criteria for being accepted for investigation increased about 55 percent from 2011 to 2016, while the number of staff has remained the same. Similarly, hospitals reported being affected by this challenge. For example, staff at four hospitals we visited told us that they have delayed discharging infants from the hospital because CPS social workers did not identify caregivers to whom infants could be released or did not make plans for infants in a timely manner. In addition, staff from three hospitals told us that some CPS workers are difficult to contact and not especially responsive to their questions. One hospital social worker told us that she is concerned that the changes to CAPTA that require notifying CPS of all substance-affected newborns will inundate the agencies with cases. Officials from two of the three states we visited anticipated that providing services to infants affected by the legal use of prescribed medications, but not likely to be at risk for child abuse and neglect, will result in an increase in the number of families referred to CPS. This, in turn, will require a plan of safe care and further strain limited resources.
Twenty-five states reported in our survey that the plan they develop for substance-affected infants is the same as for other children in CPS care, suggesting that states devote the same level of resources to these infants as to other cases. The states we visited interpret CAPTA to require that plans of safe care be developed for all substance-affected infants who are referred to CPS, including those who may not meet usual criteria to be accepted for an investigation. Some state officials we interviewed questioned whether the new CAPTA requirements would allow for the best use of limited resources. For example, one senior state CPS official questioned whether it would be a good use of resources to develop plans of safe care for mothers in substance use disorder treatment or mothers using opioid medications due to chronic pain. A local CPS official we interviewed stated that drug exposure, in and of itself, is not necessarily a safety risk, and CPS should not intervene with families who are not at risk for child abuse or neglect. Instead, hospitals or treatment providers should intervene and refer families who do not meet criteria for CPS involvement, but could benefit from additional supports, to voluntary services. Kentucky public health officials told us that the period after a woman gives birth is a critical time for families as mothers may be stressed, sleep-deprived, exhausted, and may have other children in the home. This period may be especially challenging for mothers with substance use disorders if adequate supports are not in place. According to officials, women are typically covered for substance use treatment during pregnancy; however, this coverage ends roughly 60 days after the baby is born. In written responses to our survey, some states reported that they would rely on other agencies to develop plans of safe care. Similarly, in order to manage limited CPS resources, officials from two of the three states that we visited said they are considering having hospitals or other agencies assume responsibility for developing plans of safe care when there is no evidence of abuse or neglect and there appears to be minimal risk to the safety and well-being of the infant. Kentucky officials told us that they envision that CPS will be responsible for developing a plan of safe care for notifications that are accepted for investigation, while hospitals, or another agency, will be responsible for developing plans of safe care for referrals that are screened out by CPS. According to CPS state officials, the plan of safe care for the infant and the family can be part of the discharge plan prior to the family leaving the hospital. However, officials reported that obtaining cooperation from other agencies may be difficult. Some state officials reported being concerned that other agencies may not feel obligated to develop these plans, in part, because CAPTA provides funding to child welfare, and other agencies may therefore believe that child welfare should be responsible for developing the plan of safe care.

Determining What to Include in the Plan of Safe Care CPS officials we interviewed in two of our site visit states, as well as one state we followed up with, told us that they were unsure whether their current plans will meet new CAPTA requirements because CAPTA does not define a plan of safe care.
For example, Massachusetts officials said that their plans include everything that a family might need to ensure the safety of the child, including resources to ensure stabilization and reunification of a family, but they are not sure whether the plans meet new CAPTA requirements, in part because they are not familiar with the term “plan of safe care.” An official in another state was also unsure about whether his state’s “safety plans” would meet CAPTA requirements. According to the official, safety plans may include a treatment plan for mothers, and referral services, such as early intervention for the child. In practice, plans of safe care generally address gaps that place an infant at risk for harm or neglect. However, state officials we interviewed reported being unsure about what a plan of safe care should look like for families where these gaps do not exist. Also, in a written response to our survey, one state expressed uncertainty about CPS’ role if required to work with infants who do not typically receive CPS services. For example, a Pennsylvania official said that it is unclear what types of interventions child welfare should conduct with families of infants exposed to legal substances, such as medications prescribed by doctors, when the caregivers are taking their medications correctly. Similarly, officials also questioned whether a plan would be necessary, and what the plan would entail, for caregivers who are already addressing their substance use disorder and taking steps to ensure their infant’s safety. Officials from a local Kentucky CPS office described a case in which a mother was participating in medication-assisted treatment, had attended counseling three times per week throughout her pregnancy, and was continuing treatment in the postpartum period. Through CPS’ investigation, the agency found that the case was not substantiated, in part, because there were no additional services that CPS could connect her with that she was not already receiving. Confidentiality Restrictions Officials across the three states we visited also said that state and federal drug and alcohol confidentiality restrictions may challenge their ability to monitor plans of safe care. To monitor plans of safe care, CPS staff may need access to confidential information in order to know how caregivers are progressing in treatment, particularly now that these plans must address the substance use disorder needs of the caregiver. However, federal law restricts the disclosure and use of alcohol and drug patient records maintained in connection with the performance of any federal- assisted alcohol and drug abuse program. Generally, confidential information may be disclosed in accordance with the prior written consent of the patient. State and local CPS staff we interviewed said that strict confidentiality requirements make it challenging for drug and alcohol treatment providers to share information about mothers and infants. A CPS state director from Pennsylvania said that treatment providers are often reluctant to provide CPS case workers with information or updates on a mother’s treatment, which prevents child welfare workers from fully understanding how mothers are progressing with their treatment and the extent to which those in treatment are adhering to prescribed directions as outlined by treatment providers. In addition, one official from a state we visited said state statutes regarding sharing of drug and alcohol treatment information may be more restrictive than the federal statute. 
Some states have developed ways to obtain confidential information about mothers in substance use disorder treatment. For example, officials from one local CPS office told us that in instances when they have to develop a long-term plan of safe care for families, they have mothers sign a release of information form in order to obtain updates about her treatment adherence from the medication- assisted treatment provider. Similarly, a local Massachusetts CPS office told us that typically staff obtain releases from mothers so that they can verify whether mothers are actively participating in their treatment and that there are no records of relapse. Although HHS Has Provided Technical Assistance and Guidance to Assist States’ Efforts to Implement CAPTA, States Want More Help HHS Provided Technical Assistance Through a Resource Center and ACF Issued Formal Guidance and Began Its Oversight Efforts In HHS’ role to assist states in the delivery of child welfare services, two agencies—ACF and the Substance Abuse and Mental Health Services Administration (SAMHSA)—provided technical assistance to states through the National Center on Substance Abuse and Child Welfare (NCSACW). In addition, in ACF’s role to administer and monitor states’ implementation of CAPTA, the agency has provided some guidance to states on the provisions pertaining to substance-affected infants and has begun its monitoring responsibilities. Technical Assistance ACF and SAMHSA, which leads public health efforts to reduce the impact of substance abuse and mental illness, established the NCSACW in 2002. The NCSACW provides technical assistance to states, and has issued publications and hosted forums to help states develop policies and procedures around issues affecting substance-affected infants. The technical assistance has focused on a broad range of issues, including collaboration among service providers, and plans of safe care. With respect to collaboration, NCSACW has issued several studies that identify opportunities for strengthening interagency efforts to prevent, intervene, identify, and treat prenatal substance exposure. The NCSACW collaboration guides encourage states to involve CPS agencies with medical providers in an interagency collaborative setting, thereby facilitating the process for CPS agencies to be notified of substance- affected infants. Regarding plans of safe care, NCSACW has provided technical assistance and best practices to states around development of these plans. For example, in one state it has facilitated discussion groups to help the state develop a model plan. From calendar year 2011 to 2016, NCSACW processed approximately 600 requests from state CPS agencies for short-term technical assistance related to improving care for substance-affected infants and their families. This short-term technical assistance included activities such as responding to telephone inquiries, mailing information, identifying needed resources, and making referrals. The NCSACW has also provided in- depth assistance to 16 states to strengthen collaboration and linkages across child welfare, addiction treatment, medical communities, early care and education systems, and family courts to improve outcomes for substance-affected infants and their families. 
Through this in-depth assistance, NCSACW identified areas for improvement in states, including a lack of clarity regarding compliance with CAPTA requirements (such as identification, notification, and developing plans of safe care) and the need for state models to comply with CAPTA requirements to develop plans of safe care. In one state, the project overview report indicated that a next step for the in-depth technical assistance is to continue development of the plan of safe care model and ensure practices and protocols are in place across systems to meet CAPTA requirements. The report indicated that this will include ongoing work with hospitals to ensure consistent identification of infants with prenatal exposure and notifications to CPS. Although 18 states reported in our survey that technical assistance from the NCSACW was very or extremely helpful, 11 reported that it was moderately helpful, 7 reported that it was slightly helpful, and 1 reported that it was not at all helpful. Eleven states reported that they were not familiar with this assistance.

Guidance Since July 2016, when the most recent amendments to CAPTA were enacted, ACF has issued one information memorandum and two program instructions to states about provisions relating to substance-affected infants. According to an ACF official, information memoranda share information with states, while program instructions provide interpretations of the law and inform states of actions they must take. ACF issued an August 2016 information memorandum informing states of the 2016 amendments to CAPTA. The August 2016 information memorandum also provided states with best practices, drawing on an NCSACW guide on collaboration for developing multi-systemic approaches to assist child welfare, medical, substance use disorder treatment, and other systems to support families affected by opioid use disorders. In January 2017, ACF issued a program instruction that provided guidance to states on implementing the 2016 amendments to CAPTA made by CARA and informed states of the flexibilities that they have under the law. In particular, the guidance noted the following:

"CAPTA does not define 'substance abuse' or 'withdrawal symptoms resulting from prenatal drug exposure.' We recognize that by deleting the term 'illegal' as applied to substance abuse affecting infants, the amendment potentially expands the population of infants and families subject to the provision [that states have policies and procedures in place to address their needs]. States have flexibility to define the phrase, 'infants born and identified as being affected by substance abuse or withdrawal symptoms resulting from prenatal drug exposure,' so long as the state's policies and procedures address the needs of infants born affected by both legal (e.g., prescribed drugs) and illegal substance abuse."

"While CAPTA does not specifically define a 'plan of safe care,' CARA amended the CAPTA state plan requirement . . . to require that a plan of safe care address the health and substance use disorder treatment needs of the infant and affected family or caregiver."

"CAPTA does not specify which agency or entity must develop the plan of safe care; therefore the state may determine which agency will develop the plans. We understand that in most instances the state already has identified the responsible agency in its procedures.
When the state reviews and modifies its policies and procedures to incorporate the new safe care plan requirements in CARA, the state may wish to revisit its procedures regarding which agency develops the plan of safe care, including any role for agencies collaborating with CPS in caring for the infant and family.” In addition, in April 2017, ACF issued a program instruction on reporting requirements, including changes in those requirements brought about by the 2016 amendments to CAPTA. Monitoring ACF conducted limited monitoring of states prior to the amendments passed in 2016. According to ACF officials, if presented with evidence of potential deficiencies, the agency would attempt to learn more about the state’s activities. In one instance, ACF reviewed South Carolina’s policies and found them to not be in compliance with the notification and safe care plan requirements of CAPTA. It directed the state to develop a program improvement plan to bring it into full compliance, which South Carolina submitted in April 2016. In a recent progress report (February–April 2017), South Carolina reported that it was focused on updating statutes, developing policies and procedures, training child protective service workers, and building relations with health care providers. In response to the 2016 amendments to CAPTA that added the requirement for HHS to monitor state policies and procedures to address the needs of substance-affected infants, ACF officials told us that staff in regional offices will review states’ annual reports, submitted in June 2017. In its program instruction describing the reporting requirements, ACF asked each state to submit a new Governor’s Assurance, as well as a narrative explaining what they have done in response to the amendments. Specifically, ACF asked states to provide information on any changes that were made in state laws, policies, or procedures related to identifying and referring infants affected by substance abuse to CPS as a result of prenatal drug exposure. It also requested updates on states’ policies and procedures regarding the development of plans of safe care; a description of how states have developed systems to monitor plans of safe care; and a description of any outreach or coordination efforts the states have taken to implement the amendments, among other things. According to ACF officials, as of October 1, 2017, some states have provided information and a Governor’s Assurance demonstrating compliance with the amended provisions and some states have been placed on Program Improvement Plans, but the agency does not yet have information on the status of all states. An ACF official explained that, in their annual reports, some states either acknowledged that they are trying to get legislation enacted to bring them into compliance with the law and it has failed, or that they are not in compliance, for example, because they were limiting their policies to those infants affected only by illegal substances. In addition, in May 2017, ACF issued a technical bulletin informing states of the new data collection requirements that resulted from the 2016 amendments to CAPTA. ACF stated that it intends to collect data required by the amendments to CAPTA through the National Child Abuse and Neglect Data System, beginning with states’ submission of fiscal year 2018 data. This system is maintained by ACF and contains data from states about children who have been abused or neglected. 
ACF issued a Federal Register notice about the proposed data elements and requested comments on the accuracy and quality of the proposed data collection, among other things; the comment period closed in July 2017. In the Federal Register notice, ACF notes that the 2016 amendments to CAPTA require it to collect information from state CPS agencies on the number of notifications from health care providers that are accepted for investigation or screened out. Further, of those infants screened in, ACF is required to collect data on the number of safe care plans developed for substance- affected infants as well as the number of infants for whom a referral was made for appropriate services, including services for the affected family or caregiver. In the Federal Register notice, ACF proposed to collect this information using a combination of existing and new data from states. Thirty-two states reported in our survey that they already collect data on the incidence of substance-affected and/or substance-exposed infants; 15 of those 32 states also collect data on the incidence of NAS. Further, 18 states reported that they collect data on the number of notifications health care providers make to CPS. Of those states, 8 reported that they collect specific data on notifications related to infants diagnosed with NAS. States Reported the Need for Additional Guidance and Assistance from HHS to Address Implementation Challenges Most states reported in our survey that additional guidance and assistance would be extremely or very helpful (see figure 3). For example, 38 states reported that additional guidance on requirements for health care providers to notify CPS of substance-affected infants would be extremely or very helpful. Similarly, 37 states reported that additional guidance on developing, implementing, and monitoring plans to ensure the safety and well-being of substance-affected infants would be extremely or very helpful. In written responses to our survey, states suggested ideas for additional guidance, training, and technical assistance to help them address the needs of substance-affected infants. States’ suggestions ranged from assisting in the development of substance abuse training curriculum for staff to video conferences with other states to share information about implementing CAPTA. A few states suggested that the guidance ACF has provided to date is not clear and reported grappling with the meaning of terms such as “affected” and “legal vs. illegal” substances, and two states requested “concrete guidance” and “specificity.” A few other states suggested that it would be helpful to obtain additional information about meeting the requirements of plans of safe care within the constraints of state and federal confidentiality laws, technical assistance on what plans of safe care look like, and a format for a plan of safe care. ACF officials told us that states have flexibility with implementing the law and the agency does not anticipate issuing additional written guidance on the amendments to CAPTA made by CARA. ACF officials explained, in October 2017, that they were finalizing their review of the plans that states were required to submit. These plans are expected to include details on how the states are addressing the CAPTA requirements. While ACF could not provide the number, officials reported that some of the state plans submitted to date did not meet the requirements and those states have been asked to develop program improvement plans. 
They expect states to work with the ACF regional offices, which will provide or facilitate technical assistance to states on their implementation of the provisions, as needed. In addition to the review of state plans, ACF officials explained that regional officials may learn about states’ needs for technical assistance through meetings or informational exchanges. Finally, the NCSACW is expected to review and prepare a summary of CAPTA state plans, current state statutes and policies and procedures relating to amended CAPTA requirements. In addition, according to ACF, NCSACW will continue to offer technical assistance on the development and implementation of plans of safe care to states. Technical assistance may include responding to requests for information, disseminating written materials and resources, and conducting webinars/conference calls. Further, ACF reported that some states will receive more in-depth technical assistance, albeit in some instances on a time-limited basis. Undertaking these actions can enhance states’ understanding of CAPTA requirements and better address known challenges such as the ones described in this report. However, more specific guidance from HHS on the issues which states have expressed confusion can assist them in better understanding CAPTA requirements and providing more effective protections and services for the children and families most in need. Conclusions The opioid epidemic has generated a significant increase in the number of substance-affected infants born and diagnosed with NAS. These vulnerable infants may be at risk for child abuse and neglect if adequate supports and services are not available to ensure their safety. CAPTA requires states to have policies and procedures to address the needs of these infants and their families, including mothers with a substance use disorder. However, states have experienced challenges implementing new CAPTA requirements. Many states reported in our survey that they are not completely adhering to the law. This is reflected in ACF’s review of state plans, some of which are resulting in program improvement plans. States cite challenges that stem, in part, from ACF’s lack of specificity in providing guidance on implementing CAPTA requirements. Specifically, states report that ACF has not provided clear guidance about which substance-affected infants health care providers are required to notify CPS about, as well what a plan of safe care is and for whom it should be developed. Given the challenges that states reported facing in implementing the provisions, a majority reported wanting more help from ACF, such as trainings and teleconferences with other states, to help overcome their challenges. Additional guidance and assistance from HHS would help states better understand what they need to do to develop policies and procedures that meet the needs of children and families affected by substance use. Recommendation for Executive Action The Secretary of HHS should direct ACF to provide additional guidance and technical assistance to states to address known challenges and enhance their understanding of CAPTA requirements, including the requirements for health care providers to notify CPS of substance- affected infants and the development of a plan of safe care for these infants. Agency Comments and Our Evaluation We provided a draft of this report to HHS for review and comment. HHS’s comments are reproduced in appendix I. HHS also provided technical comments, which we incorporated into our report where appropriate. 
HHS did not concur with our recommendation. HHS stated that: in January 2017, ACF clarified in guidance several of the issues raised in the report, including the population of infants and families covered by the provision and the state flexibility inherent in determining which infants are “affected by” substance abuse, and the terminology used in the federal law of what a “plan of safe care” is; ACF believes it is necessary to allow states the flexibility to meet the requirements in the context of their state CPS program; several of the challenges that the GAO notes are not specific to CAPTA compliance with the safe care plan and notification requirements; and it does see the value in continuing to provide technical assistance to states to address known challenges and to enhance their understanding of CAPTA requirements. With respect to HHS’ January 2017 guidance, state officials reported in our survey and during site visits that they found some terms unclear and were uncertain about what is required of them. In written responses to our survey, states reported challenges understanding how to define substance-affected under CAPTA. In addition, as we note in our report, the guidance about plans of safe care described the following: “While CAPTA does not specifically define a ‘plan of safe care,’ CARA amended the CAPTA state plan requirement . . . to require that a plan of safe care address the health and substance use disorder treatment needs of the infant and affected family or caregiver.” States reported in our survey and in follow-up discussions that this lack of specificity remained an ongoing challenge for them. For example, as we discuss in our report, one state that we followed up with in August 2017 was still unsure about whether its safety plans would meet CAPTA requirements for plans of safe care. In addition, as of October 2017, HHS confirmed that some state plans did not meet CAPTA requirements and that the states were asked to develop program improvement plans. Accordingly, a key ongoing challenge was not addressed by the January guidance. Regarding allowing states flexibility to meet CAPTA requirements, we acknowledge in our report that HHS said that states have flexibility. However, in our survey and site visits, states indicated that they would find it helpful for HHS to provide them with greater specificity around terms, including the degree of flexibility they are allowed. States added that this would include parameters within which they can develop policies and procedures that meet CAPTA requirements. We continue to believe that additional guidance addressing these concerns would benefit states and could be provided without imposing additional mandates. Concerning HHS’ third point that some of the issues raised in the report are not specific to CAPTA, the states we visited interpret CAPTA to require that plans of safe care be developed for all substance-affected infants who are referred to CPS. During our discussions with states and in responses to our survey, state officials did not delineate which federal requirement impacted their approach to serving children and families. As stated in our conclusion, vulnerable infants may be at risk for child abuse and neglect if adequate supports and services are not available to ensure their safety. Lastly, HHS indicated that it will continue to provide technical assistance to states and fund demonstration sites to establish or enhance collaboration across community agencies and courts. 
Although continuing to provide technical assistance to states should be beneficial, our findings demonstrate that additional guidance is also needed. For example, 38 states reported that additional guidance on requirements for health care providers to notify CPS of substance-affected infants would be extremely or very helpful. Similarly, 37 states reported that additional guidance on developing, implementing, and monitoring plans to ensure the safety and well-being of substance-affected infants would be extremely or very helpful. Overall, given the results of our review, we continue to believe our recommendation is warranted. Effective implementation of our recommendation should help states better implement protections for children. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees and the Secretary of Health and Human Services. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or larink@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Health and Human Services Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Kathryn A. Larin, (202) 512-7215 or larink@gao.gov. Staff Acknowledgments In addition to the contact above, Sara Schibanoff Kelly (Assistant Director), Ramona L. Burton (Analyst-in-Charge), Kay E. Brown, Hannah Dodd, Ada Nwadugbo, and Srinidhi Vijaykumar made key contributions to this report. Also contributing to this report were Sandra L. Baxter, James Bennett, Gina Hoover, Jessica Orr, Rhiannon Patterson, Jean McSween, and James Rebbe.
Why GAO Did This Study Under CAPTA, states perform a range of prevention activities, including addressing the needs of infants born with prenatal drug exposure. The number of children under the age of 1 entering foster care increased by about 15 percent from fiscal years 2012 through 2015. Child welfare professionals attribute the increase to the opioid epidemic. GAO was asked to examine the steps states are taking to implement CAPTA requirements on substance-affected infants and related amendments enacted in 2016. This report examines (1) the extent to which states have adopted policies and procedures to notify CPS of substance-affected infants; (2) state efforts to develop plans of safe care, and associated challenges; and (3) steps HHS has taken to help states implement the provisions. To obtain this information, GAO surveyed state CPS directors in all 50 states and the District of Columbia and reached a 100 percent response rate. GAO also visited 3 states (Kentucky, Massachusetts, and Pennsylvania); reviewed relevant documents such as federal laws and regulations, and HHS guidance; and interviewed HHS officials. GAO did not assess states' compliance with CAPTA requirements. What GAO Found All states reported adopting, to varying degrees, policies and procedures regarding health care providers notifying child protective services (CPS) about infants affected by opioids or other substances. Under the Child Abuse Prevention and Treatment Act (CAPTA), as amended, governors are required to provide assurances that the states have laws or programs that include policies and procedures to address the needs of infants affected by prenatal substance use. This is to include health care providers notifying CPS of substance-affected infants. In response to GAO's survey, 42 states reported having policies and procedures that require health care providers to notify CPS about substance-affected infants and 8 states reported having policies that encourage notification. The remaining 1 state has a policy requiring health care providers to assess the needs of mothers and infants and if they conclude that infants are at risk for abuse or neglect, CPS is notified. In response to GAO's survey, 49 states reported that their CPS agency has policies to develop a plan of safe care; 2 reported not having such a requirement. Under CAPTA, states are required to develop a plan of safe care for substance-affected infants. Although not defined in law, a plan of safe care generally entails an assessment of the family's situation and a plan for connecting families to appropriate services to stabilize the family and ensure the child's safety and well-being. States reported that plans typically address the infant's safety needs, immediate medical needs, and the caregiver's substance use treatment needs. However, officials in the 3 states GAO visited noted challenges, including uncertainty about what to include in plans and the level of intervention needed for infants at low risk of abuse or neglect. The Department of Health and Human Services (HHS) has provided technical assistance and guidance to states to implement these CAPTA requirements. Most states reported in GAO's survey that additional guidance and assistance would be very or extremely helpful for addressing their challenges. 
Nevertheless, HHS officials told GAO that the agency does not anticipate issuing additional written guidance, but that states can access technical assistance through their regional offices and the National Center on Substance Abuse and Child Welfare—a resource center funded by HHS. However, of the 37 states that reported on the helpfulness of the assistance they have received, 19 said it was only moderately helpful to not helpful. States offered suggestions for improving the assistance, such as developing substance abuse training materials for staff and holding video conferences with other states to share information. In October 2017, HHS officials explained that some states have submitted plans that include details on how they are addressing the CAPTA requirements. HHS officials reported that some of the plans submitted to date indicated that states are not meeting the requirements and those states have been asked to develop program improvement plans. Without more specific guidance and assistance to enhance states' understanding of CAPTA requirements and better address known challenges such as the ones described in this report, states may miss an opportunity to provide more effective protections and services for the children and families most in need. What GAO Recommends GAO recommends that HHS provide additional guidance and technical assistance to states to address known challenges and enhance their understanding of requirements. HHS did not concur with the recommendation. As discussed in the report, GAO continues to believe that added guidance would benefit states.
Background Over the past decade, the federal government has expanded financial assistance to public and private stakeholders for preparedness activities through various grant programs administered by DHS through its component agency, FEMA. Through these grant programs, DHS has sought to enhance the capacity of states, localities, and other entities, such as ports or transit agencies, to prevent, respond to, and recover from a natural or manmade disaster, including terrorist incidents. Two of the largest preparedness grant programs are the State Homeland Security Program and the Urban Areas Security Initiative. The State Homeland Security Program provides funding to support states' implementation of homeland security strategies to address the identified planning, organization, equipment, training, and exercise needs at the state and local levels to prevent, protect against, respond to, and recover from acts of terrorism and other catastrophic events. FEMA allocated $402 million for the program in fiscal year 2017. The Urban Areas Security Initiative provides federal assistance to address the unique needs of high-threat, high-density urban areas, and assists the areas in building an enhanced and sustainable capacity to prevent, protect, respond to, and recover from acts of terrorism. FEMA allocated $580 million for the program in fiscal year 2017. The State Homeland Security Program (SHSP), awarded to the nation's 56 states and territories, and the Urban Areas Security Initiative (UASI), awarded to urban areas based on DHS's risk assessment methodology, are the largest of the preparedness grant programs, accounting for about 60 percent of fiscal year 2017 grant funding. See figure 1 for a history of funding levels for these programs. Eligible candidates for the FY 2017 UASI program are determined through an assessment of relative risk of terrorism faced by the 100 most populous metropolitan statistical areas in the United States, in accordance with the Homeland Security Act of 2002, as amended. FEMA Has Strengthened Its Coordination, Oversight, and Assessments of Grants, But Challenges Remain in the Effectiveness of FEMA's Grant Management FEMA Has Taken Some Steps to Address Coordination Challenges Between Headquarters and Regional Offices, But Some Challenges Still Remain In February 2016, we reported that FEMA had taken some steps, but had not fully addressed longstanding preparedness grant management coordination challenges between its headquarters and regional offices. We found that for several preparedness grant programs, FEMA headquarters staff in the Grant Programs Directorate (GPD) and regional staff share management and monitoring responsibilities. For example, we found that assessments by GPD and others since 2009 had recommended that regional offices, rather than headquarters offices, be responsible for managing and monitoring preparedness grants to avoid confusion and duplication, and to strengthen coordination with state and local grantees. Further, in July 2011, we found that GPD had efforts underway to regionalize grant management responsibilities and improve coordination of preparedness grants, and that these efforts were consistent with internal control standards. However, GPD officials reported that in 2012 the directorate changed course and decided to continue sharing grant management roles between headquarters and regions, referred to as a hybrid grant management structure. 
GPD officials told us that they changed course because, among other things, they estimated that the costs of regionalization would be greater than the annual savings FEMA identified in an earlier study, and they were concerned that inconsistent program implementation would occur across the regions; in their view, these factors outweighed the potential benefits. GPD officials at that time said they had taken steps to address coordination challenges associated with this hybrid grant management structure. However, we found in February 2016 that these challenges continue. For example, states and FEMA regional officials told us that GPD staffs in headquarters and regions did not always coordinate their monitoring visits, which can be disruptive to the state emergency management agency's day-to-day operations. FEMA regional officials also reported that GPD staffs in headquarters and regions sometimes provided inconsistent guidance to grantees. Further, while GPD officials identified some steps they plan to take to address the challenges, we found that GPD lacked a plan with time frames and goals for addressing them. We recommended that FEMA develop a plan with time frames, goals, metrics, and milestones detailing how GPD intends to resolve longstanding challenges associated with its existing hybrid grants management model, which divides responsibilities between regional and headquarters staff. FEMA, however, did not concur with our recommendation, stating that it disagreed with our characterization of longstanding challenges in managing preparedness grants. As we stated in the report, multiple assessments dating back to 2009 have reported challenges with the hybrid model. As also noted in our report, officials from four FEMA regional offices and officials from three states within those regions provided various examples of a lack of coordination between headquarters and regional staff in managing preparedness grants, including instances that took place in 2014 and as recently as September 2015. In October 2017, FEMA developed a plan—the Milestone Action Plan—to track efforts aimed at improving coordination issues associated with its hybrid grants management model, which divides responsibilities for the management of preparedness grants between regional and headquarters staff, as we recommended in February 2016. The plan describes completed, ongoing, and planned efforts taken by FEMA to improve grants management coordination, along with steps taken, goals, and time frames, among other things. For example, the plan shows that FEMA developed and finalized the Monitoring Actions Tracker in August 2016, a tool shared by GPD in FEMA headquarters and staff in regional offices. Through the tracker, GPD headquarters and regional staffs are able to view planned and completed monitoring activities related to grants management, as well as the status of any open corrective actions. In addition to developing the Milestone Action Plan, FEMA officials described other efforts taken to improve coordination issues. For example, FEMA officials told us they increased the use of an online collaboration tool, which allows for instant information sharing between GPD and the regions. By taking these steps, FEMA should be better positioned to track and evaluate efforts to improve regional coordination, as we recommended in 2016. FEMA Has Taken Steps to Increase Oversight Across Preparedness Grant Programs FEMA has been delayed in addressing the need for improved coordination among grant programs identified in our prior work. 
Specifically, we found in February 2012 that multiple factors contribute to the risk of duplication among four FEMA preparedness grant programs—the State Homeland Security Program, Urban Areas Security Initiative, Port Security Grant Program, and Transit Security Grant Program—as these programs share similar goals, fund similar projects, and provide funds in the same geographic regions. Further, we found that DHS's ability to track grant funding, specific funding recipients, and funding purposes varies among the programs, giving FEMA less visibility over some grant programs. Also, DHS's award process for some programs based allocation decisions on high-level, rather than specific, project information, which could further contribute to the risk of duplication. Although our February 2012 analysis identified no cases of duplication among a sample of grant projects, the above factors collectively put FEMA at risk of funding duplicative projects. As a result, in 2012, we included these challenges in our annual report on duplication, overlap, and fragmentation in federal programs, agencies, offices, and initiatives. FEMA initially had not taken action to fully address our concerns. We recommended in February 2012 that, as FEMA developed its new grants management information system (the Non-Disaster Grants Management System, or ND Grants, at that time), the agency collect project information with the level of detail needed to better position the agency to identify any potential unnecessary duplication within and across the four grant programs. In December 2012, FEMA officials reported that the agency intended to start collecting and analyzing project-level data from grantees in fiscal year 2014. Further, in December 2017, FEMA took actions to identify potential unnecessary duplication across four preparedness grant programs, as we recommended in February 2012. Although the development of FEMA's grants management information system is ongoing, FEMA issued guidance and adopted interim processes to help identify potential duplication across these preparedness grant programs until the system's capabilities are upgraded over the next several years. For example, in fiscal year 2014, FEMA modified a legacy grants data system to capture more robust project-level data—such as project budget data—for the Homeland Security Grant Program, which includes the State Homeland Security Grant Program and the Urban Areas Security Initiative. In addition, in fiscal year 2017, FEMA procured a software visualization tool and developed a set of standard operating procedures to assist staff in identifying potentially duplicative projects. Specifically, the visualization tool will use grant award data from the Port Security Grant Program and the Transit Security Grant Program and compare it with data from the grant programs named above to highlight ZIP codes that contain multiple projects. These projects will then be analyzed by FEMA officials. According to the standard operating procedure, if duplication is suspected within a particular geographic area, further collaborative reviews should be conducted in coordination with the Office of Chief Counsel to determine appropriate remedies. Using an interim approach to collect more specific project-level data during the grant application process and utilizing the new software visualization tool should help FEMA strengthen the administration and oversight of its grant programs until FEMA implements its long-term solution for the agency's grants management information system. 
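To illustrate the kind of ZIP code comparison the standard operating procedure describes, the following minimal sketch groups hypothetical project records by ZIP code and flags locations funded under more than one program. It is illustrative only—the field names and sample records are assumptions, not FEMA's actual data or visualization tool, and a flag indicates only potential duplication that analysts would still need to review.

```python
from collections import defaultdict

# Hypothetical project-level grant records; field names are illustrative,
# not drawn from FEMA's actual data systems.
projects = [
    {"program": "SHSP", "zip": "10001", "project": "Interoperable radios"},
    {"program": "UASI", "zip": "10001", "project": "Radio tower upgrade"},
    {"program": "PSGP", "zip": "98134", "project": "Port camera network"},
    {"program": "TSGP", "zip": "10001", "project": "Transit radio system"},
]

# Group project records by ZIP code across the preparedness grant programs.
by_zip = defaultdict(list)
for record in projects:
    by_zip[record["zip"]].append(record)

# Flag ZIP codes funded by more than one program for analyst review.
for zip_code, records in by_zip.items():
    programs = {r["program"] for r in records}
    if len(programs) > 1:
        print(f"ZIP {zip_code}: projects from {sorted(programs)} -> review")
```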
FEMA Is Validating Grant Performance Data In the area of performance assessment, we reported in June 2013 on limitations in FEMA's ability to validate the performance data it collects. Specifically, we found that two of FEMA's preparedness grant programs—the Emergency Management Performance Grants (EMPG) and Assistance to Firefighters Grants (AFG) programs—collect performance information through a variety of reporting mechanisms but face challenges in identifying verifiable program outcomes. These reporting mechanisms collect performance data used by FEMA regional offices and headquarters for different purposes. For example, headquarters focuses on the development of future program priorities and on reporting progress toward the National Preparedness Goal, while regions use program information to monitor primary grant recipients for compliance. DHS developed agency priority goals that reflect agency-wide, near-term priorities. According to FEMA officials, the EMPG and AFG programs have an indirect link to a DHS agency priority goal, as well as the National Preparedness Goal, because they support states' level of preparedness for disasters. According to FEMA officials, neither program has a standardized tool with which to validate the performance data that are self-reported by recipients; additionally, the regions are inconsistent in their approaches to verifying program performance data. We concluded that the absence of a formal, established validation and verification procedure, as directed by the Office of Management and Budget's Circular No. A-11, could lead to the collection of erroneous performance data. In our June 2013 report, we recommended that FEMA ensure that there are consistent procedures in place at the headquarters and regional levels to verify and validate grant performance data, allowing the agency to attest to the reliability of EMPG and AFG grant data used for reporting progress toward goals. DHS concurred with our recommendation and stated that FEMA would explore effective and affordable ways to verify and validate EMPG and AFG grant performance data. In April 2015, FEMA officials reported that FEMA was in the process of developing data verification and validation checks for EMPG grantee performance reporting. For example, according to FEMA officials, they have revised reporting templates and uniform table definitions to make it easier for grantees to submit accurate, complete, and consistent information on programmatic activities such as the completion of training and exercise requirements. However, these processes have not yet been fully implemented, and FEMA officials have not yet provided similar tools and checklists for the AFG program. In March 2017, FEMA grants management staff provided us with documentation on the process FEMA uses to verify and validate grantee data from the EMPG and AFG grant programs, as we recommended. As a result of having a consistent approach to verifying data, FEMA's efforts should reduce the collection of erroneous performance data. In addition, as part of our September 2016 review of FEMA's fire grant programs, we reported that FEMA officials said they planned to develop and implement a consolidated grant management system to integrate data used to manage fire grant programs with the data gathered for FEMA's other preparedness grants, and ultimately better measure the impact of fire grants on national preparedness efforts. 
Specifically, as we reported in May 2016, FEMA plans to develop and implement a new Grants Management Modernization system to provide agency-wide management for all of FEMA's disaster and preparedness grants. Further, we are currently reviewing FEMA's consolidated grant management system and plan to report on this effort later this year. FEMA Has Made Progress Assessing Its Grant Preparedness Capabilities, but Continues to Face Challenges Developing a National Preparedness System We also reported in March 2011 that FEMA needed to improve its oversight of preparedness grants by establishing a framework with measurable performance objectives for assessing urban area, state, territory, and tribal capabilities to identify gaps and prioritize investments. Specifically, we recommended that FEMA complete a national preparedness assessment of capability gaps at each level based on tiered, capability-specific performance objectives to enable prioritization of grant funding. With such an assessment, FEMA could identify the potential costs for establishing and maintaining capabilities at each level and determine what capabilities federal agencies should provide. We reported in March 2013 that FEMA had made some progress in assessing its preparedness capabilities, but continued to face challenges developing a national preparedness system that could assist FEMA in prioritizing preparedness grant funding. For example, in March 2012, FEMA issued the first National Preparedness Report, which describes progress made to build, sustain, and deliver capabilities. In April 2012, FEMA issued guidance on developing Threat and Hazard Identification and Risk Assessments (THIRA) to facilitate the self-assessments of regional, state, and local capabilities. FEMA requires state, territory, tribal, and urban area governments receiving homeland security funding to annually complete THIRAs and use the results to determine the resources required to achieve the capability targets they set for their jurisdiction. However, we found in March 2013 that FEMA faced challenges that may reduce the usefulness of these efforts. For example, the National Preparedness Report noted that while many programs exist to build and sustain preparedness capabilities, challenges remain in measuring their progress over time. According to the report, in many cases, measures do not yet exist to gauge the performance of these programs, either quantitatively or qualitatively. FEMA has taken some steps to address our recommendation. Specifically, FEMA reported in February 2018 that the agency has developed capability-specific performance objectives that will enable a national preparedness assessment of capability gaps, but it has not yet issued such an assessment. FEMA reported that it plans to implement a new methodology for some core capabilities in December 2018 and for all core capabilities by December 2019, and that it will be able to provide complete results in 2020. In addition, FEMA reported that it is developing a new THIRA methodology that will assist in measuring the effectiveness of state and urban areas' grant projects in reducing risk. According to FEMA, the new methodology will measure changes in state and urban area preparedness through the use of standardized capability targets and key indicators that will show how FEMA preparedness grants are being used to address gaps in capability targets. 
This should also lead to a better understanding of the nation's overall preparedness. Regardless, as of February 2018, FEMA had taken steps to assess preparedness capabilities, but had not yet completed a national preparedness assessment with clear, objective, and quantifiable capability requirements against which to assess preparedness, as we recommended. Developing such an assessment would help FEMA to identify what capability gaps exist at the federal level and what level of resources are needed to close such gaps. Chairman Donovan, Ranking Member Payne, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to respond to any questions you may have. GAO Contacts and Staff Acknowledgments For questions about this statement, please contact Chris Currie at (404) 679-1875 or curriec@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Aditi Archer (Assistant Director), John Vocino (Analyst-In-Charge), Dorian Dunbar, Alexandra Gebhard, Eric Hauswirth, Chuck Bausell, Heidi Nielson, and Adam Vogt. Related GAO Products
Federal Emergency Management Agency: Progress and Continuing Challenges in National Preparedness Efforts. GAO-16-560T. Washington, D.C.: April 12, 2016.
Fire Grants: FEMA Could Enhance Program Administration and Performance Assessment. GAO-16-744. Washington, D.C.: September 15, 2016.
Federal Emergency Management Agency: Strengthening Regional Coordination Could Enhance Preparedness Efforts. GAO-16-38. Washington, D.C.: February 4, 2016.
Emergency Management: FEMA Has Made Progress since Hurricanes Katrina and Sandy, but Challenges Remain. GAO-16-90T. Washington, D.C.: October 22, 2015.
Emergency Management: FEMA Collaborates Effectively with Logistics Partners but Could Strengthen Implementation of Its Capabilities Assessment Tool. GAO-15-781. Washington, D.C.: September 10, 2015.
Emergency Preparedness: Opportunities Exist to Strengthen Interagency Assessments and Accountability for Closing Capability Gaps. GAO-15-20. Washington, D.C.: December 4, 2014.
Federal Emergency Management Agency: Opportunities to Achieve Efficiencies and Strengthen Operations. GAO-14-687T. Washington, D.C.: July 24, 2014.
National Preparedness: Actions Taken by FEMA to Implement Select Provisions of the Post-Katrina Emergency Management Reform Act of 2006. GAO-14-99R. Washington, D.C.: November 26, 2013.
National Preparedness: FEMA Has Made Progress, but Additional Steps Are Needed to Improve Grant Management and Assess Capabilities. GAO-13-637T. Washington, D.C.: June 25, 2013.
Grants Performance: Justice and FEMA Collect Performance Data for Selected Grants, but Action Needed to Validate FEMA Performance Data. GAO-13-552. Washington, D.C.: June 24, 2013.
Managing Preparedness Grants and Assessing National Capabilities: Continuing Challenges Impede FEMA's Progress. GAO-12-526T. Washington, D.C.: March 20, 2012.
Homeland Security: DHS Needs Better Project Information and Coordination among Four Overlapping Grant Programs. GAO-12-303. Washington, D.C.: February 28, 2012.
2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012.
Port Security Grant Program: Risk Model, Grant Management, and Effectiveness Measures Could Be Strengthened. GAO-12-47. Washington, D.C.: November 17, 2011.
FEMA Has Made Progress in Managing Regionalization of Preparedness Grants. GAO-11-732R. Washington, D.C.: July 29, 2011.
Measuring Disaster Preparedness: FEMA Has Made Limited Progress in Assessing National Capabilities. GAO-11-260T. Washington, D.C.: March 17, 2011.
Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011.
FEMA Has Made Limited Progress in Efforts to Develop and Implement a System to Assess National Preparedness Capabilities. GAO-11-51R. Washington, D.C.: October 29, 2010.
Urban Area Security Initiative: FEMA Lacks Measures to Assess How Regional Collaboration Efforts Build Preparedness Capabilities. GAO-09-651. Washington, D.C.: July 2, 2009.
Transit Security Grant Program: DHS Allocates Grants Based on Risk, but Its Risk Methodology, Management Controls, and Grant Oversight Can Be Strengthened. GAO-09-491. Washington, D.C.: June 8, 2009.
National Preparedness: FEMA Has Made Progress, but Needs to Complete and Integrate Planning, Exercise, and Assessment Efforts. GAO-09-369. Washington, D.C.: April 30, 2009.
Homeland Security: DHS Improved its Risk-Based Grant Programs' Allocation and Management Methods, But Measuring Programs' Impact on National Capabilities Remains a Challenge. GAO-08-488T. Washington, D.C.: March 11, 2008.
Homeland Security: DHS' Efforts to Enhance First Responders' All-Hazards Capabilities Continue to Evolve. GAO-05-652. Washington, D.C.: July 11, 2005.
Homeland Security: Management of First Responder Grant Programs Has Improved, but Challenges Remain. GAO-05-121. Washington, D.C.: February 2, 2005.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study The Department of Homeland Security (DHS), through FEMA, provides preparedness grants to state, local, tribal, and territorial governments to improve the nation's readiness in preventing, protecting against, responding to, recovering from and mitigating terrorist attacks, major disasters and other emergencies. According to DHS, the department has awarded over $49 billion to a variety of DHS preparedness grant programs from fiscal years 2002 through 2017, to enhance the capabilities of grant recipients. For example, the State Homeland Security Program which awards grants to the nation's 56 states and territories, and the Urban Areas Security Initiative which awards grants to urban areas based on DHS's risk methodology, are the largest of the preparedness grant programs (see figure). This statement addresses progress and challenges in FEMA's efforts to manage preparedness grants and GAO's prior recommendations to strengthen these programs. This statement is based on prior GAO reports issued from March 2011 through February 2016 and selected updates conducted in December 2017 through April 2018. To conduct the prior work and updates, GAO analyzed relevant FEMA data and documentation and interviewed relevant officials. What GAO Found In February 2012, GAO identified coordination challenges among Federal Emergency Management Agency (FEMA) grant programs that share similar goals and fund similar projects, which contribute to the risk of duplication among the programs. GAO recommended that FEMA take steps, as it develops its new grant management system, to collect project information with sufficient detail to identify potential duplication among the grant programs. FEMA has since addressed these recommendations. Specifically, in 2014, FEMA modified a legacy grants data system to capture more robust grant project-level data, and in fiscal year 2017, procured a software tool and developed a set of standard operating procedures to assist its staff in identifying potentially duplicative projects. These actions should help FEMA strengthen the administration and oversight of its grant programs. Furthermore, FEMA is also developing a new grants management modernization system to consolidate and better manage its grants. GAO is currently reviewing the system for this Committee and will report out next year. GAO reported in March 2011 on the need for FEMA to improve its oversight of preparedness grants by establishing a framework with measurable performance objectives for assessing urban area, state, territory, and tribal capabilities to identify gaps and prioritize investments. Specifically, GAO recommended that FEMA complete a national preparedness assessment of capability gaps at each level based on tiered, capability-specific performance objectives to enable prioritization of grant funding. FEMA has taken some steps to address GAO's prior recommendation. Specifically, in February 2018, FEMA reported developing capability-specific performance objectives that will enable a national preparedness assessment of capability gaps. However, FEMA plans to finalize these efforts in 2020 and it is too early to tell how this will impact grant allocations. Until these efforts are completed, GAO will not be able to determine the extent that they address past challenges and recommendations. What GAO Recommends GAO has made prior recommendations designed to address the challenges discussed in this statement. FEMA has taken actions to address some but not all of these recommendations.
Background Estimation of Improper Payments Executive branch agencies are required to take various steps regarding improper payments under IPIA, as amended by IPERA and IPERIA, and related OMB guidance. The steps include the following:
1. reviewing all programs and activities and identifying those that may be susceptible to significant improper payments (commonly referred to as a risk assessment);
2. developing improper payment estimates for those programs and activities that the agency identified as being susceptible to significant improper payments;
3. analyzing the root causes of improper payments and developing corrective actions to reduce them for those programs and activities that the agency identified as being susceptible to significant improper payments; and
4. reporting on the results of addressing the foregoing requirements.
Figure 1 lays out these steps, as well as the major components of developing an improper payment estimate. IPERA also directs executive branch agencies' inspectors general to annually determine and report on whether their respective agencies complied with six criteria listed in the law. On an annual basis, agencies are required to develop improper payment estimates for programs that they consider susceptible to significant improper payments. This generally involves selecting a sample of program payments (or other items, such as invoices) and reviewing them in order to determine whether the relevant payments were proper. OMB guidance for developing improper payment estimates focuses on the statistical nature of the estimates and provides agencies with flexibility in developing their estimates. IPIA, as amended, provides the definition of "improper payment," with IPERIA further instructing OMB to issue guidance requiring agencies to include in the estimate all improper payments, regardless of whether those payments have been or are being recovered. OMB incorporated this requirement into Appendix C to Circular No. A-123, Requirements for Effective Estimation and Remediation of Improper Payments. In accordance with these relevant laws and OMB guidance, agencies must apply "improper payment" in the context of their programs when developing improper payment estimates. Characteristics of Programs Reviewed The 10 programs we reviewed serve a variety of purposes and are administered by various agencies across the federal government. Table 2 summarizes each of these programs. Agency Processes to Estimate Improper Payments Varied, and Some Differences May Hinder the Usefulness of the Resulting Estimates Aspects of Sample Selection, Including Sampling Approach and Age of Data, Varied Sampling Approach IPIA, as amended, requires agencies to develop statistically valid improper payment estimates or estimates that are otherwise appropriate using a methodology approved by the Director of OMB. The six agencies we reviewed reported using either statistically valid or alternative sampling approaches for the 10 selected programs, and some agencies reported additionally incorporating actual improper payment amounts into their estimates, as shown in table 3. If an agency is unable to produce a statistically valid improper payment estimate, it can use an alternative approach if approved by OMB. For example, the Department of Education (Education) reported using an alternative methodology for the Direct Loan program after conducting a cost-benefit analysis comparing use of a statistical and an alternative methodology. 
Similarly, the Department of Health and Human Services (HHS) reported using an alternative methodology for Medicaid to better manage resources needed to conduct the required reviews. In addition to their statistical approaches, two agencies reported incorporating actual improper payment amounts into the estimates for 2 of the programs we reviewed. Officials at the Department of Defense (DOD) stated that the agency calculates its Military Pay improper payment estimate by adding the amount of debts due to DOD entered into its financial system based on overpayments (i.e., debts due to DOD by a recipient of an overpayment) identified during the fiscal year to a projected estimate of improper payments. Officials at the Office of Personnel Management (OPM) stated that the agency calculates its Retirement program improper payment estimate by adding the amount of debts due to OPM entered into its financial system based on overpayments (i.e., debts due to OPM by a recipient of an overpayment) identified during the fiscal year to a projected estimate of underpayments. Data Subject to Sampling To implement their sampling approaches, agencies select a sample of data to test from a larger, specified population of data. For the six agencies we reviewed, data sampled varied by program and include payments, claims, tax returns, and pay accounts. For example, according to their policies and procedures DOD samples invoices related to payments made from 12 financial systems for Defense Finance and Accounting Service Commercial Pay, HHS samples medical claims for Medicare Fee-for-Service, and DOD samples pay accounts for Military Pay. Agencies subject specific data populations to sampling, which may not include all payments made for a program. Reasons for sampling exclusions varied across programs, as shown by the examples in table 4. Some of the selected agencies reported sampling multiple sets of data. For example, for its Direct Loan improper payment estimate, Education officials stated that the agency reviews Program Review Reports to identify improper payments in originations and also samples loan consolidation and refund payments. According to agency officials, Direct Loan origination, consolidation, and refund transactions carry different risks of improper payment. Age of Data To estimate improper payments for fiscal year 2017, the six agencies we reviewed reported sampling and testing data that varied in age from calendar year 2013 to fiscal year 2017. Figure 2 shows the range of data used. OMB guidance states that to the extent possible, data used for estimating improper payments should coincide with the fiscal year being reported, but agencies may use a different 12-month reporting period with approval from OMB. OMB staff acknowledged there are costs and benefits to sampling newer or older data. OMB staff stated that although they review agencies’ sampling and estimation plans, they defer to the agencies regarding the appropriateness of the age of data used to estimate improper payments. OMB staff stated that they approve the timeframe of the data used in alternative methodologies as part of the approval of the methodology overall, whereas OMB silence provides tacit approval (i.e., no communication to the agency) for statistically valid methodologies. 
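As a rough illustration of the calculation approaches described above, the sketch below pairs a simple sample-based projection with the additive "projected estimate plus identified overpayment debts" formula that DOD and OPM officials described. All dollar figures are hypothetical, and the simple random sampling design is an assumption; the agencies' actual sampling and estimation plans are more complex (for example, they may use stratified designs).

```python
# Illustrative only: hypothetical figures and a simple random sample design.

# Sample-based projection: estimate the improper payment rate from a sample
# of reviewed payments and extrapolate it to total program outlays.
sampled_dollars = 5_000_000          # dollars reviewed in the sample
improper_in_sample = 150_000         # improper dollars found in the sample
total_outlays = 2_000_000_000        # total program outlays for the period

improper_rate = improper_in_sample / sampled_dollars   # 3.0 percent
projected_improper = improper_rate * total_outlays     # $60 million projected

# Additive approach described for DOD Military Pay and OPM Retirement:
# add actual overpayment debts recorded in the financial system during the
# fiscal year to the projected estimate produced by the reviews.
actual_overpayment_debts = 12_000_000
reported_estimate = projected_improper + actual_overpayment_debts

print(f"Projected estimate: ${projected_improper:,.0f}")
print(f"Reported estimate:  ${reported_estimate:,.0f}")
```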
Processes for Identifying Improper Payments Varied by Program, Including Consideration of Eligibility and Treatment of Nonresponses Testing Processes After agencies determine what subsets of data and types of transactions to review, they generally test the data and calculate their improper payment estimates. Testing processes varied among the 10 programs, with some of the six agencies using processes designed specifically to estimate improper payments and others leveraging existing processes designed for other purposes. Some of the selected agencies reported using multiple testing processes and combining the results to develop a program's improper payment estimate. For example, according to their policies and procedures, the Direct Loan estimate comprises three component estimates for loan originations, consolidations, and refunds, and the Medicaid estimate includes fee-for-service, managed care, and eligibility components. Table 5 summarizes the processes used by the six agencies we reviewed. Although agencies' testing processes varied, most included steps to address aspects of eligibility of beneficiaries, goods, or services—a key component of determining the appropriateness of a payment—in their programs. For example, according to their policies and procedures:
• for Medicare Fee-for-Service, reviewers examine the medical necessity, compliance with documentation requirements, and coding of services provided, among other things;
• for the Earned Income Tax Credit (EITC), auditors examine whether the taxpayer properly reported income, whether the taxpayer meets eligibility criteria, including income and qualifying child requirements, and, among other things, whether the taxpayer is subject to a disallowance period on receiving EITC;
• for Medicaid, reviewers examine fee-for-service claims and managed care payments to determine the eligibility status of the beneficiary and the provider, as well as support for the medical necessity of fee-for-service claims, among other things;
• for Old-Age, Survivors, and Disability Insurance (OASDI), reviewers examine factors to support the beneficiary's eligibility, including, among other things, citizenship, relationship (in the case of survivor benefits), and receipt of other government benefits; and
• although Education's Direct Loan program reviews can vary in scope, they may include, among other things, steps to verify educational institution eligibility (such as licensing and accreditation) and student eligibility (such as enrollment status and satisfactory academic progress).
In contrast, per their policies and procedures, eligibility is not tested for DOD's Military Pay or the overpayment component of OPM's Retirement estimate. DOD Military Pay. DOD reported using the results of monthly payment reviews to calculate a projected improper payment amount for Military Pay. However, DOD's policies and procedures do not require a review of servicemember eligibility for special pay or allowances as part of these monthly reviews. DOD's Standard Operating Procedures (SOP) direct reviewers to recalculate payments to servicemembers solely based on the pay account data included in DOD systems (i.e., to verify that components of servicemember pay were calculated appropriately). DOD's SOP does not direct reviewers to verify that servicemembers were eligible for special pay or allowances by verifying the information included in the pay account (such as pay grade) with supporting documentation. 
According to DOD officials, reviewers may investigate potential inconsistencies in pay account data identified during their reviews—which may include eligibility issues—but this process is not consistently performed or documented. According to DOD officials, an example of a potential inconsistency is when a servicemember receives jump pay (a hazard pay for parachute jumps) but is located at a site where no jump activity occurred. According to DOD officials, to help compensate for the limitations of its monthly reviews, DOD calculates the final reported Military Pay improper payment estimate by adding actual debts due to DOD (related to overpayments) identified during the year to the projected estimate of the monthly reviews. DOD identifies the actual overpayments through various methods, including other postpayment reviews and servicemember self- reporting. Standards for Internal Control in the Federal Government states that management should identify, analyze, and respond to risks related to achieving the defined objectives. DOD has acknowledged internal control deficiencies related to the Military Pay program, which—if addressed in improper payment testing—could have an impact on the program’s improper payment rate. However, these deficiencies were identified through other internal control reviews not related to estimating improper payments. For the purposes of estimating improper payments, DOD has not fully assessed the risks in its Military Pay program and evaluated whether its approach for estimating improper payments effectively addresses these risks. As a result, DOD’s process for estimating Military Pay improper payments may not reflect significant risks of improper payment in the program, specifically whether servicemembers are eligible for the special pay or allowances they receive, calling into question the improper payment estimate and its usefulness for developing effective corrective actions. OPM Retirement. OPM relies on its existing Quality Assurance (QA) process to estimate Retirement underpayments. The QA process is designed to determine whether new Retirement claims (i.e., claims paid for the first time) have been adjudicated correctly. Therefore, only new Retirement claims are sampled and tested for accuracy. OPM applies historical results of QA testing to older claims; however, these historical results do not reflect any different risks of underpayment that the older claims may face. Although OPM’s QA process also produces an estimate of overpayments, the agency’s policies and procedures instead use actual debts due to OPM (related to overpayments) that were identified during the fiscal year as its overpayment amount (i.e., the overpayment amount does not reflect any testing of Retirement payments to verify eligibility or accuracy). These actual overpayments represent amounts that have been identified through various means, such as inspector general fraud referrals. OPM officials stated that the agency uses actual amounts because the QA estimate may overstate overpayments. However, the fiscal year 2016 QA overpayment estimate was lower than the actual amount of debts identified as due to OPM. Standards for Internal Control in the Federal Government states that management should identify, analyze, and respond to risks related to achieving the defined objectives. 
OPM has not fully assessed the risks of improper payments in its Retirement program—particularly related to the risk of underpayments in older claims and the risk of overpayments— and evaluated whether its approach for estimating improper payments effectively addresses these risks. As a result, OPM’s processes for estimating Retirement improper payments may not reflect significant risks of improper payment in the program, calling into question the improper payment estimate and its usefulness for developing effective corrective actions. OMB guidance. OMB issues guidance for agencies to implement various requirements of the improper payment laws. Specifically, OMB is required by IPERIA to issue guidance to set standards for agencies to follow in determining the underlying validity of sampled payments to ensure that amounts being billed, paid, or obligated for payment are proper. Although existing OMB guidance addresses requirements for sampling, it does not address how agencies test to identify improper payments, such as using a risk-based approach to help ensure that key risks of improper payments, like eligibility, are addressed through testing processes. Without such guidance, there is increased risk that agencies’ processes may not address key risks of improper payments in their programs—for example, the cases of DOD Military Pay and OPM Retirement described above—calling into question the improper payment estimates for such programs and their usefulness for developing effective corrective actions. Treatment of Insufficient Documentation According to OMB guidance, when an agency’s review is unable to determine whether a payment was proper because of insufficient or lack of documentation, the payment must be considered an improper payment. Among the six agencies and 10 programs we reviewed, treatment of insufficient documentation varied by program, as did the classification of these issues for root cause reporting in the AFRs. HHS’s programs were the only ones we reviewed that reported improper payments in the insufficient documentation root cause category for fiscal years 2016 or 2017, as shown in table 6. Some agencies stated that they report insufficient documentation in other root cause categories that they consider more appropriate. For example, Education officials stated that for the Direct Loan program, payments that lack sufficient supporting documentation may be placed in the “Administrative or Process Error Made by Other Party” root cause category. In these cases, a third party—such as a loan servicer—is unable to provide sufficient documentation supporting that the sampled payment was proper. OMB guidance states that in cases where the agency believes that more than one root cause category might be suitable, the agency should determine which category it believes to be the most appropriate. Additionally, some agencies stated that the “insufficient documentation” category was not always relevant when they recreated sampled cases to estimate a program’s improper payments. For example, according to officials, to complete an OASDI stewardship review of a sampled case, a Social Security Administration (SSA) quality reviewer reviews the documentation related to the original determination and then independently re-develops all factors of the payment and interviews the associated beneficiary. 
According to agency officials, insufficient documentation would not apply as all improper payments identified in the stewardship sample are supported by documentation and payment has been verified in all reviewed cases. As noted previously, the processes for estimating DOD Military Pay and OPM Retirement improper payments were limited, and these limitations may have an impact on the agencies’ ability to identify improper payments related to insufficient documentation. Treatment of cases of nonresponse. Some agencies contact outside entities—such as payees or beneficiaries—as part of their improper payment testing processes. Among the six agencies we reviewed, treatment of cases of nonresponse differed. For example: SSA officials stated that in cases where quality reviewers do not receive responses from OASDI beneficiaries they contact, they exclude the cases from review (unless the reviewer identifies an improper payment in the initial review that is completed prior to reaching out to the beneficiary). For EITC improper payment estimation purposes, the Internal Revenue Service (IRS) stated that the agency does not consider the sampled payment associated with a nonresponse case to be proper or improper. It sets the sampling weight of nonresponse cases to zero and adjusts the sampling weights of respondents upward to account for the nonresponse cases. IRS’s methodology assumes nonresponse and response cases have an equal likelihood of improper payment. For Medicare Fee-for-Service and Medicaid, HHS’s policies and procedures consider payments associated with nonresponse cases to be improper. OMB guidance states that when an agency’s review is unable to discern whether a payment was proper as a result of insufficient or lack of documentation, this payment must be considered an improper payment. However, it does not specifically address the appropriate treatment of nonresponse cases for improper payment estimation purposes. As a result, without clearer guidance there is increased risk that agencies’ improper payment estimates may be understated and that estimates for similar programs may not be comparable. Except for IRS, Selected Agencies Generally Reported Using Law and OMB Guidance to Calculate Improper Payment Estimates Calculation of Improper Payment Estimates When agencies identify improper payments, they must determine the amount of the payment that was improperly made. The six agencies we reviewed generally reported using the definition of improper payment in relevant laws and OMB guidance to determine the amount of improper payments identified. OMB guidance provides agencies with instructions on how to calculate the amount of improper payments. However, when developing its improper payment estimate for EITC, IRS subtracted overpayments that were paid out and later recovered. By subtracting recovered overpayments, IRS excluded them from the EITC improper payment estimate. For 2013—the tax year used to produce the fiscal year 2017 improper payment estimate—IRS estimated that $1.2 billion in EITC overpayments would be recovered. IPERIA directed OMB to provide guidance that requires agencies to include all improper payments in their improper payment estimates, regardless of whether they have been or are being recovered. Although the OMB guidance was revised in October 2014 to implement this requirement, IRS has not updated its estimation methodology for EITC. 
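A small numeric sketch can illustrate two of the practices described above: the reweighting that IRS reported applying to nonresponse cases and the effect of netting out recovered overpayments. The figures, the equal initial weights, and the variable names below are hypothetical assumptions for illustration, not IRS data or its actual estimation program.

```python
# Hypothetical audit results for a simple sample with equal initial weights.
# Each tuple: (responded?, improper dollars found on the sampled payment).
cases = [(True, 0.0), (True, 800.0), (False, None), (True, 0.0), (False, None)]
initial_weight = 100_000      # population payments represented by each case

respondents = [c for c in cases if c[0]]

# Reweighting as described: nonresponse cases get weight zero and respondent
# weights are scaled up so the sample still represents the population --
# implicitly assuming respondents and nonrespondents are equally likely to
# involve improper payments.
adjusted_weight = initial_weight * len(cases) / len(respondents)
gross_overpayments = sum(c[1] * adjusted_weight for c in respondents)

# Effect of subtracting overpayments expected to be recovered (the practice
# described for EITC): the reported estimate is smaller than the gross amount.
expected_recoveries = 20_000_000      # hypothetical
net_estimate = gross_overpayments - expected_recoveries

print(f"Gross estimated overpayments: ${gross_overpayments:,.0f}")
print(f"Estimate net of recoveries:   ${net_estimate:,.0f}")
```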
By not updating its guidance and continuing to remove EITC overpayments that may be subsequently recovered, IRS is understating its improper payment estimate and potentially limits its ability to address these types of improper payments before they occur. Conclusions Improper payments are a long-standing, significant problem in the federal government. Estimation of improper payments is key to understanding the extent of the problem and to developing effective corrective actions to address it. Among the six agencies we reviewed, processes to estimate improper payments in their programs varied, and certain differences in these processes may affect the quality of the resulting estimates and consequently these agencies’ efforts to reduce improper payments. Specifically, policies and procedures for DOD’s Military Pay and OPM’s Retirement programs’ improper payment estimation methodologies do not address certain key risks, like eligibility, in part because these agencies have not fully assessed their processes. Further, although OMB guidance addresses requirements for sampling, it does not address how agencies test to identify improper payments. Without such assessments and guidance, there is increased risk that agencies’ processes may not address key risks of improper payments in their programs, calling into question the improper payment estimates for such programs and their usefulness for developing effective corrective actions. Additionally, for agencies we reviewed that contact outside entities as part of their improper payment estimation processes, the treatment of cases of nonresponse varied. OMB guidance does not specifically address the appropriate treatment of nonresponse cases for improper payment estimation purposes. Without clearer guidance there is increased risk that agencies’ improper payment estimates may be understated and that estimates for similar programs may not be comparable. Finally, although IPERIA directed OMB to provide guidance that requires agencies to include all improper payments in their improper payment estimates, regardless of whether they have been or are being recovered, IRS has not updated its processes to reflect the change. By not updating its guidance and continuing to remove EITC overpayments that may be subsequently recovered, IRS is understating its improper payment estimate and potentially limits its ability to address these types of improper payments before they occur. Recommendations for Executive Action We are making two recommendations to the Director of OMB that have government-wide implications and specific recommendations to DOD, OPM, and IRS regarding their programs included in this review. The Director of OMB should develop guidance on how agencies test to identify improper payments, such as using a risk-based approach to help ensure that key risks of improper payments, such as eligibility, are addressed through testing processes. (Recommendation 1) The Director of OMB should develop guidance clarifying the appropriate treatment of nonresponse cases during improper payment testing. (Recommendation 2) The Under Secretary of Defense (Comptroller) should assess the processes for estimating Military Pay improper payments to determine whether they effectively address key risks of improper payments— including eligibility for different types of pay and allowances—and take steps to update the processes to incorporate key risks that are not currently addressed. 
(Recommendation 3) The Director of OPM should assess the processes to estimate Retirement improper payments to determine whether they effectively address key risks of improper payments—including eligibility and whether older claims face different risks of improper payments than new claims—and take steps to update the processes to incorporate key risks that are not currently addressed. (Recommendation 4) The Commissioner of IRS should update IRS’s improper payment estimation methodology to not exclude recovered overpayments from its EITC improper payment estimate. (Recommendation 5) Agency Comments and Our Evaluation We provided a draft of this report for comment to OMB, DOD, Education, HHS, Treasury, OPM, SSA, and USDA. OMB provided oral comments, which are summarized below. OPM, DOD, and IRS provided written comments, which are reproduced in appendixes II through IV, respectively. Education, HHS, SSA, and USDA did not provide written comments on the draft report. In addition, HHS, IRS, OMB, OPM, and SSA provided technical comments, which we have incorporated, as appropriate. In oral comments provided on April 30, 2018, a Senior Policy Advisor in OMB’s Office of Federal Financial Management stated that OMB partially agreed with our first recommendation and agreed with our second recommendation. Regarding the first recommendation, the Senior Policy Advisor stated that OMB should not have to develop more specific guidance as each program and activity has its own risks. Instead, the Senior Policy Advisor said that inspectors general are better equipped and positioned to review the sampling and estimation plans as part of their annual IPERA compliance audits and that agencies, their statisticians, and inspectors general should work out the best testing procedures for their agencies. The Senior Policy Advisor added that OMB could provide suggestions, during its annual town hall meeting related to improper payments, for areas that inspectors general may consider. Although we agree that programs and activities may face different risks of improper payment, we continue to believe that guidance from OMB on how agencies test to identify improper payments—such as directing agencies to take a risk-based approach in developing their testing procedures—could help ensure that agencies address the specific risks they identify when developing improper payment estimates. Further, such guidance could also help ensure that testing processes are designed to address an agency’s identified risks before the estimate is developed, whereas an inspector general’s review—as well as related recommendations for improvement—would generally occur after the agency’s improper payment estimate had been developed and reported. Regarding the second recommendation, the Senior Policy Advisor noted that OMB plans to update its guidance to direct agencies to treat nonresponse cases as improper payments and to include a new category for tracking such cases. In its written comments, OPM partially concurred with our recommendation to assess the processes to estimate Retirement improper payments to determine whether they effectively address the key risks of improper payments. OPM agreed to conduct an audit of older claims to determine if they face different risks than new claims. However, OPM did not agree with the part of the recommendation to assess the risk of improper payments related to eligibility in the estimation process. OPM stated that eligibility is determined before annuity or survivor benefits are fully adjudicated.
However, the objective of an improper payment estimate is to determine whether payments were made properly. To do so, an agency should determine whether the payee was eligible for the payment that was made, among other things. As such, we continue to believe that the recommendation—including the assessment of the risk of improper payments related to eligibility—is warranted. In their written comments, DOD and IRS both agreed with our recommendations directed to them and described the steps they plan to take to implement them. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Defense, Education, Health and Human Services, and the Treasury; the Director of the Office of Personnel Management; the Administrator of the Social Security Administration; the Director of the Office of Management and Budget; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2623 or davisbh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Reported Improper Payment Estimates by Agency and Program for Fiscal Year 2017 Table 7 lists the fiscal year 2017 improper payment estimates by agency and program, as reported by agencies in their fiscal year 2017 agency financial reports and compiled on the Office of Management and Budget’s payment integrity website, paymentaccuracy.gov. Appendix II: Comments from the Office of Personnel Management Appendix III: Comments from the Department of Defense Appendix IV: Comments from the Internal Revenue Service Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Phillip McIntyre (Assistant Director), James M. Healy (Auditor in Charge), Daniel Flavin, and Fabiola Torres made key contributions to this report.
Why GAO Did This Study Improper payments—which include payments that should not have been made or were made in an incorrect amount—are a long-standing, significant problem in the federal government, estimated at almost $141 billion for fiscal year 2017. Executive branch agencies are required to annually estimate improper payments for certain programs. Estimation of improper payments is key to understanding the extent of the problem and to developing effective corrective actions. Relevant laws and guidance provide agencies flexibility in developing estimates. This report describes agencies' processes to estimate improper payments in selected programs for fiscal year 2017 and the extent to which certain differences in these processes can affect the usefulness of the resulting estimates. GAO selected 10 programs across six agencies with the largest reported program outlays in fiscal years 2015 and 2016. For these programs, GAO reviewed relevant laws and guidance, analyzed agencies' policies and procedures, and interviewed officials at relevant agencies and OMB staff. What GAO Found The six agencies GAO reviewed reported taking various approaches related to key components of estimating improper payments—shown in the figure below—for 10 selected programs, which collectively reported outlays of over $2.5 trillion for fiscal year 2017. Sample selection. Eight of the 10 programs GAO reviewed reported using statistically valid approaches, and the remaining 2 reported using alternative methodologies approved by the Office of Management and Budget (OMB). The sampled data elements varied, including payments, medical claims, and tax returns. The age of the data used to develop fiscal year 2017 improper payment estimates also varied, ranging from calendar year 2013 to fiscal year 2017. Identification of improper payments. Some of the six agencies reported using processes designed specifically to estimate improper payments, whereas others reported leveraging existing reviews. These agencies' policies and procedures include a review of aspects of eligibility, except for those related to the Department of Defense's (DOD) Military Pay and the Office of Personnel Management's (OPM) Retirement overpayments. DOD and OPM have not fully assessed whether their estimation processes effectively consider key program risks. OMB guidance does not specifically address how agencies are to test to identify improper payments, such as using a risk-based approach to help ensure that key risks of improper payments are addressed. The six agencies also varied in the treatment of insufficient documentation, both in identifying and in reporting the root causes of improper payments. For the agencies that contact entities outside the agency to estimate improper payments, the treatment of nonresponse differed, with one agency including nonresponses as improper payments and another generally excluding the nonresponse cases from review. Although OMB guidance states that agencies should treat cases of insufficient documentation as improper payments, it does not specifically address the treatment of nonresponse cases. Calculation of the improper payment estimate. The six agencies generally reported using law and OMB guidance to calculate improper payment estimates for the selected programs, except for the Earned Income Tax Credit (EITC). The Internal Revenue Service (IRS) removed overpayments that were recovered when developing its estimate. OMB guidance requires agencies to include recovered amounts in their estimates. 
Removing these overpayments understates the EITC improper payment estimate and may limit IRS's ability to develop corrective actions to prevent improper payments. What GAO Recommends GAO recommends that OMB develop guidance on treatment of nonresponse cases and testing to identify improper payments, that DOD and OPM assess their estimation processes, and that IRS revise its methodology to not exclude recovered payments from its estimate. All of the agencies either agreed or partially agreed with the specific recommendations to them. GAO believes that the actions are warranted, as discussed in the report.
gao_GAO-18-378
gao_GAO-18-378_0
Background Adverse Medical Events DHA requires the military services and NCR to categorize adverse medical events by severity, using seven categories defined by the Agency for Healthcare Research and Quality (AHRQ), ranging from unsafe condition to death. (See table 1.) MTF personnel must enter all adverse medical events in DHA’s JPSR system, which was implemented in June 2011 in response to a statutory mandate for the MHS to establish a patient care error reporting and management system. The JPSR system is intended to provide ways to facilitate the self-reporting, collection, and aggregation of adverse medical event data across the MHS. The system includes prompts for information about factors that may have contributed to the event, such as medication or equipment, as well as the assignment of a severity category. From 2013 through 2016, the total number of reported adverse medical events in the JPSR system increased from over 76,000 to about 108,000. When analyzing adverse medical events, DHA groups the data into three categories—near miss, no harm, and harm. The highest increase was in the near miss category (about 36,000 to 56,000) while the other two categories increased to a lesser extent. According to an internal DHA publication, a higher increase in near miss events alongside a decrease in harm and no harm events is considered a positive trend because it shows that more potential adverse medical events are being detected before they reach the patient. (See fig. 1.) Sentinel Events The most severe types of adverse events are called sentinel events. In March 2015, DOD issued a memo that revised its previous definition of a sentinel event, which was an unexpected occurrence involving death or serious physical or psychological injury or risk. The revised definition states that a sentinel event is a patient safety event (not primarily related to the natural course of the patient’s illness or underlying condition) that results in death, permanent harm, or severe temporary harm. The revised definition also added a list of events outlined by the Joint Commission and the National Quality Forum that go beyond those that result in unexpected death or serious physical or psychological harm to the patient. (See app. I for the revised definition of sentinel events.) From 2013 through 2016, DHA’s data showed an increase in the total number of reported sentinel events—both medical and dental—from 121 to 319. Medical sentinel events approximately doubled from 101 to 206, while dental sentinel events increased more than fivefold from 20 to 113. (See fig. 2.) The sharp increase in events in 2015 may have been influenced by DHA’s revised definition of sentinel events as well as the Army’s inclusion of dental events that meet sentinel event criteria. A DHA internal publication also noted that a culture shift in patient safety reporting could have contributed to this increase. As with all adverse medical events, MTF personnel must enter sentinel events into the JPSR system; however, sentinel events have additional reporting requirements that must be met within specified time frames. For example, DHA policy requires MTF officials to report sentinel events to their respective military service or NCR within 24 hours after they become aware of the event. (See fig. 3, step 1.) MTFs also must report to and comply with sentinel event reporting requirements established by the Joint Commission. 
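As a rough illustration of the severity grouping and the 24-hour notification window described above, the short Python sketch below classifies a reported event and computes its reporting deadline. The field names, the partial severity-to-group mapping, and the helper functions are illustrative assumptions, not the JPSR system's actual data model; the full seven-category harm scale appears in table 1.

from datetime import datetime, timedelta

# Partial, illustrative mapping from an AHRQ-style severity category to the three
# groups DHA uses when analyzing the data (near miss, no harm, harm). Only the
# endpoints of the seven-category scale are named in the text above.
SEVERITY_TO_GROUP = {
    "unsafe condition": "near miss",
    "near miss": "near miss",
    "no harm evident": "no harm",
    "death": "harm",
}

def analysis_group(severity):
    return SEVERITY_TO_GROUP.get(severity.lower(), "unclassified")

def sentinel_notification_due(aware_time):
    # DHA policy: an MTF must report a sentinel event to its military service or NCR
    # within 24 hours of becoming aware of the event.
    return aware_time + timedelta(hours=24)

event = {"severity": "Death", "aware": datetime(2016, 5, 1, 9, 30)}
print(analysis_group(event["severity"]))          # harm
print(sentinel_notification_due(event["aware"]))  # 2016-05-02 09:30:00

The Joint Commission's reporting requirements, described next, add further follow-up steps beyond this notification window.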
These requirements include the development and submission of an RCA report for each sentinel event to identify the causal and contributory factors associated with the event as well as the corrective actions needed to prevent future incidents. The military services and NCR submit copies of their RCA reports to DHA, which rates the corrective actions included in each RCA report as stronger, intermediate, or weaker based on an estimation of their effectiveness. (See fig. 3, step 2.) DHA uses commercial process improvement software called TapRooT to assist with the development of RCA reports, and DHA requires all MTFs to use a methodology for its RCA reports that is currently supported by this software. Additionally, once the Joint Commission approves an RCA report and its associated corrective action plan, it may require the preparation of an MOS report that assesses the corrective actions 4 months after an RCA report is submitted to determine whether the implementation of corrective actions and outcome measures was successful. Unlike RCA reports, these reports are only required for selected sentinel events as determined by the Joint Commission. DOD’s March 2015 memo that revised the definition of sentinel events contained an additional requirement for the military services and NCR to submit copies of reports on the implementation of corrective actions to DHA. (See fig. 3, step 3.) DHA officials told us that MTFs could submit their MOS reports to meet this requirement. For this report we use the term MOS report when referring to this requirement. Transition of MTF Administrative Responsibilities to DHA Responsibility for the delivery of care in the MHS is shared among the Office of the Assistant Secretary of Defense (Health Affairs), DHA, the military service medical commands, and NCR’s medical directorate. MTFs are currently under the direction and control of the Army Medical Command, the Navy Bureau of Medicine and Surgery, and the Air Force Major Commands. MTFs within the NCR are under the direction and control of the NCR medical directorate, which reports to DHA. (See fig. 4.) The NDAA 2017 included a provision that requires the Director of DHA to be responsible for the administration of every MTF beginning October 1, 2018. This responsibility includes budgetary matters, patient safety activities, information technology, and health care administration and management, among other things. As part of the patient safety activities, DHA officials will assume responsibility for adverse medical event reporting. As required, DHA submitted initial plans to Congress in both March and June 2017 about how it plans to implement its new responsibilities. In September 2017, we reported that DHA’s plans summarize its new roles and responsibilities at a high level and that a significant amount of work remained to complete the implementation plan. On March 30, 2018, DOD submitted an additional implementation plan and stated that its final implementation plan will be completed by June 30, 2018. The Military Services’ and NCR’s Adverse Medical Event Policies Do Not Consistently Align with DOD’s Policies, but Transition to DHA’s Policies Is Planned Policies established by the military services and NCR for reporting adverse medical events are developed to implement DOD’s policies— which tend to be broad—and may include additional requirements specific to their branch of military service. 
However, we found that aspects of these policies do not consistently align with DOD’s policies, including the definitions for adverse medical events and sentinel events, as well as requirements for entering events into the JPSR system. (See table 2.) Definition of adverse medical event. The Navy uses DOD’s definition of an adverse medical event—which includes events that may or may not result in harm to the patient. However, the Army, Air Force, and NCR defined this term more narrowly, to include only an event that causes actual harm to the patient. While the difference in these definitions could potentially result in the underreporting of events, officials from all four of the MTFs we visited told us that the discrepancy does not have much of an impact because the individuals who report these events—MTF personnel—are unlikely to be aware of the difference and likely follow the broader DOD definition. Policy on entering events in the JPSR system. Only NCR’s policy states that adverse medical events should be entered into the JPSR system. However, Army, Navy, and Air Force officials as well as officials from one MTF we spoke with stated that they record all adverse medical events in the JPSR system even though their policies do not require it. Policy on reviewing adverse medical events. NCR and Air Force policies, which align with DOD’s policy, require a review of an adverse event that is based on whether there is harm to the patient. In contrast, Army and Navy policies do not require that an adverse medical event be reviewed on the basis of whether there is harm to the patient, but they do require the event to be reviewed for the level of severity and probability of recurrence. However, Navy officials told us that reviewing an event for severity includes an assessment of harm to the patient even though this is not clearly stated in their policy. Additionally, all of the MTF officials we interviewed said that the JPSR system requires them to review an adverse medical event on the basis of whether there is harm to the patient and to assign a harm scale category. Memorandum that revised the definition of a sentinel event. Only the Army’s draft policy aligned with DOD’s March 2015 revised definition of sentinel events. However, MTF officials from the other military services and NCR told us that even though the revised definition was not in their policies, they were aware of the memo and were using this definition. Memorandum that requires the military services and NCR to submit copies of their reports on the implementation of corrective actions to DHA. The Army’s draft policy that aligned with DOD’s revised definition of sentinel events also included a section requiring the submission of these reports to DHA. The policies of the other military services and NCR do not include this requirement. However, officials from the other military services we interviewed told us that they are aware of this requirement and are submitting MOS reports to meet this requirement. NCR officials told us that they are aware of this requirement but have not begun submitting these reports. In March 2017, DOD’s senior military medical leadership published operating principles to guide the implementation of specific MHS requirements outlined in the NDAA 2017. One of the operating principles to guide the transition of MTF administrative responsibilities to DHA requires DHA to create all health care policies for the direct care system (the MTFs) to ensure greater consistency and eliminate duplicative governance. 
As a result, the military services and NCR will no longer be establishing their own policies. According to DHA officials, the transition for DHA to be the single policy writer for MTFs will take time, and policies issued by the military services and NCR will remain in place until they are superseded by revised DHA policies. DHA officials are in the process of updating the department’s patient safety policy through the Patient Safety Improvement Collaborative, a working group that includes patient safety representatives from all of the military services, NCR, and DHA. However, as of January 2018, DHA officials were uncertain as to when this effort would be complete. Fragmented Process for Tracking Sentinel Events and RCA Reports Impedes DHA’s Ability to Ensure It Has Received Complete Information Process Used by the Military Services, NCR, and DHA to Track Sentinel Events and RCA Reports Is Fragmented Sentinel Event Tracking We found that the process used by the military services, NCR, and DHA to track sentinel events is fragmented. (See fig. 5.) Similar to all other types of adverse events, DHA requires that sentinel events be recorded in the JPSR system. However, DHA officials told us there are additional follow-up reports and associated deadlines for sentinel events that go beyond the JPSR system’s current tracking capabilities, and as a result, officials from each of the military services and NCR told us they track sentinel events in their own tracking record outside of the JPSR system. Officials told us the military services and NCR receive reports about sentinel events from their MTFs via email, which are then entered in their respective internal tracking records and reported to DHA via email. DHA then enters and tracks the sentinel events in its own internal tracking record. DHA officials told us that they do not believe that all sentinel events are being entered in the JPSR system, and that the JPSR system does not currently have the capability to pull sentinel event data for tracking purposes. As a result, the same sentinel events are entered and tracked in two separate tracking records—DHA’s tracking record and the tracking records maintained by the military services or NCR. In a similarly fragmented process, MTFs email RCA reports—a requirement for sentinel events—separately to their respective military services or NCR, which then emails them to DHA. Although DHA requires MTFs to use a methodology currently supported by the TapRooT system to complete their RCA reports, DHA officials told us the TapRooT software is not compatible with most MTFs’ computer systems, and as a result, MTFs do not share RCA reports through this system. Instead, they told us MTFs use the methodology from the TapRooT system to prepare the RCA report as a standalone document. Officials told us MTFs then email the RCA reports to their military service or NCR, which notates the RCAs in their respective internal tracking record. The military services and NCR email the RCA reports to DHA, which notates the reports in its own internal tracking record. Fragmented Tracking Impedes DHA’s Ability to Ensure That It Has Complete Information on Sentinel Events and RCA Reports Because the process used by the military services, NCR, and DHA to track sentinel events and RCA reports is fragmented, DHA officials told us they must rely on their reconciliation process to ensure they have complete information. 
Specifically, on a monthly basis, DHA officials email separate spreadsheets of DHA’s sentinel event records to each of the military services and NCR requesting confirmation of reported sentinel events and the status of overdue RCA reports, among other information. DHA officials acknowledged that their reconciliation process is inefficient and told us that their full-time employees and contractors spend an average of 80 hours per month working on it. Additionally, officials told us that sometimes information about sentinel events and RCA reports is lost or not effectively communicated due to complexities related to routing the email submissions and to turnover in the contract staff who track and reconcile this information. The cooperation of the military services and NCR is key to this process because officials told us that DHA currently has no authority to compel a response from these entities, although this may change with the transition of MTF administrative responsibilities to DHA. DHA officials told us they sometimes do not receive a response to their emails, and in these cases, DHA assumes concurrence. In an effort to improve the reconciliation process and compliance with RCA report submission requirements, DHA officials told us that they developed a new tool called the Comprehensive Analysis Progress Tracker for all three military services and NCR. DHA officials told us this tracker shows the full cycle of each sentinel event, including which RCAs are overdue, and is available on the MHS internal website. DHA officials told us that this tracker, launched in October 2017, replaced the previous system of separate monthly reconciliation emails with individual spreadsheets for each military service and NCR. In January 2018, DHA officials told us they began using this tracker at monthly Patient Safety Improvement Collaborative meetings and will use it during monthly check-ins with the military services and NCR to discuss delayed or missing items. However, the military services and NCR cannot directly edit the Comprehensive Analysis Progress Tracker. As a result, DHA officials told us that the military services and NCR will continue to use email to submit their sentinel events and RCA reports as well as any corrections or additional information needed for the tracker, which may perpetuate previous inefficiencies. Despite Reconciliation Efforts, DHA Does Not Have Complete Information on Sentinel Events and RCA Reports Despite DHA’s efforts to reconcile its information on sentinel events and RCA reports, we identified discrepancies and missing information in its tracking record. Sentinel Event Discrepancies We found that the sentinel events in all of the military service and NCR tracking records matched DHA’s tracking record except for those of the Navy. Specifically, DHA had a record of 19 sentinel events that the Navy did not have for 2013 through 2016. DHA officials were not sure of the reason for the discrepancy between their tracking record and the Navy’s, but told us that sometimes sentinel events are reported to DHA and later determined to not be reportable, and DHA is not given the updated status of the event. Navy officials told us that although they initially reported these 19 events as sentinel, the Joint Commission informed the Navy that it did not consider these events to be sentinel after reviewing the Navy’s submission. 
Navy officials told us that they determined these events also did not meet other sentinel event criteria per DHA’s revised definition, which goes beyond the definition used by the Joint Commission. Further, Navy officials told us they informed DHA that these events had been deemed non-sentinel by the Joint Commission, and DHA’s tracking record subsequently noted this. However, DHA did not remove the events from its tracking record. RCA Report Discrepancies We found discrepancies in the number of RCA reports when comparing DHA’s internal tracking record to the military services’ and NCR’s internal tracking records. In some instances, we found that DHA had more RCA reports in its tracking record than the military services or NCR for reported sentinel events, and in other instances, DHA had fewer RCA reports in its tracking record than the military services or NCR: DHA had more RCA reports in its internal tracker than in the Army’s internal tracker for 2015 (2 more) and 2016 (1 more). DHA had fewer RCA reports than the Air Force in 2013 (3 less), 2014 (2 less), 2015 (13 less), and 2016 (1 less). Additionally, DHA had fewer RCA reports for reported sentinel events for NCR in 2015 (1 less) and 2016 (18 less). Officials with the military services and NCR told us they did not know why there were differences between their tracking records and those of DHA. However, Army and NCR officials offered potential reasons for these differences. Army officials told us that they may have fewer RCA reports than DHA because they recently transitioned their sentinel event and RCA tracking record from a spreadsheet format to a database, and some reports may not have been copied into the database. NCR officials told us their tracking record may not match DHA’s tracking record because an MTF may submit only one RCA report to DHA that covers multiple similar sentinel events, so DHA may have fewer reports documented in its internal tracking record. Missing RCA Reports For some reported sentinel events, we found that the required RCA reports had not been recorded in any tracking record for the Army, NCR, or DHA. (See table 3.) Army and NCR officials told us that they did not know why they did not have a record of an RCA report for every sentinel event in their internal tracking record. However, these officials explained that there are a number of potential reasons that RCA reports could be missing, including insufficient MTF staff to carry out these activities, and MTF officials’ confusion about the revised definition of a sentinel event. DHA officials told us that they did not know the reasons for the discrepancies between the tracking records for the military services, NCR, and DHA or for the missing RCA reports. Specifically, DHA officials did not know whether these reports were completed but not submitted to DHA or were not completed at all. They told us that they rely on the cooperation of the military services and NCR to submit these reports and cannot enforce the requirement, although this may change with the transition of MTF administrative responsibilities to DHA. Because of these discrepancies and missing RCA reports, DHA lacks critical information about why a sentinel event may have occurred and what actions, if any, MTFs should take to prevent similar incidents in the future. We have previously reported that when fragmentation or overlap exists, there may be opportunities to increase efficiency. 
In particular, our prior work identified management approaches that may improve efficiency and effectiveness, including implementing process improvement methods and technology improvements. As MTF patient safety responsibilities are transitioned to DHA, the fragmented tracking process may hamper DHA’s ability to efficiently and effectively monitor sentinel events and RCA reports, potentially leading to missed opportunities for systemic improvements. DHA’s Efforts to Ensure It Receives MOS Reports Are Limited and Impeded by Inconsistent Report Tracking and Unclear Requirements about Report Submission DHA’s Efforts to Ensure It Receives MOS Reports Are Limited and Impeded by Inconsistent Report Tracking As of September 2017, DHA had received 27 MOS reports for the 319 sentinel events that were reported in 2016. However, DHA does not know how many reports it is missing because its efforts to reconcile information for these reports have been limited. Prior to January 2018, DHA did not include MOS reports as part of its reconciliation process for sentinel events and RCA reports. However, in January 2018, DHA officials told us they added MOS reports to their new monthly reconciliation process using the Comprehensive Analysis Progress Tracker. While this tracker displays the total number of MOS reports DHA has received, it does not display whether individual reported sentinel events have an associated MOS report. Without this information, DHA may be unable to identify which MOS reports are missing. DHA officials told us that they may revise the Comprehensive Analysis Progress Tracker to follow up on MOS reports associated with specific sentinel events in the future. DHA’s efforts to identify which MOS reports are missing are further impeded by the military services’ and NCR’s inconsistent tracking efforts. Specifically, the military services and NCR have been tracking the submission of their MOS reports in different ways or not at all. Army officials had told us that the completion of MOS reports was noted in their internal tracking record for sentinel events and RCAs. Army officials subsequently told us that as of January 2018, they began tracking whether MOS reports were submitted to DHA in the notes section of their internal tracking record. Navy officials told us they indicated the due date of the MOS report and the date of its submission to DHA in their internal tracking record for sentinel events and RCA reports. Air Force officials told us they indicated in their internal tracking record for sentinel events and RCA reports the date that the MOS report was sent to DHA. However, they told us the Air Force’s process for tracking and submitting MOS reports to DHA has been inconsistent, and they plan to revise it in the future. NCR officials told us they did not track the completion of MOS reports or their submission to DHA. Because of these issues, DHA may not be able to fully reconcile its information for individual MOS reports or identify the reports it is missing, impeding its ability to obtain complete information on the effectiveness of MTFs’ corrective action plans. This is inconsistent with federal internal control standards, which require management to identify and respond to risks to achieve its objectives, and for management to use quality information to achieve its objectives. 
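The difference between an aggregate count and an event-level view can be illustrated with a short Python sketch; the record layout, identifiers, and dates below are hypothetical and do not reflect the actual format of DHA's Comprehensive Analysis Progress Tracker.

# Reported sentinel events, as they might appear in a tracking record.
sentinel_events = [
    {"event_id": "SE-2016-001", "mtf": "MTF A"},
    {"event_id": "SE-2016-002", "mtf": "MTF B"},
    {"event_id": "SE-2016-003", "mtf": "MTF C"},
]

# MOS reports received, keyed to the sentinel event they follow up on.
mos_reports = [
    {"event_id": "SE-2016-002", "received": "2017-03-10"},
]

received_ids = {r["event_id"] for r in mos_reports}

# An aggregate count (what the tracker currently displays) shows "1 of 3" but cannot
# say which follow-ups are outstanding.
print(f"{len(received_ids)} of {len(sentinel_events)} events have an MOS report")

# An event-level view identifies the specific events with no report on file. Whether
# a given event actually requires one is a separate question, discussed in the next
# section.
for e in sentinel_events:
    if e["event_id"] not in received_ids:
        print("No MOS report on file for", e["event_id"], "at", e["mtf"])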
DOD’s Requirement to Submit Reports on the Implementation of Corrective Actions Is Unclear The requirement in DOD’s memo to submit reports on the implementation of corrective actions is unclear, which may also impact DHA’s ability to ensure that it is receiving these reports for all sentinel events. DHA officials told us that MTFs could meet this requirement by submitting copies of their MOS reports. According to the Joint Commission’s guidance, the Joint Commission assigns MOS reports on an ad hoc basis, depending on the sentinel event, RCA report, and corrective actions, and as a result, an MOS report is not necessarily required for each sentinel event. DHA officials told us that they intended to obtain a report on the implementation of corrective actions for every sentinel event, and they believed that an MOS report was required and thus would be reported for every sentinel event, similar to RCAs. However, DHA officials told us that they learned from the military services and NCR at the January 2018 Patient Safety Improvement Collaborative meeting that an MOS report was not required for every sentinel event and that DHA’s requirement for submitting reports on the implementation of corrective actions was unclear. Specifically, DHA officials told us the military services and NCR told DHA that the 2015 memo did not state when the reports on the implementation of corrective actions are required by DHA. For example, the memo did not state whether DHA requires this report for a reported sentinel event and RCA when the Joint Commission does not. DHA’s unclear requirement is inconsistent with internal control standards, which require management to review policies for continued relevance and effectiveness in achieving the entity’s objectives. Under the current policy, DHA cannot be sure it is receiving all reports on the implementation of corrective actions—such as MOS reports—as it intended, and therefore, it may be missing important information on the effectiveness of MTFs’ implementation of their corrective actions that could be used to help inform broader system-wide improvements. DHA officials told us that they expect to clarify this requirement in DHA’s update to its patient safety policy. DHA Uses Information about Adverse Medical Events to Inform System-wide Patient Safety Improvement Initiatives We found that DHA has introduced several system-wide patient safety improvement initiatives informed by data on adverse medical events from the JPSR system and data on sentinel events from DHA’s tracking database, including the following: DHA’s Partnership for Improvement. In January 2015, DHA established an MHS-wide information technology system called the Partnership for Improvement. The Partnership for Improvement collects data from MTFs and assesses MTF performance on approximately 38 health care measures that were established by a committee of MHS officials and designed to improve readiness, population health, and quality of care as well as control costs. Three of these measures focus on patient safety—central line-associated bloodstream infection, unintended retained foreign object, and wrong site surgery. To track these measures, DHA officials told us that they created an associated performance dashboard, including acceptable ranges for each measure, to provide visibility into MHS, military service-, and NCR-level performance. The dashboard is available to all MHS users on the system website and allows MTF leaders and staff to review MTF-level performance data. 
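A minimal sketch of the kind of range check such a dashboard might apply is shown below. The three measures are the patient safety measures named above, but the acceptable ranges, units, and sample MTF values are hypothetical placeholders rather than the Partnership for Improvement's actual thresholds.

# Hypothetical (min, max) acceptable range for each patient safety measure.
ACCEPTABLE_RANGES = {
    "central line-associated bloodstream infection rate": (0.0, 1.0),
    "unintended retained foreign object rate": (0.0, 0.02),
    "wrong site surgery rate": (0.0, 0.01),
}

# Hypothetical results reported for one MTF.
mtf_results = {
    "central line-associated bloodstream infection rate": 0.8,
    "unintended retained foreign object rate": 0.05,  # outside the assumed range
    "wrong site surgery rate": 0.0,
}

for measure, value in mtf_results.items():
    low, high = ACCEPTABLE_RANGES[measure]
    status = "within range" if low <= value <= high else "outside range"
    print(f"{measure}: {value} ({status})")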
DHA officials conduct quarterly system-wide performance assessments on these measures. DHA officials told us they use the data on this dashboard to determine what is improving and where to make changes. Officials from each of the military services, NCR, and each of the MTFs we visited told us they are aware of the Partnership for Improvement and its associated dashboard and that they review the data to assess their performance. Publications on Patient Safety. DHA produces several types of publications using adverse medical event and sentinel event data that officials told us are generally distributed to MTFs through the military services and NCR, including the following. Patient Safety Data Snapshot. This monthly publication contains an overview of adverse medical event and sentinel event data, trends across the MHS, and short descriptions of sentinel events that have been reported in the system in the same month. Additionally, this publication may include reports of medical product deficiencies, or materials that have been determined to be or are suspected of being harmful, defective, deteriorated, or unsatisfactory because of malfunction or design. Annual patient safety report. This yearly publication provides a retrospective status update on MHS patient safety initiatives and in- depth adverse event and sentinel event trend analysis, system-wide and by military service. Content includes trends in adverse events reported in JPSR, sentinel events, and RCAs, including information on weaker, intermediate, and stronger corrective actions. This report also describes progress on Partnership for Improvement measures system-wide and by military service and NCR, the culture of patient safety, and collaboration across DHA, the military services, and NCR. The report also details online resources for MHS officials. Focused review. According to officials, focused review publications are produced three times a year, and the topics are related to adverse medical events and associated follow-up data provided to DHA as determined by data and performance trends. For example, in September 2016, the publication included an explanation of the basic components of an RCA, including their associated corrective actions and factors DHA considers when determining if they are stronger, intermediate, or weaker. This publication included 2013 through 2016 system-wide data, such as the number of RCAs submitted, the most common root cause categories, and the proportion of RCAs with stronger, weaker, or no corrective actions. The publication also included an example of a decrease in occurrences of wrong-site surgery accompanied by an improvement in RCAs with stronger corrective actions, common pitfalls in conducting high-quality RCAs, and recommendations to conduct better RCAs. Patient safety alerts. DHA uses these publications to inform the MHS about immediate hazards, and officials told us they produce these publications on an as-needed basis. For example, a July 2016 report was focused on unintended retained foreign objects during surgery, specifically, pieces of gloves. The publication described recent occurrences of retained pieces of gloves, glove selection best practices, tips for preventing unintended retention, and corrective actions when retention occurs. Global Trigger Tool. The Global Trigger Tool is a new tool for collecting adverse medical event data by selecting a sample of medical charts that was implemented MHS-wide as of September 2017. 
Unlike traditional methods to detect adverse events, the Global Trigger Tool does not focus on voluntary reporting and tracking of adverse medical events. Instead, a team of three reviewers managed by DHA uses the tool methodology to retrospectively examine a random selection of patient medical charts at a facility over time to identify “triggers” (or clues) that may lead to an adverse medical event. The 53 triggers include events such as a patient fall or readmission to the emergency department within 48 hours of treatment. If a trigger is discovered, the medical chart is further reviewed to determine if an adverse event occurred. After the Global Trigger Tool review is complete, the contractor is able to provide facility leaders with rates of harmful adverse events per 1,000 patient days and per 100 admissions. Results from the tool are intended to aid MTFs in understanding the true frequency of harm events and in identifying systemic issues that contribute to patient safety events. All inpatient MTFs across the MHS will use the tool, and implementation began in 2017. The Global Trigger Tool has just begun to provide data to the MTFs, and DHA officials told us that 6 to 12 months of data is recommended before the tool can be used to make improvements. Sentinel Event and Root Cause Analysis (SERCA) tool. In October 2017, DHA released a dashboard called the SERCA tool, which DHA officials told us will allow all MTF patient safety leaders to share lessons learned in the course of sentinel event follow-up in real time. The SERCA tool displays sentinel event and RCA data from DHA’s internal tracking record reported by the military services and NCR. It is intended to provide quick, online access to sentinel event trends MHS-wide and at the military service, NCR, and MTF levels. The SERCA tool is also intended to facilitate sharing of lessons learned and best practices based on sentinel events and RCAs in a single platform. DHA officials told us that individuals with access to the system will be able to see a breakdown of corrective actions submitted by other MTFs for a particular type of sentinel event and whether these corrective actions were rated as stronger, intermediate, or weaker by DHA. DHA officials told us that for now, they will allow the military services and NCR to determine who has access to the system. Officials from two military services and NCR told us that they have access to this tool and are responsible for granting access to their MTFs. One MTF we visited told us they have access to this tool. However, it is too early to evaluate how the SERCA tool will be used to make improvements. Conclusions Each year, thousands of adverse medical events are reported at MTFs. Tracking and conducting follow-up on these events is crucial for officials to learn from and prevent these events in the future. As DHA assumes administrative responsibility for all MTFs, its role in ensuring that sentinel events—the most serious type of adverse medical events—are reported and tracked and that required follow-up is conducted will become increasingly critical. However, the current fragmented and inconsistent tracking process across the military services and NCR has impeded the efficiency of DHA’s efforts to ensure DHA has complete information about sentinel events and RCA reports. 
Furthermore, DHA cannot ensure that it is receiving all reports on the implementation of corrective actions, such as MOS reports, and does not know how many reports it is missing for a number of reasons, including those related to policy, tracking, and reconciliation efforts. Collectively, all of these information gaps impair DHA’s ability to fully understand the types of sentinel events that are occurring in its MTFs, the corrective actions that have been implemented, and whether these actions have been effective. This information is essential to prevent adverse medical events from occurring in the future and to ensure that the care provided by MTFs is safe and effective. Recommendations for Executive Action We are making the following two recommendations to the Assistant Secretary of Defense (Health Affairs): Ensure DHA improves as appropriate the systems and processes used by the military services, NCR, and DHA to track sentinel events and RCA reports and require the military services and NCR to communicate with DHA the reasons RCA reports are not completed for reported sentinel events. (Recommendation 1) Ensure DHA clarifies its requirement that reports on the implementation of corrective actions, such as MOS reports, should be completed and submitted to DHA, and to work with the military services and NCR to develop a standard system to help DHA consistently track and reconcile information about individual reports. (Recommendation 2) Agency Comments DOD provided written comments on a draft of this report, including technical comments, which we incorporated as appropriate. In its written comments, which are reprinted in appendix II, DOD concurred with both of our recommendations. In response to our first recommendation, DOD acknowledged that its current tracking efforts for sentinel events and RCAs are fragmented, inefficient, and unreliable. DOD stated that in the future, it envisions a single system to track and monitor sentinel events, RCAs, and corrective action implementation plan reports. A single system would eliminate the fragmentation associated with tracking these reports and the need for a cumbersome reconciliation process, potentially improving the completeness and reliability of DHA’s patient safety data as well as its ability to identify and implement system-wide improvements. In response to our second recommendation, DOD stated that it will clarify the difference between an MOS report, which may be required by the Joint Commission, and a corrective action implementation plan report, which will always be required by DOD for reported sentinel events. DOD explained that when an MOS report is required by the Joint Commission, this report will satisfy DOD’s requirement. However, when the Joint Commission does not require an MOS report for a sentinel event, DOD will require a corrective action implementation plan report. DOD stated that it expects the revised policy to be signed in late summer 2018 and in effect by October 1, 2018—the date that DHA is to assume responsibility for the administration of all MTFs. We are sending copies of this report to the Secretary of Defense and appropriate congressional committees. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. 
Other major contributors to this report are listed in appendix III. Appendix I: Department of Defense’s (DOD) Revised Definition of a Sentinel Event In March 2015, the Assistant Secretary of Defense for Health Affairs issued a memorandum about improving the sentinel event and root cause analysis (RCA) reporting processes. This memorandum also revised DOD’s definition of sentinel events, which previously stated that a sentinel event is an unexpected occurrence involving death or serious physical or psychological injury or risk. The revised sentinel event definition is a patient safety event (not primarily related to the natural course of the patient’s illness or underlying condition) that reaches a patient and results in death, permanent harm, or severe temporary harm. This revised definition also includes additional types of events outlined by the Joint Commission and the National Quality Forum. (See table 4.) DOD described the following sentinel events that are outlined by the Joint Commission: Suicide of any patient receiving care, treatment, and services in a staffed around-the-clock care setting or within 72 hours of discharge, including from the hospital’s emergency department. Unanticipated death of a full-term infant or discharge of an infant to the wrong family. Abduction of any patient receiving care, treatment, and services. Any elopement (unauthorized departure) of a patient from a staffed around-the-clock care setting (including the emergency department), leading to death, permanent harm, or severe temporary harm to the patient. Destruction of red blood cells (hemolytic transfusion reaction) involving administration of blood or blood products that have major blood group incompatibilities. Rape, assault (leading to death, permanent harm, or severe temporary harm), or homicide of any patient receiving care, treatment, and services while on site at the hospital. Invasive procedure, including surgery, on the wrong patient, at the wrong site, or that is the wrong (unintended) procedure. Unintended retention of a foreign object in a patient after an invasive procedure. Severe neonatal excess of bilirubin (bilirubin >30 milligrams/deciliter). Prolonged fluoroscopy with cumulative dose >1,500 rads to a single field or any delivery of radiotherapy to the wrong body region or >25 percent above the planned radiotherapy dose. Fire, flame, or unanticipated smoke, heat, or flashes occurring during an episode of patient care. Any maternal death or severe maternal morbidity occurring during or within 24 hours after birth. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to those named above, key contributors to this report were: Bonnie Anderson, Assistant Director; Danielle Bernstein, Analyst-in-Charge; Jacquelyn Hamilton; Elizabeth T. Morrison; Vikki Porter; and Helen Sauer.
Why GAO Did This Study Adverse medical events are unintended incidents that may harm a patient. Serious adverse medical events, called sentinel events, have specific follow-up requirements. The National Defense Authorization Act for Fiscal Year 2017 (NDAA 2017) requires DHA to assume the military services' administrative responsibilities, such as adverse medical event reporting, for all MTFs beginning October 1, 2018. The NDAA 2017 included a provision for GAO to examine the reporting and resolving of adverse medical events in the military health system. Among other objectives, this report reviews (1) the extent to which sentinel events and RCA reports are tracked and DHA ensures it has received complete information, and (2) the extent to which DHA ensures it has received MOS reports. GAO examined relevant policies; analyzed the most current available data on sentinel events from 2013 through 2016; and interviewed officials with DHA, the military services, and four MTFs selected for variety in military service, size, and geographic location. What GAO Found GAO found that the process for tracking the most serious adverse medical events, called sentinel events, and their root cause analysis (RCA) reports are fragmented, impeding the Defense Health Agency's (DHA) ability to ensure that it has received complete information. Unlike other adverse medical events, sentinel events—which may result in severe harm or death—have additional reporting requirements that must be met within specified time frames. For example, military treatment facility (MTF) officials must develop RCA reports, which identify causal factors and corrective actions for sentinel events. However, because the database that DHA uses to collect information on adverse medical events does not currently have the capability to track this information, the military services (Army, Navy, and Air Force) and DHA each maintain their own tracking records for sentinel events and RCA reports. Due to these fragmented tracking efforts, DHA reconciles its information on sentinel events and RCA reports through monthly emails to the military services—a time-consuming, inefficient process. DHA officials emphasized that this process relies on the military services' cooperation because DHA does not currently have the authority to compel their responses. Moreover, despite DHA's reconciliation efforts, GAO identified discrepancies and missing information in DHA's tracking record. As a result, DHA lacks critical information about why a sentinel event may have occurred and what actions, if any, MTFs should take to prevent similar incidents in the future. Recently, DHA replaced its previous system of emails with a new tracker tool that can be accessed on the military health system website. However, the new tracker does not allow the military services to make edits, and as a result, any corrections or additional information must be submitted to DHA via email, which may perpetuate previous inefficiencies. GAO found that DHA cannot ensure that it is receiving all reports on the implementation of corrective actions identified in RCA reports as required by a March 2015 memo. DHA officials stated that MTFs could meet this requirement by submitting copies of their measures of success (MOS) reports, which may be required by the Joint Commission, a hospital accrediting organization. As of September 2017, DHA had received 27 MOS reports for the 319 sentinel events that were reported in 2016. 
However, DHA does not know how many reports it is missing because MOS reports are not required for every sentinel event, and DHA did not begin reconciling its information for these reports until January 2018, when it implemented its new tracker tool. Furthermore, GAO found that the new tracker tool documents the aggregate number of MOS reports received and does not indicate whether individual sentinel events have an MOS report, impeding DHA’s ability to identify which reports are missing. This issue is compounded by the fact that the military services either track MOS reports in different ways or not at all, and military service officials said that DHA’s requirement for MOS report submission is not clear. DHA officials stated that they expect to clarify this requirement in their update to the patient safety policy. Because it is unable to ensure it has received all reports on the implementation of corrective actions, DHA could be missing important information that could be used to help inform broader, system-wide patient safety improvement efforts. What GAO Recommends GAO recommends that the Assistant Secretary of Defense (Health Affairs) ensure DHA (1) improve tracking of sentinel events and RCA reports, and (2) clarify its requirements for submitting reports on the implementation of corrective actions and consistently track and reconcile individual reports. DOD agreed with these recommendations.
gao_GAO-19-109
gao_GAO-19-109_0
Background The National Guard’s Organizational Structure and Mission The National Guard consists of the NGB—which includes the Office of the Chief, National Guard Bureau; the National Guard Joint Staff; the Office of the Director, Army National Guard; the Office of the Director, Air National Guard—and the National Guard units, which are located in the 50 states, 3 U.S. territories, and the District of Columbia. Figure 1 illustrates the organizational structure of the National Guard. The National Guard has both a federal- and state-level mission. The National Guard’s federal mission is to (1) maintain well-trained and well- equipped units that are ready to be mobilized by the President of the United States during war or international peacekeeping efforts, and (2) provide assistance during national emergencies, such as natural disasters or civil disturbances. The National Guard’s state-level mission is to (1) protect life and property and preserve peace, order, and public safety, and (2) provide emergency relief support during local or statewide emergencies, such as riots, earthquakes, floods, or terrorist attacks. The National Guard’s state-level mission is executed under the control of state and territory governors, and for the District of Columbia, the President. Reflecting the National Guard’s federal and state roles, National Guard members may function under one of three command statuses: Title 10. When performing duty under the authority of Title 10 of the United States Code (Title 10 status), National Guard members are under the command and control of the President and are federally funded. When operating in Title 10 status, National Guard members are subject to the Uniform Code of Military Justice. Title 32. When performing duty under the authority of Title 32 of the United States Code (Title 32 status), National Guard members are under the command and control of the governors, but are federally funded. For example, past missions have included providing security at the nation’s airports in the immediate aftermath of the September 11, 2001 terrorist attacks and assisting the Gulf Coast in the aftermath of Hurricane Katrina. While operating in Title 32 status, National Guard members are not subject to the Uniform Code of Military Justice, but, according to OCI officials, may be subject to a state code of military justice enacted by the state legislature. State Active Duty. When performing duty in State Active Duty status, National Guard members are under command and control of the governors and are state funded. When operating in State Active Duty status, National Guard members are not subject to the Uniform Code of Military Justice. When performing their state-level mission, National Guard units within a state, territory, or the District of Columbia report to a state-level senior officer known as the Adjutant General, who in turn reports to either a state or territorial governor, or for the District of Columbia, the President (as Commander-in-Chief). The Adjutant General coordinates with the NGB’s Army or Air National Guard, as appropriate, on such matters as staffing and unit readiness. The Army and Air National Guard in turn coordinate with Army and Air Force staff, respectively. The Office of Complex Investigations OCI was established in 2012 by the Chief of the NGB to perform complex administrative investigations at the request of the Adjutants General of the 50 states, the three territories, and the District of Columbia, or at the direction of the Chief of the NGB. 
OCI's primary purpose is to provide the state National Guards with the capability to administratively investigate reports of sexual assault having a National Guard nexus when the reports fall outside the jurisdiction of military criminal investigative organizations and are not sufficiently investigated by civilian law enforcement. OCI's secondary purpose is to administratively investigate other complex matters as assigned, one of which is a state assessment. The types of investigations conducted by OCI are further described later in this report. Congress designated the Chief of the NGB as (1) the senior military officer responsible for the organization and operations of the NGB and (2) the principal advisor on National Guard matters to the Secretary of Defense through the Chairman of the Joint Chiefs of Staff, as well as to the Secretary and Chief of Staff of the Army, and Secretary and Chief of Staff of the Air Force. Further, a DOD directive states that one function of the NGB is to monitor and assist states in the organization, maintenance, and operation of National Guard units so as to provide well-trained and well-equipped units capable of augmenting the active forces. OCI officials stated that the Chief of the NGB has the authority to investigate matters in order to support the above statutory and regulatory obligations and authorities. Moreover, a DOD instruction makes clear that DOD components without law enforcement authority, like the NGB, have the authority to conduct only administrative investigations. The NGB Instruction states that the Chief of OCI specifies the requisite education, training, and experience for appointing an investigator to OCI and for assigning investigators to conduct a specific investigation. According to OCI officials, investigators are initially selected based on their analytical and investigatory skills, as well as their experience and understanding of the civilian and military criminal justice systems. OCI officials stated that investigators are required to complete an initial two-week training course conducted by the U.S. Army Military Police School, followed by three days of orientation conducted by OCI. OCI officials stated that investigators are also offered additional training opportunities throughout the year, including annual refresher training and professional development training. DOD's Sexual Assault Prevention and Response Program In response to statutory requirements, in 2005, DOD established its Sexual Assault Prevention and Response Program to promote the prevention of sexual assault, encourage increased reporting of such incidents, and improve victim response capabilities. DOD's program allows servicemembers to make a restricted or unrestricted report of sexual assault. DOD's restricted reporting option is designed to allow sexual assault victims to confidentially disclose an alleged sexual assault to selected individuals without initiating an official investigation and to receive medical and mental health care. DOD's unrestricted reporting option triggers an investigation by a military criminal investigative organization, such as the Army Criminal Investigation Command or the Air Force Office of Special Investigations. DOD's directive for its Sexual Assault Prevention and Response Program delegates authority to the Chief of the NGB for implementing policy and procedures for the program as it applies to National Guard members in Title 32 status.
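The command, funding, and military-justice attributes of the three duty statuses described above lend themselves to a simple reference table. The following minimal Python sketch is illustrative only; the DutyStatus structure and field names are our own and are not drawn from any NGB or DOD system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DutyStatus:
    """Summarizes command, funding, and UCMJ applicability for one duty status."""
    name: str
    command_and_control: str
    funding: str
    ucmj_applies: bool
    note: str = ""

# The attributes below restate the descriptions given in the text above.
DUTY_STATUSES = [
    DutyStatus("Title 10", "President", "Federal", True),
    DutyStatus("Title 32", "Governor", "Federal", False,
               note="May be subject to a state code of military justice, per OCI officials."),
    DutyStatus("State Active Duty", "Governor", "State", False),
]

if __name__ == "__main__":
    for status in DUTY_STATUSES:
        ucmj = "subject to UCMJ" if status.ucmj_applies else "not subject to UCMJ"
        print(f"{status.name}: command by {status.command_and_control}, "
              f"{status.funding.lower()} funding, {ucmj}. {status.note}")
```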
OCI Conducts Administrative Investigations of Sexual Assault and State Assessments, Receives Funds from DOD’s Sexual Assault Program, and Uses Temporarily Assigned Investigators OCI conducts administrative investigations of reports of sexual assault, in addition to state assessments of state National Guard units. OCI is funded through appropriations made available for DOD’s Sexual Assault Special Victims’ Counsel Program. Moreover, OCI is staffed with temporarily assigned National Guard members as investigators. OCI Conducts Administrative Investigations of Sexual Assault and Assessments of State National Guard Units Since its inception, OCI has primarily conducted administrative investigations of unrestricted reports of sexual assault, in addition to a smaller number of state assessments. Since 2013, OCI has completed approximately 380 administrative investigations of sexual assault and 5 state assessments, as shown in figure 2. The National Guard reported to Congress in 2018 that OCI has experienced a 350 percent increase in requests for assistance from fiscal year 2014 to fiscal year 2017; and 53 of the 54 states and territories have requested OCI support during this period. OCI’s sexual assault investigations are conducted at the request of the Adjutants General and are intended to provide the Adjutants General with information to make administrative decisions. Figure 3 describes the OCI process for accepting sexual assault cases. Based on its investigation, OCI provides a report to the state National Guard that includes the findings resulting from the investigation and identifies whether OCI has found the allegation to be substantiated. OCI’s reports resulting from its sexual assault investigations do not include recommendations for action. Rather, the Adjutant General can use the report as the basis to determine whether and what type of administrative action should be taken. Such administrative actions may include a letter of reprimand, administrative separation, or other appropriate administrative remedy. OCI may also conduct a state assessment at the request of a state official, such as the Adjutant General or Governor. Each state assessment reflects the informational needs of the requesting official. According to NGB policy, the office will generally not conduct an assessment into criminal matters, and the assessment will also not include investigations of unrestricted reports of sexual assault. According to OCI officials, state assessments generally involve matters that are widespread issues and may adversely affect the good order and discipline of the National Guard, such as hostile work environment or concerns regarding a state Guard’s approach to sexual assault prevention and response. At the conclusion of an assessment, OCI provides a report to the requesting official that may include recommended actions to address problems identified as a result of the assessment. In addition, according to OCI officials, the Chief of the NGB has the authority to direct inquiries into matters affecting the good order of the National Guard. OCI officials stated that OCI has the capacity to conduct inquiries at the direction of the Chief of the NGB and which are not performed at the request of a state official. For example, according to OCI officials, in 2014, the Chief of the NGB directed OCI to conduct an inquiry to evaluate the fiscal stewardship of the National Guard. National Guard officials stated that this was the only inquiry of this kind that the office has performed. 
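As a side note on the growth figure reported above, the 350 percent increase in requests for assistance is a relative change, not a multiplier. The short sketch below shows how the figure translates into a growth factor; the fiscal year 2014 baseline is hypothetical, since the National Guard's 2018 report does not give the underlying request counts.

```python
def apply_percent_increase(baseline: float, percent_increase: float) -> float:
    """Return the new value after a stated percent increase over a baseline."""
    return baseline * (1 + percent_increase / 100)

# Hypothetical baseline for illustration only.
fy2014_requests = 20
fy2017_requests = apply_percent_increase(fy2014_requests, 350)
print(f"A 350 percent increase turns {fy2014_requests} requests into {fy2017_requests:.0f},")
print(f"that is, {fy2017_requests / fy2014_requests:.1f} times the FY2014 level.")
```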
Adjutants General and their staffs stated that OCI provides the states with an unbiased or impartial third-party review of reported incidents of sexual assault. Officials from one state stated that they could not identify an alternative entity that could provide this service if OCI did not exist. OCI Is Funded through Appropriations Made Available by Congress for DOD’s Sexual Assault Special Victims’ Counsel Program OCI is primarily funded through amounts made available by Congress for transfer to the services for the Sexual Assault Special Victims’ Counsel Program in annual Operation and Maintenance, Defense-wide (O&M, Defense-wide) appropriations. According to OCI officials, the office estimates its annual budgetary needs based on an analysis of prior fiscal year’s case load and expected personnel, travel, and training costs in the upcoming fiscal year. OCI, along with the National Guard’s Special Victims’ Counsel Program, submits its budget requirements to DOD SAPRO. SAPRO then submits a consolidated request for inclusion in DOD’s overall budget request. According to OCI and DOD officials, OCI does not receive its allotment of transferred amounts until late in the fiscal year. When the transferred amounts are received into Army and Air National Guard O&M and Military Personnel accounts, amounts initially allotted for OCI are reprogrammed to other activities that supported OCI earlier in the fiscal year. OCI’s overall funding has increased since 2014. According to an OCI official, the funding increase has been in response to increasing requests for OCI’s services by the states and territories. Specifically, in fiscal year 2014, OCI funding was approximately $1.4 million, and by fiscal year 2018 total funding was almost $5 million. Figure 4 shows OCI’s funding levels from fiscal year 2014 through fiscal year 2018. According to the NGB’s 2018 manual, the NGB Joint Staff is responsible for coordinating funding for OCI’s state assessments. OCI officials said that costs related to state assessments may be funded through available NGB O&M amounts. However, the officials also said that OCI does not track its expenditures related to state assessments separately from those related to its sexual assault investigations. According to OCI officials, OCI does not receive reimbursement from the states and territories for the cost of its investigations. OCI officials further stated that OCI investigators are part of the federal oversight of the federally recognized and funded units and members of the State National Guards. As such, states do not reimburse DOD for the cost of investigations performed by OCI. According to an OCI briefing document, a benefit of the office is its ability to conduct sexual assault investigations for the states which alleviates the need for Adjutants General to choose between funding such investigations versus other mission needs. OCI Investigators Are Temporarily Assigned National Guard Members According to the National Guard’s 2018 Report to Congress, OCI primarily relies on National Guard members staffed temporarily to the office as investigators to conduct its sexual assault investigations and state assessments. The report stated that, since fiscal year 2015, OCI has used active duty operational support (ADOS) orders to maintain a staff of National Guard members, including between 22 and 28 investigator positions and 4 administrative and support positions. 
That report further stated that in fiscal year 2018, OCI added one additional full-time Active Guard Reserve enlisted position and one Department of the Army civilian position. According to OCI officials, the office's investigative staff consists primarily of individuals with legal or law enforcement experience. See appendix I for more information on the organizational structure of OCI. In its 2018 Report to Congress, the National Guard stated that, of those OCI staff serving on ADOS orders, more than half serve in their position for one year or less, which was a contributing factor to longer investigative timelines and a backlog of requests for investigation. In February 2017, we found that the timeliness of investigations was a challenge for OCI and that 57 percent of investigations conducted in fiscal year 2015 took 6 to 9 months from the time a case was referred until the investigation was completed. We made a recommendation that the Chief of the NGB reassess OCI's timeliness and resources and identify the resources needed to improve the timeliness of these investigations. As of October 2018, the Office of the Chief Counsel has taken some steps to address this recommendation, which, according to OCI officials, include, for example, starting to develop a strategic plan to address the office's long-term staffing and funding needs. In its 2018 Report to Congress, the National Guard stated that OCI's current manning and resourcing strategy of one-year ADOS tours, coupled with unprogrammed funding, impairs the office's ability to recruit and sustain a stable, experienced workforce, resulting in longer investigation timelines and a growing backlog of requests for assistance that OCI struggles to meet. According to the National Guard's 2018 Report to Congress, OCI's backlog of investigation requests grew from 7 cases in fiscal year 2014 to 55 in fiscal year 2017. According to OCI officials, the office continued to experience a backlog in fiscal year 2018. OCI Has Policies for Sexual Assault Investigations and Controls to Help Ensure Key Policies Are Followed, but Has Inconsistently Documented How Case Acceptance Criteria Are Met NGB guidance establishes policies for OCI's investigations, and OCI has implemented internal controls to help ensure it follows key policies. NGB guidance also establishes two criteria that allegations of sexual assault must meet for OCI to begin an investigation; however, this guidance does not require OCI to consistently include documentation in its case files related to how its case acceptance criteria are met. National Guard Bureau Guidance Establishes OCI Investigation Policies The NGB Instruction delineates the authority and responsibilities of NGB and state officials, and the NGB Manual serves as the implementing guidance. According to OCI officials, the office's investigative process was designed based on the Army's Procedures for Administrative Investigations and Boards of Officers. To determine the allegations OCI will investigate, NGB policy includes specific requirements for OCI's coordination with state officials such as the Adjutant General and legal staff. According to NGB guidance, OCI officials will work with state officials to determine the appropriateness of sending a case to OCI, but state National Guard officials are responsible for formally requesting an OCI investigation. NGB policy also includes requirements for OCI investigators and outlines policies for the investigation process.
The NGB Manual has additional requirements for the office’s dissemination of investigation results back to the state National Guard. Based on the content of the NGB policy, OCI also created Standard Operating Procedures to guide the activities that are designated as the office’s responsibilities. OCI Has Implemented Internal Controls to Help Ensure Key Policies Are Followed Based on our review, we found that OCI has internal controls to help ensure stakeholders follow key policies, including a review of final investigation reports and checklists to monitor activity. OCI’s review of its investigations and case files includes both administrative and legal reviews conducted by officials within OCI and the NGB’s Office of the Chief Counsel, including both administrative staff and leadership. Similar to the Army’s administrative investigations procedures, OCI’s reports of investigation undergo a review process which confirms that case files include all required documentation and provide sufficient evidence for the report’s conclusions. Investigators have primary responsibility for storing administrative and evidentiary case documents, and a team of quality control administrators works with investigators to store and publish case files in accordance with OCI’s policies. According to OCI officials, once investigators produce a report of investigation and determine whether to substantiate the allegation, the Investigations Manager reviews the investigators’ determinations before sending the report to the office’s Deputy Chief to review. According to OCI’s Standard Operating Procedures, after the Deputy Chief’s review, OCI submits the report for review by an independent legal counsel in the Administrative Law Division of the NGB’s Office of the Chief Counsel. Furthermore, OCI’s procedures state that all OCI reports of investigation must be reviewed by both the Chief of OCI and the Deputy Chief Counsel before being submitted to the state that requested the investigation. In addition to the internal controls implemented through OCI’s report review process, OCI officials stated that the office also developed checklists designed to support internal policy adherence. For example, the review process includes an Investigator Checklist which outlines investigation policies and a Quality Control Checklist for administrators to ensure that the final report of investigation includes specific documentation and is coordinated appropriately, consistent with policy. Alongside these checklists, OCI’s Standard Operating Procedures provide guidance to ensure that OCI investigators securely store private and sensitive information, particularly video recordings of personnel related to the case. Our analysis of a non-generalizable sample of 27 case files from 5 states from fiscal years 2016 and 2017—out of a total of approximately 225 cases for those same years—found that OCI generally adhered to key policies. For example, OCI included the Adjutants General requests to initiate the OCI investigation and executive summaries explaining OCI’s determination of whether or not the allegation was substantiated in all 27 case files. However, 4 of 27 case files in our sample contained investigation request letters with personally identifiable information. OCI policy states that these letters should not include such information. 
OCI officials stated that they are unable to control the information the state National Guards include in their request letters; however, OCI officials also stated that investigators are expected to work with the states to get this information removed. National Guard Bureau Policies Outline Case Acceptance Criteria, but OCI's Case Files Inconsistently Include Supporting Documentation to Show How the Criteria Have Been Met NGB policies describe two criteria that allegations of sexual assault must meet for OCI to initiate and conduct an investigation. First, OCI may only conduct administrative investigations of sexual assault with an identified National Guard nexus. The NGB Instruction defines a National Guard nexus as generally existing when the reported perpetrator or the alleged victim is or was—at the time of the reported incident—a member or civilian employee of the National Guard. Officials stated that this includes National Guard members in Title 32 or state active duty status. Second, OCI may investigate a case only after a military criminal investigative organization or civilian law enforcement has declined to investigate, when a victim declines investigation by civilian law enforcement, or when a civilian law enforcement organization did not sufficiently investigate. Table 1 describes the OCI criteria to administratively investigate sexual assault cases with a National Guard nexus. The NGB Manual includes a template that the states should use when submitting requests for OCI to initiate an investigation of an unrestricted report of sexual assault. The template includes standardized language stating that the state National Guard staff determined the existence of a National Guard nexus and confirmed coordination with at least one criminal investigative organization prior to requesting OCI's assistance. All 27 written requests from the Adjutants General included in the sample of case files we analyzed contained this standardized language, indicating that the state National Guard staff had determined the existence of a National Guard nexus and confirmed coordination with at least one criminal investigative organization prior to requesting OCI's assistance, consistent with NGB policy. However, we found that OCI's case files do not consistently include supporting documentation to show how the case acceptance criteria—specifically the determination of a National Guard nexus and verification of coordination with the appropriate criminal investigative organizations—were met. This is because NGB policy does not require that OCI collect and include any additional documentation for verification purposes in its case files. In our review of OCI's case files, we found that 12 of the 27 case files did not include additional supporting documentation, such as police reports or e-mail correspondence with the appropriate criminal investigative organizations. We also found that 7 of the 27 case files did not include supporting documentation of both the nexus determination and coordination with the appropriate criminal investigative organizations. According to OCI officials, the office relies on state National Guard officials' evaluation and determination about the nexus criteria and does not always receive supporting documentation to verify the criteria have been met before initiating an investigation.
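To illustrate the kind of completeness check that supporting documentation would enable, the minimal sketch below models a case file record and flags files that lack documentation for either acceptance criterion. The record fields and the function are hypothetical and are not part of NGB guidance or any OCI system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CaseFile:
    """Hypothetical record of an OCI case file's acceptance documentation."""
    case_id: str
    nexus_asserted: bool                  # standardized language present in the request letter
    coordination_asserted: bool           # standardized language present in the request letter
    nexus_documents: List[str] = field(default_factory=list)         # e.g., personnel records
    coordination_documents: List[str] = field(default_factory=list)  # e.g., police report, e-mail

def acceptance_gaps(case: CaseFile) -> List[str]:
    """Return the acceptance criteria that lack supporting documentation."""
    gaps = []
    if not (case.nexus_asserted and case.nexus_documents):
        gaps.append("National Guard nexus")
    if not (case.coordination_asserted and case.coordination_documents):
        gaps.append("criminal investigative organization coordination")
    return gaps

# Example: standardized language is present, but no supporting documents are on file.
example = CaseFile("2016-001", nexus_asserted=True, coordination_asserted=True)
print(acceptance_gaps(example))
# ['National Guard nexus', 'criminal investigative organization coordination']
```

Whether such documentation can actually be obtained depends in part on the cooperation of outside organizations, as discussed next.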
OCI officials further stated that this is due, in part, to the fact that the NGB and Adjutants General cannot require local law enforcement to produce documentation related to their investigations because neither entity has subpoena power over state law enforcement organizations. However, in response to our concerns about the lack of supporting documentation to verify the state National Guard officials’ evaluations of the criteria, in October 2018, OCI officials shared a draft memorandum template that they developed for verifying how the two case acceptance criteria were met. Standards for Internal Control in the Federal Government state that management should design control activities to achieve objectives and respond to risks. More specifically, documentation of such activities should be readily available for examination, properly managed, and maintained. Those standards state that documentation is a necessary part of an effective internal control system and is required to demonstrate design, implementation, and operating effectiveness. Without a requirement that supporting documentation related to the National Guard nexus and criminal investigative organization coordination efforts is included in each case file, OCI does not have reasonable assurance that the cases it is investigating adequately meet the two criteria for case acceptance. Conclusions Through the creation of the Office of Complex Investigations in 2012, the NGB has taken steps to address a gap by exercising its investigative authority to address those instances of sexual assault involving National Guard members that the military justice system or local law enforcement could not or would not investigate. OCI has implemented processes and procedures to help ensure that its policies are followed. However, the NGB does not require OCI to include supporting documentation in its case files for verifying how state National Guard officials determined that case acceptance criteria have been met. Without a requirement to collect and maintain such supporting documentation as part of its case files, OCI does not have reasonable assurance that it is only undertaking investigations that meet case acceptance criteria. Recommendation for Executive Action The Secretary of Defense should ensure that the Chief of the National Guard Bureau, in coordination with the Office of Complex Investigations, includes a requirement in its guidance to collect and maintain supporting documentation as part of its case files that verifies whether and how (1) the National Guard nexus exists, and (2) the allegation has been referred to the appropriate military criminal investigative organization or civilian law enforcement organization prior to opening an OCI investigation. (Recommendation 1) Agency Comments We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix II, DOD concurred with our recommendation and noted actions it was taking. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and other interested parties, including the Chief of the National Guard Bureau, the National Guard Bureau’s Office of Chief Counsel, and the Office of Complex Investigations. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions regarding this report, please contact Brenda Farrell at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Organizational Structure of the National Guard Bureau's Office of Complex Investigations The Office of Complex Investigations (OCI) was established within the National Guard Bureau's Office of the Chief Counsel. To conduct its sexual assault investigations and state assessments, OCI primarily relies on Guard members staffed temporarily to the office as investigators. From August 2012 through September 2014, the office operated with three full-time personnel, who administered the program and conducted investigations with investigative personnel who received assignments as an extra duty. Since fiscal year 2015, however, OCI has used one-year active duty operational support (ADOS) orders to maintain a staff of National Guard members, including between 22 and 28 investigator positions and 4 administrative and support positions. In fiscal year 2018, the office was primarily staffed with traditional Guard members on ADOS tours—4 administrative support personnel and 24 investigators—in addition to one full-time Active Guard and Reserve enlisted position and one Department of the Army civilian position. According to OCI officials, the office's investigative staff consists primarily of individuals with legal or law enforcement experience. Figure 5 illustrates the organizational structure of OCI. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Kimberly Seay, Assistant Director; Johana Ayers; Maurice Belding; Vincent Buquicchio; Serena Epstein; Laura Ann Holland; Amie Lesser; Wayne McElrath; Stephanie Moriarty; Clarice Ransom; Ramon Rodriguez; Michael Silver; Jennifer Weber; and Nell Williams made key contributions to this report.
Why GAO Did This Study Sexual assault incidents involving military service members can devastate victims and have far-reaching impacts for DOD due to the potential for these crimes to undermine the department's core values, degrade mission readiness, and raise financial costs. The National Defense Authorization Act for Fiscal Year 2018 included a provision that GAO review, among other things, the purpose and structure of OCI and its adherence to policies. This report (1) describes OCI's services and budgetary and staffing resources; and (2) evaluates OCI's policies for investigations and internal controls to ensure its policies are consistently followed. GAO analyzed OCI policy, budget, and staffing documents and interviewed OCI, DOD, Army, and Air Force officials. GAO also interviewed National Guard officials and analyzed case files for select years from a nongeneralizable sample of five states. What GAO Found The National Guard Bureau's (NGB) Office of Complex Investigations (OCI) was established in 2012 to conduct administrative investigations into allegations of sexual assault that are not criminal in nature and are conducted only when criminal law enforcement entities, such as military criminal investigative organizations or local civilian law enforcement, have declined or do not have jurisdiction to investigate and a National Guard nexus has been identified. Since 2013, OCI has completed approximately 380 investigations of allegations of sexual assault at the request of state National Guard officials and 5 assessments of state National Guard units to review the current culture, policies, and practices for the handling of sexual assault, among other things. State National Guard officials told GAO that OCI provides the states with an unbiased or impartial third-party review of reported incidents of sexual assault. OCI is primarily funded through amounts made available for the Sexual Assault Special Victims' Counsel Program in the Department of Defense's (DOD) annual defense-wide Operation and Maintenance appropriation. This funding has increased from approximately $1.4 million in fiscal year (FY) 2014 to almost $5 million in FY 2018, which OCI officials attributed to increasing demands for OCI's services. OCI uses trained National Guard members temporarily assigned to the office as investigators. NGB guidance establishes OCI investigation policies, and OCI has implemented controls to help ensure key policies are followed. However, OCI has inconsistently documented how case acceptance criteria have been met. GAO's analysis of a sample of 27 case files from 5 states from FY 2016 and FY 2017 found that OCI generally adhered to key investigation policies. For example, in accordance with its policies, in all 27 case files GAO reviewed, OCI had included the state National Guard's requests to initiate an OCI investigation and executive summaries explaining OCI's determination of whether or not the allegation was substantiated. Furthermore, NGB has established two case acceptance criteria—specifically that a National Guard nexus exists and that coordination with at least one criminal investigative organization occurred. According to OCI officials, state National Guard officials are to verify these criteria are met before submitting requests for OCI to initiate an investigation of sexual assault. NGB has developed a template with standardized language that includes these criteria for the states to use.
While OCI's case files included the request letters with standardized language from state National Guards indicating the state National Guard staff had determined the case acceptance criteria were met, they did not consistently include supporting documentation to verify how the case acceptance criteria were met. This is because NGB policy does not require such documentation to be included in OCI's case files. Without such documentation, OCI does not have reasonable assurance that the cases it accepts for investigation adequately meet the two criteria for case acceptance. What GAO Recommends GAO recommends that DOD require OCI to include supporting documentation in case files to verify a National Guard nexus exists and referral to the appropriate law enforcement organization occurs. DOD concurred with the recommendation.
gao_GAO-18-281
gao_GAO-18-281_0
Background Responsibilities of the Choice Program TPAs In October 2014, VA modified its existing contracts with two TPAs that were administering another VA community care program to add certain administrative responsibilities associated with the Choice Program. For the Choice Program, each of the two TPAs—Health Net and TriWest—is responsible for delivering care in a specific multi-state region (See figure 1.). Specifically, the TPAs are responsible for establishing networks of community providers, scheduling appointments with community providers for eligible veterans, and paying community providers for their services. Choice Program Eligibility Criteria As stated in VA’s December 2015 guidance, the Choice Program allows eligible veterans to opt to obtain health care services from the TPAs’ network providers rather than from VHA medical facilities when the veterans are enrolled in the VA health care system and meet any of the following criteria: the next available medical appointment with a VHA clinician is more than 30 days from the veteran’s preferred date or the date the veteran’s physician determines he or she should be seen; the veteran lives more than 40 miles driving distance from the nearest VHA facility with a full-time primary care physician; the veteran needs to travel by air, boat, or ferry to the VHA facility that is closest to his or her home; the veteran faces an unusual or excessive burden in traveling to a VHA facility based on geographic challenges, environmental factors, or a medical condition; the veteran’s specific health care needs, including the nature and frequency of care needed, warrants participation in the program; or the veteran lives in a state or territory without a full-service VHA medical facility. Over the life of the Choice Program, VA has taken various approaches to care for veterans for whom services are not available at a particular VHA medical facility. In May and October of 2015, VHA issued policy memoranda to its VAMCs that required them to offer veterans referrals to the Choice Program before they authorized care through one of VA’s other community care programs, which existed prior to the creation of the Choice Program. Before May 2015, VA provided VAMCs the flexibility to decide on a case-by-case basis whether to refer veterans to the Choice Program or one of VA’s other community care programs when services were not available. In June 2017, VHA issued another policy memorandum that rescinded the referral hierarchy that required VAMCs to refer to the Choice Program first. It directed VAMCs to refer veterans to the Choice Program only if they met the Choice Act’s wait-time, distance, and geographic eligibility criteria, and to instead use other VHA medical facilities, facilities with which VA has sharing agreements, and other VA community care programs to deliver care to veterans when services were not available at a VHA medical facility and veterans did not qualify under the Choice Act’s eligibility criteria. In August 2017, after Congress provided an additional $2.1 billion for the Choice Program, VHA again changed its guidance on referral patterns for the Choice Program and VA’s other community care programs. Specifically, VA issued a fact sheet saying that the new funding will allow VAMCs to refer veterans to the Choice Program to the maximum extent possible. 
This allowed VAMCs to again offer veterans Choice Program referrals when services are unavailable at VHA medical facilities (until available funds have been exhausted), and also permitted VAMCs to refer veterans to other VA community care programs when services are unavailable. Process for Choice Program Appointment Scheduling Through policies and standard operating procedures for VAMCs and contracts with the TPAs, VA and VHA have established two separate processes for Choice Program routine and urgent appointment scheduling: one process for time-eligible veterans and another for distance-eligible veterans. Table 1 provides an overview of the appointment scheduling process that applies when a veteran is referred to the Choice Program because the veteran is time-eligible. (Appendix II contains additional detail about the Choice Program appointment scheduling process for time-eligible veterans—including differences between the routine and urgent care appointment scheduling process.) When veterans reside more than 40 miles from a VHA medical facility or meet other travel-related criteria, VHA uses the appointment scheduling process it developed for distance-eligible veterans. The process for distance-eligible veterans differs from that for time-eligible veterans in that VAMCs do not prepare a referral and send it to the TPA. Instead, distance-eligible veterans contact the TPA directly to request Choice Program care. See table 2 for an overview of the Choice Program appointment scheduling process that applies for distance-eligible veterans. (See appendix III for additional detail about the Choice Program appointment scheduling process for distance-eligible veterans—including differences between the routine and urgent care appointment scheduling process.) Choice Program Utilization from Fiscal Year 2015 through Fiscal Year 2016 Data we obtained from the TPAs indicate that VHA and the TPAs used the time-eligible appointment scheduling process about 90 percent of the time from fiscal year 2015 through fiscal year 2016 (the first 2 years of the Choice Program’s implementation). More than half of the veterans who were referred to the Choice Program and for whom the TPAs scheduled appointments were referred because the services they needed were not available at a VHA medical facility. The second-most-common reason for referral was that the wait time for an appointment at a VHA medical facility exceeded 30 days. (See figure 2.) The distance-eligible appointment scheduling process was used for about 10 percent of the veterans who used the Choice Program between fiscal year 2015 through fiscal year 2016. Choice Act Wait-Time Requirements for Care Furnished Under the Program In coordinating the furnishing of care to eligible veterans under the Choice Program, VA is required to ensure that veterans receive appointments for Choice Program care within the wait-time goals of VHA for the furnishing of hospital care and medical services. Although the Choice Act defined VHA’s wait-time goals as not more than 30 days from the date a veteran requests an appointment from the Department, the Choice Act gave VA the authority to change this definition if it did not reflect VHA’s actual wait- time goals. If VA wanted to exercise this authority, it was required to notify Congress of VHA’s actual wait-time goals within 60 days of the law’s enactment (i.e., by October 6, 2014). VA did so in an October 3, 2014, report to Congress. 
To “ensure that care provided through the Veterans Choice Program is delivered within clinically appropriate timeframes,” VA notified Congress that VHA’s wait-time goals were “not more than 30 days from either the date that an appointment is deemed clinically appropriate by a VA health care provider, or if no such clinical determination has been made, the date a Veteran prefers to be seen for hospital care or medical services.” By incorporating VHA’s reported wait- time goal, the Choice Act required VA to ensure the furnishing of care to eligible veterans within 30 days of the clinically indicated date or, if none existed, within 30 days of the veteran’s preferred date. VA’s Other Community Care Programs and Planned Consolidation VA has purchased health care services from community providers through various programs since as early as 1945. Currently, there are six community care programs other than the Choice Program through which VA purchases hospital care and medical services for veterans. These six community care programs offer different types of services and have varying eligibility criteria for veterans and community providers. VA’s six non-Choice community care programs include: Individually authorized VA community care. The primary means by which VHA has traditionally purchased community care is through individual authorizations, where local VAMC staff determine veteran eligibility, create authorizations, and assist veterans in arranging care with community providers that are willing to accept VA payment. Traditionally, VAMCs have approved the use of individually authorized community care when a veteran cannot access a particular specialty care service from a VHA medical facility because the service is not offered or the veteran would have to travel a long distance to obtain it from a VHA medical facility. (See appendix IV for an illustration of how appointment scheduling and care coordination processes for the Choice Program compare to those for individually authorized VA community care.) Two emergency care programs. When VA community care is not preauthorized, VA may reimburse community providers for emergency care under two different community care programs: 1) emergency care for a condition related to a veteran’s service-connected disability and 2) emergency care for a condition not related to a veteran’s service-connected disability, commonly referred to as Millennium Act emergency care. For emergency care to be covered through these two programs, a number of criteria must be met, including (1) community providers must file claims in a timely manner (within 2 years of the date services were rendered for service-connected emergency care and within 90 days for Millennium Act emergency care); (2) the veteran’s condition must meet the prudent layperson standard of an emergency; and (3) a VA or other federal medical facility must not have been feasibly available to provide the needed care, and an attempt to use either would not have been considered reasonable by a prudent layperson. Patient-Centered Community Care (PC3). In September 2013, VA awarded contracts to Health Net and TriWest to develop regional networks of community providers to deliver specialty care, mental health care, limited emergency care, and maternity and limited newborn care when such care is not feasibly available from a VHA medical facility. VA and the TPAs began implementing the PC3 program in October 2013, and it was fully implemented nationwide as of April 2014—prior to the creation of the Choice Program. 
In August 2014, VA expanded the PC3 program to allow community providers of primary care to join the PC3 networks. PC3 is a program VA created under existing statutory authorities, not a program specifically designed by law. To be eligible to obtain care from PC3 providers, veterans must meet the same criteria that are required for individually authorized VA care in the community services. Agreements with federal partners and academic affiliates. When services are not available at VHA medical facilities, VA may also obtain specialty, inpatient, and outpatient health care services for veterans through two types of sharing agreements—those with other federal facilities (such as those operated by the Department of Defense and the Indian Health Service), and those with university- affiliated hospitals, medical schools, and practice groups (known as academic affiliates). Dialysis contracts. In June 2013, VA awarded contracts to numerous community providers nationwide to deliver dialysis—a life-saving medical procedure for patients with end-stage renal disease (permanent kidney failure). When dialysis services are not feasibly available at VHA medical facilities, veterans may be referred to one of VA’s contracted dialysis providers, and veterans may receive dialysis at local clinics on an outpatient basis, or at home (if the contractors offer home-based dialysis services). The VA Budget and Choice Improvement Act, which was enacted on July 31, 2015, required VA to develop a plan for consolidating all of its community care programs into a new, single program to be known as the “Veterans Choice Program.” VHA submitted this plan, including proposed legislative changes, to Congress on October 30, 2015. VA has moved forward with some aspects of the planned community care program consolidation that it believes can be accomplished without statutory changes. In December 2016, VA issued a request for proposals (RFP) for contractors to help administer the consolidated community care program, through “community care network” contracts. The consolidated community care program VA described in the October 2015 plan it submitted to Congress and the December 2016 RFP, as amended, would be similar to the current Choice Program in certain respects. For example, VA is planning to award community care network contracts to TPAs, which would establish regional networks of community providers and process payments to those providers. In contrast, other aspects of the consolidated community care program VA has planned may differ from the existing Choice Program. For example, VA’s RFP for the community care network contracts, as amended, requires VAMCs—rather than TPAs—to carry out appointment scheduling, unless they exercise a contract option for the TPAs to provide such services. Annual Obligations for the Choice Program and Other VA Community Care Programs In fiscal year 2015, the first year of the Choice Program’s implementation, total obligations for Choice Program health care services accounted for about 4.7 percent of the $8.7 billion VA obligated for all community care services that year. However, as more care was provided through the Choice Program in fiscal years 2016 and 2017, obligations for Choice Program care grew steadily, while obligations for care delivered through other VA community care programs decreased. In fiscal year 2017, total obligations for Choice Program health care services accounted for about 39 percent of the $11.16 billion VA obligated for all community care services that year. 
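As a rough check on the obligation shares just described, the sketch below converts the stated percentages into approximate dollar obligations. The inputs are the rounded figures reported in the text, so the products are approximations rather than the exact amounts shown in the tables.

```python
def implied_obligations(total_billions: float, share_percent: float) -> float:
    """Approximate dollar obligations implied by a share of total community care obligations."""
    return total_billions * share_percent / 100

fy2015_choice = implied_obligations(8.7, 4.7)    # fiscal year 2015: 4.7 percent of $8.7 billion
fy2017_choice = implied_obligations(11.16, 39)   # fiscal year 2017: 39 percent of $11.16 billion
print(f"FY2015 Choice Program health care obligations: about ${fy2015_choice:.2f} billion")
print(f"FY2017 Choice Program health care obligations: about ${fy2017_choice:.2f} billion")
```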
See table 3. As shown in Table 4, below, of the $10.37 billion in Choice Program funds that were obligated between fiscal year 2015 and fiscal year 2017, $6.28 billion (or about 61 percent) of the funds were obligated for Choice Program health care services. About $1.76 billion (or 17 percent) of total Choice Program funds obligated between fiscal year 2015 and fiscal year 2017 were obligated for administrative costs. The remaining $2.33 billion (about 22 percent) were obligated for medical services other than those authorized under the Choice Program. As we previously reported, VHA experienced a projected funding gap in its medical services appropriation account in fiscal year 2015, largely due to lower-than-expected obligations for the Choice Program, higher-than-expected obligations for other VA community care programs, and unanticipated obligations for hepatitis C drugs. To address the projected funding gap, on July 31, 2015, VA obtained temporary authority to use Choice Program funds between July 31, 2015 and September 30, 2015 for amounts obligated on or after May 1, 2015 to furnish medical services other than those that it authorized under the Choice Program. Later, in fiscal year 2016 and fiscal year 2017, VA de-obligated about $420 million of the Choice Program funds it had obligated for other VA community care programs and hepatitis C drugs in fiscal year 2015 because they were never expended. Time Allowed to Complete VA’s Choice Program Appointment Scheduling Process Significantly Exceeds the Choice Act’s Required 30-Day Time Frame for Routine Care Our analysis of VA’s scheduling process indicates that veterans who are referred to the Choice Program for routine care because they are time- eligible could potentially wait up to 70 calendar days to obtain care, if VAMCs and TPAs take the maximum amount of time allowed by VA’s process. About 90 percent of Choice Program referrals in fiscal years 2015 and 2016 were scheduled under the time-eligibility process, which means that the majority of veterans referred to the program would have been subject to this potential wait time for an appointment for routine care. This 70-day potential wait time is in contrast to the Choice Act’s required time frame, which is that eligible veterans receive Choice Program care no more than 30 days from the date an appointment is deemed clinically appropriate by a VHA clinician (referred to as the clinically indicated date), or if no such determination has been made, 30 days from the date the veteran prefers to receive care. According to VHA policy, a VHA clinician’s clinically indicated date determination must be based upon the needs of the patient, and it should be the earliest date that it would be clinically appropriate for the veteran to receive care. Therefore, if there is no clinical reason that care should be delayed, a veteran’s clinically indicated date could be the same date that the VHA clinician determined the veteran needed care. The potential wait time of about 70 calendar days for time-eligible veterans to receive routine care through the Choice Program encompasses 18 or more calendar days for VAMCs to prepare veterans’ Choice Program referrals and potentially another 52 calendar days for appointments to occur through the TPAs’ scheduling process, as follows: VAMCs’ process for preparing routine Choice Program referrals. 
According to VHA policies and guidance, VAMC staff have at least 18 calendar days to confirm that veterans want to be referred to the Choice Program and to send veterans’ referrals to the TPAs: They have 2 business days (or up to 4 calendar days) after a VHA clinician has determined the veteran needs care to begin contacting an eligible veteran by telephone to offer them a referral to the Choice Program. They have up to 14 calendar days after initiating contact to reach the veteran by telephone or letter and confirm that the veteran wants to be referred to the Choice Program. After confirming that a veteran wants to be referred to the Choice Program, however, VA has not set a limit on the number of days VAMCs should take to compile relevant clinical information and send referrals to the TPAs. TPAs’ process for scheduling routine Choice Program appointments. Through its contracts with the TPAs, VA has established a process under which a veteran could potentially wait another 52 calendar days from the date the TPA receives the VAMC’s Choice Program referral for a routine care appointment to take place. This includes up to 16 business days (or 22 calendar days) after receiving a referral to confirm the veteran’s decision to opt in to the Choice Program and create an authorization. The contracts further state that, for time-eligible veterans, an appointment shall take place within 30 calendar days of the clinically indicated date, the authorization creation date, or the veteran’s preferred date, whichever occurs later: The TPA has 2 business days to review the VAMC’s referral and accept it if it contains sufficient information to proceed with appointment scheduling. The TPA has 4 business days to contact the veteran by telephone and confirm they want to opt in to the Choice Program (which means that the veteran wants to receive care through the Choice Program and have the TPA proceed with appointment scheduling). If the veteran is not reached by telephone, the TPA has 10 business days for the veteran to respond to a letter confirming that they want to opt in, at which point the TPA creates the Choice Program authorization. If the authorization is created after the veteran’s preferred date or after the clinically indicated date on the VAMC’s referral has already passed, the TPA has 30 calendar days from the authorization creation date for an appointment for routine care to take place. The TPA can use up to 15 business days of this 30- calendar-day time frame to contact providers and successfully schedule the veteran’s Choice Program appointment. See figure 3 for an illustration of the potential wait time of approximately 70 calendar days for time-eligible veterans to receive routine care through the Choice Program. The process VA established for time-eligible veterans to receive routine care through the Choice Program—which could potentially take 70 days to complete—is not consistent with the requirement that veterans receive care within 30 days of their clinically indicated dates (where available) as applicable under the Choice Act. Furthermore, according to the federal internal control standard for control activities, agencies should design control activities—such as through policies and procedures—that will help ensure federal programs meet their objectives and respond to any risks to meeting those objectives. 
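The approximately 70-calendar-day figure illustrated in figure 3 can be reproduced by summing the maximum time allowed at each step described above. The short sketch below does that arithmetic; the step labels are ours, and, as the text notes, the step of compiling clinical information and sending the referral has no specified limit and is therefore excluded from the total.

```python
# Maximum calendar days allowed at each step of the routine, time-eligible process,
# as described in VHA policies and the TPA contracts summarized above.
VAMC_STEPS = {
    "Begin contacting veteran (2 business days, up to 4 calendar days)": 4,
    "Reach veteran and confirm referral (up to 14 calendar days)": 14,
    # Compiling clinical information and sending the referral has no specified limit.
}
TPA_STEPS = {
    "Accept referral and confirm opt-in (16 business days, up to 22 calendar days)": 22,
    "Appointment occurs (within 30 calendar days of authorization creation)": 30,
}

vamc_days = sum(VAMC_STEPS.values())
tpa_days = sum(TPA_STEPS.values())
print(f"VAMC referral preparation: at least {vamc_days} calendar days")
print(f"TPA scheduling through appointment: up to {tpa_days} calendar days")
print(f"Potential overall wait: about {vamc_days + tpa_days} calendar days")
```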
A key reason that veterans’ overall wait times for Choice Program care could potentially exceed the Choice Act’s 30-day wait-time requirement is that the process VA and VHA designed did not include a limit on the number of days VAMCs have to complete a key step of the process— compiling relevant clinical information and sending referrals to the TPAs after veterans have agreed to be referred to the Choice Program. While the process sets forth time frames for the other steps VAMCs and TPAs must complete to process referrals and schedule appointments, VA and VHA have not specified how many days VAMCs have to send veterans’ Choice Program referrals to the TPAs. VHA has no comprehensive policy directive for the Choice Program, and neither its consult management directive nor its outpatient appointment scheduling directive specifies an amount of time within which VAMCs should prepare Choice Program referrals. Another reason that veterans’ overall wait times for Choice Program care could potentially exceed the Choice Act’s 30-day wait-time requirement is that after VA and VHA implemented their policies, they did not review and address risks that were identified through their actual experience in operating the program. In response to a letter we sent in March 2017, VA’s Deputy General Counsel for Legal Policy said that, based on VA’s and VHA’s experiences with actual operation of the Choice Program since November 2014, “the practical reality” has been that the 30-day wait-time goal VA established just prior to the program’s implementation cannot always be met. VA has not disclosed what timeliness goals it would apply under a future consolidated community care program. We note, however, that VA currently has no timeliness goals for its existing individually authorized community care program and cannot determine the amount of time veterans wait, on average, to receive care through that program, which has accounted for a significant portion of veterans’ community care utilization. We recommended in May 2013 that VA analyze the amount of time veterans wait to see providers in its individually authorized community care program and apply the same wait-time goals to that care that it uses to monitor wait times at VHA medical facilities. VA concurred with the recommendation to conduct an analysis and reported that it was in the process of building wait-time indicators to measure wait-time performance for individually authorized VA community care. VHA has since updated its wait-time goal for care delivered within VHA medical facilities—which is that care must be delivered within 30 days of veterans’ clinically indicated dates (where available). However, VA has not applied that same goal to its individually authorized VA community care program nor begun measuring wait-time performance for that program. Timeliness of appointments is an essential component of quality health care; delays in care have been shown to negatively affect patients’ morbidity, mortality, and quality of life. Without specifying wait-time goals that are achievable, and without designing appointment scheduling processes that are consistent with those goals, VA lacks assurance that veterans are receiving care from community providers in a timely manner. It also lacks a means for comparing the timeliness of veterans’ community care with that of care delivered within VHA medical facilities. 
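For reference, the wait-time requirement discussed above can be expressed as a simple date calculation: care should be furnished within 30 days of the clinically indicated date if one exists, and otherwise within 30 days of the veteran's preferred date. The sketch below is a minimal illustration with hypothetical dates; it is not drawn from any VA or TPA system.

```python
from datetime import date, timedelta
from typing import Optional

def choice_act_deadline(clinically_indicated: Optional[date],
                        preferred: date,
                        window_days: int = 30) -> date:
    """Date by which care should be furnished under the Choice Act wait-time requirement:
    30 days from the clinically indicated date or, if none exists, from the preferred date."""
    anchor = clinically_indicated if clinically_indicated is not None else preferred
    return anchor + timedelta(days=window_days)

# Hypothetical example dates for illustration only.
print(choice_act_deadline(date(2016, 1, 4), date(2016, 1, 10)))  # 2016-02-03
print(choice_act_deadline(None, date(2016, 1, 10)))              # 2016-02-09
```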
Actual Wait Times for Choice Program Care Have Been Lengthy for Selected Veterans, and VHA's Monitoring of Veterans' Access Is Limited by Incomplete and Unreliable Data In 2016, Selected Veterans Experienced Lengthy Overall Wait Times to Receive Routine Care and Urgent Care through the Choice Program To examine selected veterans' actual wait times to receive routine care and urgent care through the Choice Program, we conducted a manual review of a random, non-generalizable sample of 196 Choice Program authorizations. The TPAs created these authorizations between January 2016 and April 2016 in response to referrals sent by six selected VAMCs. Our manual review of veterans' VA electronic health records and the TPAs' records for our non-generalizable sample of 55 routine care authorizations and 53 urgent care authorizations for which the TPAs succeeded in scheduling appointments identified the following average wait times: For the 55 routine care authorizations in our sample, it took VAMC staff an average of 24 calendar days after the veterans' need for routine care was identified to contact the veterans and confirm that they wanted to be referred to the Choice Program, compile relevant clinical information, and send veterans' referrals to the TPAs. It took an average of 27 calendar days for the VAMCs to complete these actions for the 53 urgent care authorizations in our sample. For these routine care authorizations, it took the TPAs an average of 14 calendar days to accept referrals and reach veterans by telephone or letter for the veterans to opt in to the Choice Program. It took the TPAs an average of 18 calendar days to complete these actions for the urgent care authorizations in our sample. After the TPAs succeeded in scheduling veterans' appointments for routine care, an average of 26 calendar days elapsed before veterans in our sample completed their initial appointments with Choice Program providers. For urgent care authorizations in our sample, it took an average of 18 days for the veterans to complete their initial appointments after the TPAs scheduled them. See the following text box for specific examples of the overall wait times experienced by some veterans in the samples of routine and urgent Choice Program authorizations we reviewed. Examples of Delays Experienced by Veterans for Whom the Choice Program Third Party Administrators (TPA) Scheduled Appointments One veteran was referred to the Choice Program for magnetic resonance imaging (MRI) of the neck and lower back because these services were unavailable at a Veterans Health Administration (VHA) medical facility. It took almost 3 weeks for Department of Veterans Affairs (VA) medical center (VAMC) staff to prepare his Choice Program referral for routine care and send it to the TPA, and then it took an additional 2 months after the VAMC sent the referral for the veteran to receive care. Notes in the veteran's VA electronic health record indicated that his follow-up appointment with a VHA neurosurgeon was at risk of being rescheduled because the VAMC had not received the results of the MRI after the appointment with the Choice Program provider occurred. Ultimately, the veteran's appointment with the VHA neurosurgeon—where the imaging results and treatment options were discussed—did not occur until almost 6 months after the VHA clinician originally identified the need for the MRI. One veteran was referred to the Choice Program because she needed maternity care, which is generally not available at VHA medical facilities.
Almost a month and a half elapsed from the time VAMC staff confirmed her pregnancy (when she was 6 weeks pregnant) to when the VAMC sent the Choice Program referral for urgent care to the TPA. It then took 2 additional weeks for the TPA to make an unsuccessful attempt to contact the veteran to schedule a prenatal appointment; by that point, she was almost 15 weeks pregnant. The veteran called the TPA back, but when the TPA had yet to schedule an appointment by the time she was 18 weeks pregnant, the veteran finally scheduled her initial prenatal appointment herself, almost 3 months after her pregnancy was confirmed by VAMC staff.

One veteran was referred to the Choice Program for thoracic surgery to address a growth on his lung because there was a wait for care at a VHA medical facility. TPA documentation we reviewed indicated that VAMC staff contacted the TPA four times to inquire about the status of the veteran’s appointment, and the TPA contacted five Choice Program providers in its unsuccessful attempts to schedule the urgent appointment for the veteran. Ultimately, the veteran scheduled his own initial appointment with a thoracic surgeon in the community and informed the TPA that he had done so. The veteran’s initial appointment occurred 3 weeks after the VAMC sent his referral to the TPA.

We also found that veterans in our sample experienced lengthy overall wait times to receive care when the TPAs returned their authorizations to the VAMC without scheduling appointments. When veterans’ Choice Program authorizations are returned, VAMCs must attempt to arrange care through other means—such as through another VA community care program, a new Choice Program referral, or at another VHA medical facility. Among the 88 returned authorizations in our sample, we determined that 53 veterans eventually received care through other means after their authorizations were returned. These 53 veterans ended up waiting an average of 111 days after the VHA clinician originally determined they needed care until their first appointment with a VHA clinician or with a community provider occurred. See the text box below for some examples of delays experienced by veterans in the sample of 88 returned Choice Program authorizations we reviewed.

Examples of Delays Experienced by Veterans Whose Authorizations Were Returned to Department of Veterans Affairs Medical Centers (VAMC) by the Choice Program Third Party Administrators (TPA)

The VAMC took almost 3 ½ months to refer one veteran to a physical therapist to address her pelvic floor prolapse. When the preferred provider listed in the VAMC’s referral was outside the TPA’s network, the TPA sent a message to the VAMC via its web-based portal to ask if it should try scheduling the appointment with a different provider. By the time VAMC staff responded to the message in the TPA’s portal, the TPA had already returned the authorization—almost 2 weeks after accepting it. Two months later, the VAMC realized that the veteran still needed this care and sent a new Choice Program referral to the TPA. It then took the veteran another 2 ½ months to attend her first appointment. Overall, this veteran waited more than 8 months to receive physical therapy.

One veteran who was eligible for the Choice Program because he resided more than 40 miles from a VHA medical facility contacted the TPA to request an appointment with a urologist. More than a month later, the TPA contacted the VAMC via its web-based portal to request a referral for the veteran.
VAMC staff responded to the TPA two days later and stated (correctly) that because the veteran was distance-eligible, no referral was required. Four days after receiving the VAMC’s response, the TPA succeeded in scheduling an appointment. However, the veteran declined it because the TPA had scheduled the appointment with a neurologist (a specialist who treats conditions affecting the brain, spinal cord, and nerves) rather than a urologist (a specialist who treats conditions affecting the urinary tract and male reproductive organs). Ultimately, the veteran ended up seeing a urologist at a VAMC nearly 5 months after he originally contacted the TPA to request care.

It took about 2 ½ weeks for the VAMC to send one veteran’s referral for pain management to the TPA after a VHA clinician originally determined he needed these services. However, information the TPA needed for scheduling the Choice Program appointment was missing from the VAMC’s referral. The TPA requested the information from the VAMC twice using its web-based portal, but VAMC staff did not reply, and the TPA returned the authorization 2 weeks after receiving it. It then took another month before the veteran ended up receiving pain management services at a VAMC. Overall, this veteran waited almost 2 ½ months for pain management services.

After we shared the results of our preliminary analysis with VHA officials in December 2016, VHA required its medical facilities to manually review a sample of about 5,000 Choice Program authorizations that were created in July, August, and September of 2016 for four types of Choice Program care—mammography, gastroenterology, cardiology, and neurology. The purpose of this review was to analyze (1) the timeliness with which VAMCs sent referrals to the TPAs, and (2) veterans’ overall wait times for Choice Program care. VHA calculated the average wait times across these four types of care for each of its 18 Veterans Integrated Service Networks (VISN). VHA’s analysis of data collected by VAMCs identified the following average wait times when veterans were referred to the Choice Program because there was a greater-than-30-day wait time for an appointment at a VHA medical facility:

Referral wait times. VISN-level averages ranged from 6 to 53 days for VAMC staff to contact veterans and confirm that they wanted to be referred to the Choice Program, compile relevant clinical information, and send veterans’ referrals to the TPAs. The national average was 19 days.

Overall wait times. From the time veterans’ need for care was identified until they attended initial Choice Program appointments, average overall wait times ranged from 34 to 91 days across VHA’s 18 VISNs. The national average was 51 days.

When veterans were referred to the Choice Program because services were unavailable at a VHA medical facility, VHA’s analysis of VAMCs’ self-reported data identified the following average wait times:

Referral wait times. VISN-level averages ranged from 6 to 21 days for VAMC staff to contact veterans and confirm that they wanted to be referred to the Choice Program, compile relevant clinical information, and send veterans’ referrals to the TPAs. The national average was 15 days.

Overall wait times. From the time veterans’ need for care was identified until they attended initial Choice Program appointments, average overall wait times ranged from 39 to 56 days across VHA’s 18 VISNs. The national average was 47 days.
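For illustration only, the short sketch below shows how averages of this kind can be computed once the relevant dates are captured for each authorization. The record layout and field names (need_identified, referral_sent, first_appointment) are hypothetical and do not represent VHA’s or the TPAs’ actual systems or data.

```python
# Illustrative sketch (not VHA's actual code): computing average referral wait
# times and overall wait times from per-authorization date records.
from datetime import date
from statistics import mean

# Hypothetical records; in practice there would be one per authorization reviewed.
authorizations = [
    {"need_identified": date(2016, 7, 1),
     "referral_sent": date(2016, 7, 20),
     "first_appointment": date(2016, 8, 22)},
    {"need_identified": date(2016, 8, 3),
     "referral_sent": date(2016, 8, 12),
     "first_appointment": date(2016, 9, 19)},
]

# Referral wait: from identification of need until the referral is sent to the TPA.
referral_wait = mean((a["referral_sent"] - a["need_identified"]).days
                     for a in authorizations)
# Overall wait: from identification of need until the initial appointment occurs.
overall_wait = mean((a["first_appointment"] - a["need_identified"]).days
                    for a in authorizations)

print(f"Average referral wait: {referral_wait:.0f} days")
print(f"Average overall wait: {overall_wait:.0f} days")
```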
VHA’s Monitoring of Veterans’ Access to Choice Program Care Is Limited by a Lack of Complete, Reliable Data

Our analysis indicates that VHA’s ability to monitor Choice Program access is limited because the data VHA uses are not always accurate and reliable, and VHA lacks certain data that are needed to effectively monitor the program. As discussed below, multiple factors contribute to these data limitations. According to federal internal control standards for information and communication and for monitoring, agencies should use quality information to achieve the entity’s objectives, internally and externally communicate quality information, and establish activities to monitor the quality of performance over time and evaluate the results. Without complete, reliable Choice Program data, VHA cannot determine whether the Choice Program has achieved the goals of (1) alleviating the wait times veterans have experienced when seeking care at VHA medical facilities, and (2) easing geographic burdens veterans may face to access care at VHA medical facilities.

VHA Cannot Systematically Calculate the Average Number of Days VAMCs Take to Prepare Choice Program Referrals

The data VHA currently uses to monitor the timeliness of Choice Program appointment scheduling and completion do not capture the days it takes for VAMCs to prepare veterans’ referrals and send them to the TPAs. This is because VHA has not standardized the manner in which VHA clinicians and VAMC staff categorize consults that lead to Choice Program referrals. We observed inconsistency in the titles of consults that were associated with the non-generalizable sample of Choice Program authorizations we reviewed. For example, consult titles sometimes included the word “Choice,” but in other cases they included the words “non-VA care.” Some of the consult titles indicated the criterion under which the veteran was eligible for the Choice Program and the type of care the veteran needed (for example, “Choice-First Physical Therapy”), while other consult titles only indicated the type of care the veteran needed (for example, “pain management”). We observed this variability among consult titles both within single VAMCs and across all six of the VAMCs we selected for review. According to documentation VHA officials provided to us in December 2016, they planned on implementing a process for standardizing the consult titles associated with Choice Program referrals over the course of calendar year 2017. Originally, they planned on piloting the process at four VAMCs beginning in February 2017 and expected to gradually roll out standardized consult titles across all other VAMCs over the remainder of calendar year 2017. However, in late June and early July 2017, we followed up with the six VAMCs in our sample, and at that time, managers from only one of the VAMCs said that they had implemented the new process for standardizing consult titles associated with Choice Program referrals. When we interviewed VHA officials again in September 2017, they acknowledged that they had been delayed in implementing standardized consult titles, and they provided documentation indicating that they were just beginning to roll out the new process nationwide.
In the absence of standardized consult titles for the Choice Program, VHA has no automated way to electronically extract data from VA’s electronic health record and calculate the average number of days it takes for VAMC staff to prepare veterans’ Choice Program referrals after veterans have agreed to be referred to the program. Further, without standardized consult titles, VHA cannot monitor veterans’ overall wait times—from the time VHA clinicians determine veterans need care until the veterans attend their first appointments with Choice Program providers. The lack of standardized consult titles also prevents VHA from tracking average overall wait times and monitoring the timeliness of care for veterans whose Choice Program authorizations are returned by the TPAs without scheduled appointments.

Available VHA Data Do Not Capture the Time Spent by TPAs in Accepting VAMCs’ Referrals and Opting Veterans in to the Choice Program

The data VHA currently uses to monitor the timeliness of Choice Program appointments capture only a portion of the process that the TPAs carry out to schedule veterans’ appointments after they receive referrals from VAMCs. Specifically, VHA’s data reflect the timeliness of appointment scheduling and completion after the TPAs create authorizations in their appointment scheduling systems, which (according to VA’s contracts, as of June 1, 2016) the TPAs must do only after they have received all necessary information from VA and the veteran has opted in to the Choice Program. Therefore, VHA’s timeliness data do not capture the time TPAs spend (1) reviewing and accepting VAMCs’ referrals, and (2) contacting veterans to confirm that they want to opt in to the Choice Program.

Data related to the timeliness of Choice Program appointment scheduling. When we asked how they monitor the timeliness of Choice Program appointment scheduling, VHA officials provided us the following types of data, all of which reflect the time that elapses only after veterans have opted in to the Choice Program and the TPAs have created authorizations:

the average number of business days the TPAs take after creating authorizations to schedule appointments for routine and urgent care,

the percentage of appointments for routine care that the TPAs schedule within 5 business days after they create authorizations, and

the percentage of appointments for urgent care that the TPAs schedule within 2 business days after they create authorizations.

Data related to the timeliness with which initial Choice Program appointments occur.
VHA officials provided us data on the timeliness with which initial Choice Program appointments have occurred; however, as shown below, almost all of these data reflect the timeliness with which appointments occur only after veterans have opted in to the Choice Program and the TPAs have either created authorizations or successfully scheduled veterans’ appointments:

the average number of business days after the TPAs create authorizations in which appointments for routine and urgent care occur;

the percentage of appointments for routine care that are completed within 30, 60, 90, and 120 business days or more after the TPAs create an authorization;

the percentage of appointments for routine care that are completed within 30 calendar days of either (1) the TPA’s scheduling of the appointment, (2) the clinically indicated date on the VAMC’s referral, or (3) the veteran’s preferred date; and

the percentage of appointments for urgent care that are completed within 2 calendar days of the TPAs creating the authorizations.

See figure 4 for an illustration of how VHA’s data capture only a portion of the Choice Program process to obtain care. In September 2017, VHA officials told us that they recently began implementing an interim solution that would allow them to track veterans’ overall wait times for Choice Program and other VA community care—from the time VHA clinicians determine veterans need the care until the veterans attend their first appointments with community providers. Specifically, this interim solution requires VAMC staff to enter unique identification numbers on VHA clinicians’ requests for care and on the Choice Program referrals they send to the TPAs. This unique identification number is then carried over to the Choice Program authorizations that are created in the TPAs’ systems. According to VHA officials, the unique identification number creates a link between VHA’s data and the TPAs’ data, so that VHA can monitor the timeliness of each step of the Choice Program referral and appointment scheduling process. However, the success of VHA’s interim solution relies on VAMC staff consistently and accurately entering the unique identification numbers on both the VHA clinicians’ requests for care and on Choice Program referrals, a process that is prone to error. VHA officials said it is their long-term goal to automate the process by which VHA’s data are linked with TPAs’ data in the consolidated community care program they are planning to implement. Because, as previously explained, VHA lacks data on the average timeliness with which VAMCs prepare Choice Program referrals, and VHA also lacks data on the average amount of time that elapses between when the TPAs receive VAMCs’ referrals and when veterans opt in with the TPAs, VHA cannot track veterans’ overall wait times for Choice Program care—from the time VHA clinicians determine that veterans need care until the veterans attend their first appointments with Choice Program providers. In addition, the lack of data on the timeliness with which the TPAs have (1) accepted VAMCs’ referrals and (2) determined that veterans wish to opt in to the program also prevents VHA from assessing whether the TPAs’ average timeliness in completing these actions has improved over time.
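To illustrate the kind of linkage VHA’s interim solution is intended to enable, the sketch below joins hypothetical VHA-side consult records to hypothetical TPA-side authorization records on a shared unique identification number and computes the elapsed days for each step of the process. The record layouts and field names are assumptions for illustration; they are not drawn from VHA’s or the TPAs’ systems.

```python
# Illustrative sketch, not VHA's or the TPAs' actual systems: joining VHA
# consult records to TPA authorization records on a shared unique ID so that
# each step of the referral and scheduling process can be timed.
from datetime import date

# Hypothetical VHA-side records keyed by unique identification number.
vha_consults = {
    "UID-001": {"need_identified": date(2017, 9, 1),
                "referral_sent": date(2017, 9, 15)},
}

# Hypothetical TPA-side records carrying the same unique ID.
tpa_authorizations = {
    "UID-001": {"veteran_opted_in": date(2017, 9, 25),
                "appointment_completed": date(2017, 10, 20)},
}

for uid, consult in vha_consults.items():
    auth = tpa_authorizations.get(uid)
    if auth is None:
        continue  # No linked TPA record; downstream steps cannot be timed.
    steps = {
        "VAMC referral preparation":
            (consult["referral_sent"] - consult["need_identified"]).days,
        "TPA acceptance and veteran opt-in":
            (auth["veteran_opted_in"] - consult["referral_sent"]).days,
        "Scheduling through appointment completion":
            (auth["appointment_completed"] - auth["veteran_opted_in"]).days,
        "Overall wait":
            (auth["appointment_completed"] - consult["need_identified"]).days,
    }
    for step, days in steps.items():
        print(f"{uid}: {step}: {days} days")
```

As the sketch suggests, the linkage only works if the identification number is recorded identically on both sides, which is why reliance on manual entry leaves the interim solution prone to error.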
Clinically Indicated Dates Are Sometimes Changed by VAMC Staff Before They Send Choice Program Referrals to the TPAs

Our analysis of a sample of 196 Choice Program authorizations shows that another way in which VHA’s monitoring of veterans’ access to care is limited by available data is that the clinically indicated dates included on referrals that VAMCs send to the TPAs may not be accurate. We found that the clinically indicated dates on VAMCs’ referrals were not always identical to the clinically indicated dates that were originally entered into VA’s electronic health record by the VHA clinicians who treated the veterans. VHA’s policy directive on consult management and its Choice Program standard operating procedure for VAMCs state that the clinically indicated date is to be determined by the VHA clinician who is treating the veteran. However, in reviewing VA’s electronic health records for our sample of 196 Choice Program authorizations, we identified 60 cases where the clinically indicated dates VAMC staff entered on Choice Program referrals they sent to the TPAs differed from the clinically indicated dates that were originally entered by VHA clinicians. In 46 of these 60 cases, VAMC staff entered clinically indicated dates on the Choice Program referrals that were later than the dates originally determined by the VHA clinicians, which would make the veterans’ wait times appear to be shorter than they actually were. VHA could not explain why the dates differed. Clinically indicated dates are manually entered on VAMCs’ electronic referrals to the TPAs, a practice that is subject to error or manipulation. It is unclear if VAMC staff mistakenly entered incorrect dates, or if they inappropriately entered later dates when the VAMC was delayed in contacting the veteran, compiling relevant clinical information, and sending the referral to the TPA. If the clinically indicated dates on VAMCs’ Choice Program referrals differ from the ones VHA clinicians originally entered, and there is no additional supporting documentation for the change, there is a risk that VHA’s data will not accurately reflect veterans’ actual wait times. Specifically, VHA will not be able to determine how often veterans receive Choice Program care within the Choice Act’s required 30-day time frame.

VAMCs and TPAs Frequently Re-Categorize Routine Choice Program Referrals as Urgent Referrals, Sometimes Inappropriately

Another limitation of VHA’s monitoring of veterans’ access to Choice Program care is that VAMCs and TPAs do not always categorize referrals in accordance with the contractual definition for urgent care when they are processing referrals and scheduling appointments for veterans. According to VA’s contracts with the TPAs, Choice Program referrals are to be marked as “urgent” when a VHA clinician has determined that the veteran needs care that (1) is considered essential to evaluate and stabilize conditions and (2) if not provided would likely result in unacceptable morbidity or pain when there is a significant delay in evaluation or treatment. It is VA’s goal that the TPAs schedule appointments for urgent care and ensure that they take place within 2 business days of accepting the referrals from VA. Among the sample of 53 Choice Program authorizations for urgent care we reviewed, VHA and TPA documentation showed that in 35 cases (about 66 percent), VHA clinicians originally determined that veterans needed routine care, but VAMC or TPA staff later re-categorized the referrals or authorizations as urgent.
In 4 of these 35 cases, we found documentation showing that VHA clinicians had reviewed the pending referrals and determined that the veterans’ clinical conditions or diagnoses warranted re-categorizing the veterans’ routine care referrals or authorizations as urgent. In 31 other cases we reviewed, however, we found no documentation indicating that a VHA clinician had identified a clinical reason for the veteran to receive care faster. In at least 15 of these 31 cases, it appeared that the VAMC or TPA staff changed the status of the referral or authorization in an effort to administratively expedite appointment scheduling when they were delayed in sending referrals and scheduling veterans’ Choice Program appointments. According to the VA contracting officer who is responsible for the Choice Program contracts, VA’s contracts with the TPAs do not include provisions for separating clinically urgent Choice Program referrals and authorizations from those that the VAMC or the TPA has decided to expedite for administrative reasons (such as when the veteran or VAMC staff has expressed frustration with a delay in the referral or appointment scheduling process). If Choice Program referrals for routine care are inappropriately categorized as urgent, VHA’s data on the timeliness of urgent appointment scheduling and completion will not accurately reflect the extent to which veterans who have a clinical need for urgent care are receiving it within the time frames required by the TPAs’ contracts.

The TPAs’ Choice Program Performance Data Did Not Become Comparable until 18 Months After the Program Began, Which Limits VA’s Ability to Monitor Whether Access Has Improved

The authorization creation date is the primary starting point from which VHA monitors the TPAs’ timeliness in appointment scheduling and the extent to which veterans’ initial Choice Program appointments occur in a timely manner. However, when initially implementing the Choice Program—beginning in November 2014—the two TPAs had differing interpretations of contractual requirements relating to when they should create authorizations in their appointment scheduling systems. According to VA contracting officials and VHA community care officials we interviewed, at the start of the program, one of the TPAs was creating authorizations as soon as it accepted referrals from VAMCs, but the other was waiting until after veterans opted in to the Choice Program to create authorizations. It was not until May 2016 (about 18 months into the Choice Program’s implementation) that VA modified its contracts to clarify that the TPAs are to create Choice Program authorizations only after they have contacted the veterans and confirmed that they want to opt in to the program. Due to these differing interpretations, VA lacked comparable performance data for the two TPAs for the first 18 months of the Choice Program’s expected three-year implementation. Therefore, it could not compare the timeliness of access nationwide. In addition, since VA modified the TPAs’ contracts midway through the Choice Program’s implementation, officials can only comparatively examine whether the timeliness of both TPAs’ appointment scheduling and completion has improved since June 2016, which is when the relevant contract modification took effect.
TPAs Sometimes Select Incorrect Return Reasons or Inappropriately Return Choice Program Authorizations without Making Appointments

VHA collects data and monitors various reasons the TPAs return Choice Program authorizations to VAMCs without making appointments. Each month, VA monitors how each TPA performs on Choice Program performance measures related to the timeliness of appointment scheduling. Authorizations that are returned for reasons that are attributable to the TPA—such as a lack of network providers in close proximity to the veteran’s residence—negatively impact the TPAs’ monthly performance measures. In our sample, we found that VHA’s data on the TPAs’ reasons for returning Choice Program authorizations are not reliable. Specifically, we questioned the validity of the TPAs’ return of 20 out of the 88 authorizations in our sample, for the following reasons:

In 11 of the 20 cases, we found VHA or TPA documentation that substantiated the return, but the TPAs selected the incorrect return reasons when they sent the authorizations back to VA. For example, in one case, the TPA was unable to schedule an appointment with a primary care provider—even after contacting 11 different network providers—but the TPA staff returned the authorization to the VAMC indicating that the veteran had declined care. TPA officials who reviewed this authorization with us agreed that it was inappropriate to mark this authorization as having been returned because the veteran declined care and that their staff instead should have indicated that they had been unable to schedule an appointment with a network provider.

In the remaining 9 of the 20 cases, we could find no VHA or TPA documentation to substantiate the reasons the TPAs selected when they returned the authorizations to VA, nor any other reasons for return. For example, the TPAs incorrectly selected “missing VA data” as the reason they returned 5 of these 9 authorizations. Based on VHA and TPA documentation we reviewed, the VAMCs’ referrals were complete and not missing any of the information the TPAs needed to proceed with appointment scheduling.

TPA officials could not explain why their staff selected incorrect return reasons or inappropriately returned authorizations for which they should have kept attempting to schedule appointments. However, TPA staff must manually select return reasons when they send authorizations back to VAMCs, a process that is subject to error or manipulation. There is a process by which VA’s contracting officer’s representatives validate the monthly data submitted by the TPAs, but it cannot identify the data reliability issues we found when manually reviewing VHA and TPA documentation associated with a sample of returned Choice Program authorizations. VHA officials told us that VA’s contracting officer’s representatives do not have access to veterans’ electronic health records, which means that they cannot check whether VHA documentation substantiates the return reasons selected by the TPAs. Without reliable data on reasons that veterans have been unable to obtain appointments through the Choice Program, VHA cannot properly target its efforts to address challenges—such as network inadequacy—that may be causing the TPAs to return authorizations without making appointments. In addition, the lack of reliable data makes it difficult for VA to monitor whether the TPAs are meeting their contractual obligations, such as establishing adequate networks of community providers.
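For illustration, the sketch below shows the kind of automated consistency check that could supplement manual validation of TPA-selected return reasons by flagging returns whose selected reason conflicts with other documentation. The return reasons, field names, and documentation flags shown are hypothetical and simplified; they are not drawn from VA’s or the TPAs’ actual systems.

```python
# Illustrative sketch of an automated consistency check for TPA-selected return
# reasons. All records and flags below are hypothetical.
returned_authorizations = [
    {"id": "AUTH-100", "return_reason": "veteran declined care",
     "documented_decline": False, "providers_contacted": 11},
    {"id": "AUTH-101", "return_reason": "missing VA data",
     "referral_complete": True},
]

def flag_inconsistencies(auth):
    """Return a list of reasons the selected return reason looks questionable."""
    flags = []
    if (auth["return_reason"] == "veteran declined care"
            and not auth.get("documented_decline", False)):
        flags.append("no documentation that the veteran declined care")
    if (auth["return_reason"] == "missing VA data"
            and auth.get("referral_complete", False)):
        flags.append("referral appears complete; 'missing VA data' unsupported")
    return flags

for auth in returned_authorizations:
    for flag in flag_inconsistencies(auth):
        print(f"{auth['id']}: {flag}")
```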
VHA Does Not Have Performance Measures for Monitoring Average Driving Times between Veterans’ Homes and the TPAs’ Choice Program Network Providers

Another way in which VHA’s monitoring of veterans’ access is limited is that VA lacks contract performance measures that would provide VA and VHA with data related to veterans’ driving times to access care from the TPAs’ Choice Program network providers. Such performance measures would help VA monitor the TPAs’ network adequacy. In contrast, for PC3, VA does collect data from the TPAs to monitor urban, rural, and highly rural veterans’ maximum commute times to specialty care providers, providers of higher level care, primary care providers, and mammography and maternity care providers. When we asked why VA had not established driving time performance measures for the Choice Program, a VHA official responsible for monitoring the Choice Program contracts told us he thought that these performance measures had simply been overlooked in the haste to implement the Choice Program. VA concurred with a recommendation we made in our December 2016 report about VA health care for women veterans, in which (among other things) we stated that the department should monitor women veterans’ driving times to access sex-specific care through the Choice Program and VA’s future community care contracts. However, VA stated in its June and October 2017 written updates on actions it has taken to address this recommendation that it does not intend to modify the current Choice Program contracts to address our recommendation because the contracts will be ending soon and it would be too costly to do so. Without driving time performance measures for the Choice Program, VHA lacks assurance that the TPAs’ networks include a sufficient number of community providers in close proximity to where veterans live, and it cannot monitor the extent to which veterans’ geographic access to care has improved or diminished.

Multiple Factors Have Adversely Affected Veterans’ Access to Care under the Choice Program, Providing Potential Lessons Learned for VA’s Future Community Care Program

Officials we interviewed from VA’s contracting office, VHA’s Office of Community Care, and both of the TPAs, along with leadership officials, managers, and staff from the six selected VAMCs, told us about various factors that have directly or indirectly affected veterans’ access to care throughout the Choice Program’s implementation. Chief among these are (1) administrative burden associated with the Choice Program’s complex referral and appointment scheduling processes; (2) inadequate VAMC staffing and poor communication between VHA and the VAMCs; and (3) the TPAs’ slow development of a robust provider network. We also identified actions VA and VHA have taken to address these factors. (See appendix VI for additional information about actions that VA and VHA took to address these three access-related issues for the Choice Program.) To the extent that these factors persist under the consolidated community care program that VA plans to establish, they will continue to adversely affect veterans’ access to care.

VA and VHA Took Several Actions to Address Administrative Burden Caused by Complex Choice Program Processes, but Opportunities Still Exist to Improve Care Coordination

VHA and TPA officials, as well as managers and staff from the six selected VAMCs, told us they encountered administrative burden associated with the complexities of the Choice Program’s referral and appointment scheduling processes.
Further, they lacked care coordination tools throughout the time they were operating the Choice Program, which affected their ability to provide timely care to veterans. Among the main issues cited were the following:

Manual referral processes and lack of TPA access to veterans’ records. To prepare veterans’ Choice Program referrals, VAMC staff had to follow a manual, time-consuming process to retrieve and collate key contact and clinical information from veterans’ VA electronic health records. This was because—throughout most of the Choice Program’s implementation—VA had no system for automatically generating referral packages that contained all of this information; nor did TPA staff have access to veterans’ VA electronic health records. If VAMC staff made mistakes (such as mistyping or inadvertently omitting veterans’ telephone numbers or addresses) or if the referrals were missing clinical information that the TPAs needed for appointment scheduling purposes, the TPAs had to either contact the VAMC to correct or obtain the missing information or return the referrals to VA without attempting to schedule appointments. These manual processes impeded the VAMCs’ progress in preparing referrals and the TPAs’ progress in scheduling veterans’ Choice Program appointments.

Limited availability of care coordination tools and dependence on telephone-based customer service for appointment scheduling. A lack of care coordination tools and near-constant telephone calls also kept VAMC and TPA staff from efficiently processing veterans’ referrals and scheduling appointments. For example, the Choice Program had no web-based portal through which VAMC staff and veterans could view the TPAs’ step-by-step progress in scheduling appointments. While both of the TPAs had portals that allowed VAMC staff (but not veterans) to obtain certain information—such as whether the TPA had already scheduled an appointment—the portals did not show if, or when, veterans’ referrals had been accepted, the dates and times of the TPAs’ attempts to contact veterans, or the number of community providers the TPA had contacted in its attempts to schedule an appointment. VAMC staff we interviewed said that while they could submit written messages to the TPAs through the portals, TPA staff did not always answer these messages in a timely manner. This, in turn, made telephone calls between veterans, the VAMCs, and the TPAs the most effective form of follow-up regarding veterans’ Choice Program referrals, according to VAMC managers and staff. Officials from one selected VAMC estimated that their community care staff (which included about 30 employees) was answering approximately 10,000 calls per month, and another VAMC had hired a full-time staff person just to answer telephone calls.

Workload associated with re-authorizing veterans’ care. VAMC and TPA staff also told us they faced a lengthy administrative process to re-authorize care if veterans’ Choice Program authorizations expired before veterans received care or if veterans needed services that were outside the scope of their original authorizations. The TPAs referred to these as “secondary authorization requests” or “requests for additional services.” Without these re-authorizations, veterans’ care from community providers could be delayed or interrupted. VAMC and TPA staff had to process a high volume of these requests for two main reasons.
First, the Choice Program originally had a 60-day limit on episodes of care, which meant that all appointments within the episode of care had to be completed within 60 days of the initial date of service. Even if the veteran needed care that could routinely be expected to outlast this 60-day time frame (such as maternity care or cancer treatment), community providers and the TPAs would still have to request additional referrals from the VAMCs to authorize the remaining care. Second, TPAs would have to request additional referrals if an episode of Choice Program care was already in progress and the veteran needed services that were not specifically authorized in the VAMC’s original referral. According to some VAMC managers and staff, this generated significant workload for the VAMCs. Officials from one of the selected VAMCs said it had to hire a full-time nurse just to process secondary authorization requests.

Manual post-appointment follow-up processes. According to VAMC managers and staff we interviewed, the manual processes used for post-appointment follow-up also added to delays for veterans seeking care through the Choice Program. After an episode of care is complete—whether services are delivered at a VHA medical facility or in the community—VHA’s policy requires VAMC staff to document that care was provided and make the results of encounters available to VHA clinicians by entering medical records or other clinical information into the veteran’s VA electronic health record. When medical records from the community provider became available, VAMC staff had to retrieve copies from the TPAs’ portals and scan them into veterans’ VA electronic health records. (See appendix V for an illustration of this process.) VAMC staff described this as a very time-consuming process because it could take months for claims or medical records from Choice Program appointments to appear in the TPAs’ portals. At the time of our interviews in the summer of 2016, managers from two of the VAMCs in our sample said they each had backlogs of more than 6,000 Choice Program and other community care consults to complete. These backlogs adversely affected veterans’ access to Choice Program care because the time VAMC staff spent attempting to complete some veterans’ consults could not be spent on preparing other veterans’ Choice Program referrals.

Over the course of the Choice Program’s implementation, VA and VHA took multiple actions to address administrative burden, including the following. Opportunities exist to improve or build on these actions as VA moves forward with the consolidated community care program it plans to implement.

Implementation of a web-based tool to automate Choice Program referral preparation. In early 2016, to improve the process of gathering information from veterans’ VA electronic health records to prepare Choice Program referrals, staff from two VAMCs developed a web-based tool—called the “referral documentation” (REFDOC) tool. According to VHA documentation, the REFDOC tool automates the process of gathering necessary information and assembling it in a standardized format for veterans’ Choice Program referrals. VHA’s initial analyses of the REFDOC tool’s effectiveness found that it sped up the process of preparing Choice Program referrals by about 20 minutes per referral, which helped reduce the administrative burden associated with preparing referrals. However, VHA’s nationwide dissemination of the tool to all of the VAMCs was slowed by limitations of VA’s information technology systems.
As of November 2016 (about 9 months after the tool was created), it had only been implemented at 18 of VHA’s 170 VAMCs. VHA gradually made the tool available at the remaining VAMCs between November 2016 and May 2017.

Standardized episodes of care. In April 2017, VHA approved standardized episodes of care—or “bundles” of clinically necessary medical services and procedures—that are to be authorized whenever veterans are referred to community providers for specified types of care. This was intended to help address administrative burden associated with clinical review processes and improve veterans’ access to care. To start, VHA approved standardized episodes of care for 15 different types of care, including physical therapy, maternity care, and optometry. VA and VHA documentation indicates that they intend to roll out additional standardized episodes of care over time and continue using them once VA transitions to the consolidated community care program it is planning to implement.

Acquisition of a secure e-mail system and a mechanism for TPAs and community providers to remotely access veterans’ VA electronic health records. VA recently established two different care coordination tools that were intended to make the process of providing veterans’ medical records to Choice Program and other VA community care providers more efficient.

Secure e-mail system. In the spring of 2017, VA acquired software that allows VAMC managers and staff to e-mail encrypted files containing veterans’ medical records to the TPAs and community providers. Only the intended recipient can decrypt and respond to messages containing the files. According to VHA documentation, this secure e-mail system was intended to improve the efficiency of coordinating veterans’ Choice Program care and address potential security risks associated with printing paper copies of veterans’ medical records and sending them to the TPAs or community providers via fax or U.S. mail.

Remote access to veterans’ VA electronic health records. In May 2017, VHA began offering a secure, web-based application called the Community Viewer as a tool for community providers nationwide to have access to assigned veterans’ VA electronic health records. Like the secure e-mail system, this tool is intended to improve the efficiency of coordinating veterans’ Choice Program care. However, VHA’s ability to seamlessly coordinate care with community providers remains limited—even with the secure e-mail system and the Community Viewer—because these tools only facilitate a one-way transfer of the information needed to coordinate the care veterans receive at VHA medical facilities and in the community. For the purposes of care coordination, it is important that information sharing among all participants concerned with a veteran’s Choice Program or other VA community care—including VHA clinicians, the TPAs, community providers, and the veteran—is as seamless as possible. According to the federal internal control standard for information and communication, agencies should internally and externally communicate the necessary information to achieve their objectives.
While the secure e-mail system and Community Viewer tool provide an interim solution for VAMCs to transfer information from veterans’ VA electronic health records to the TPAs and community providers, they do not provide a means by which VAMCs or veterans can (1) view step-by-step progress in scheduling appointments, or (2) electronically receive the clinical results of Choice Program or other VA community care encounters. Building such a capability into the future consolidated community care program that VA plans to implement would allow VHA to improve the care coordination processes that exist in the Choice Program.

Pilot programs for VAMC staff to schedule Choice Program appointments. In July 2016 and October 2016, VHA began implementing pilot projects, whereby staff at two VAMCs took over from the TPAs the responsibility of scheduling veterans’ Choice Program appointments. Specifically, VA modified its contracts with TriWest and Health Net to implement the two VAMC scheduling pilots at the Alaska VA Health Care System and the Fargo VA Health Care System, respectively. In these two locations, VAMC staff schedule veterans’ appointments and send relevant clinical documentation to the Choice Program providers. According to VHA officials, this had the potential to improve veterans’ access to care by improving the efficiency of the Choice Program appointment scheduling process. The results of these two VAMC scheduling pilots are particularly relevant, given that VA’s RFP, as amended, for its planned consolidated community care program indicates that VAMCs—rather than TPAs—will carry out community care appointment scheduling, unless VA exercises a contract option for the TPAs to provide such services for VAMCs that request them. However, while VHA officials told us that they have taken some steps to begin evaluating the effectiveness of the pilots in improving appointment scheduling, these efforts have not been completed. The lack of an evaluation of the two VAMC scheduling pilots is inconsistent with the federal internal control standard for risk assessment, which stipulates that an agency should identify, analyze, and respond to risks related to achieving defined objectives. In addition, the federal internal control standard for monitoring calls for ongoing monitoring to assess the effectiveness of management strategies, make needed corrections if shortcomings are identified, and determine if corrective actions are achieving desired outcomes. Without evaluating the results of the scheduling pilots at the Alaska and Fargo VA Health Care Systems, VA lacks assurance that VAMC staff can schedule veterans’ community care appointments in a more timely manner than TPA staff otherwise would. Furthermore, VA is missing an opportunity to inform its planning and decisions for scheduling under its planned consolidated community care program.

Inadequate Staffing and Ad Hoc Communication Contributed to Choice Program Access Delays, and Actions Taken Have Been Focused on the Staffing Concerns

TPA officials and managers and staff from the six selected VAMCs frequently discussed staffing- and communication-related factors that adversely affected the timeliness of veterans’ Choice Program care. During the course of our review, they cited the following factors that delayed VAMCs’ processing of veterans’ referrals and TPAs’ scheduling of appointments:

Staff vacancies and turnover.
TPA officials and managers and staff at selected VAMCs said that VAMCs and TPAs were initially understaffed as Choice Program implementation began.

VAMCs. Managers at the six selected VAMCs told us that after implementing the Choice Program, they hired additional community care staff, with one of them increasing its community care staffing level almost five-fold by July 2016. Some VAMC managers told us in 2016 and again in 2017 that they still struggled with staff retention and vacancies—among both managers and staff. Managers at five of the VAMCs said they relied on overtime for their existing staff to keep up with the Choice Program workload. According to community care managers from four of the selected VAMCs, it takes about 6 months to recruit, hire, and train new community care staff, and this process could take more time if the VAMC’s human resources office is also understaffed, which was the case for at least one of the six VAMCs. That VAMC had not had a permanent community care manager for more than 2 years as of July 2017—which covered the majority of the Choice Program’s original 3-year implementation.

TPAs. Officials from both TPAs also told us that they initially underestimated the workload associated with scheduling Choice Program appointments, and they brought on additional staff, including sub-contractors, to better manage their workloads as utilization of the program increased. One TPA opened eight operations centers in addition to the two it already had when the Choice Program was initially implemented.

Ineffective mechanisms for VAMCs to resolve problems. VAMC managers and staff we interviewed also said they lacked useful mechanisms and points-of-contact when they needed to resolve issues and problems they were having with referral and appointment scheduling processes. VHA established a web-based Choice Program “issue tracker” system for VAMCs to report problems to VHA’s Office of Community Care. However, staff at four of the selected VAMCs told us they rarely used the tracker, and some had stopped using it altogether, because it took too long for VHA’s Office of Community Care or the TPAs to respond and resolve the issues (if they responded at all), and they did not see the value in taking the time to report them via this mechanism. Managers at one of the VAMCs also told us about a phone line that their TPA had established to escalate and resolve urgent issues, but the TPA told the VAMC only to use it for emergencies.

VHA’s untimely communication of Choice Program policy and process changes. According to managers and staff at the six selected VAMCs, VA and VHA have issued numerous contract modifications and policy changes with little advance notice throughout the Choice Program’s implementation. According to these VAMC managers and staff, the untimely communication of changes created confusion at the VAMC level that affected veterans’ access to Choice Program care. We reviewed documentation showing that from October 2014 (when it modified the TPAs’ contracts to add responsibilities related to Choice Program administration) until July 2017, VA modified each TPA’s contract about 40 times. Many of these contract modifications—along with other legislative and regulatory changes that VA implemented during this period—changed VAMC or TPA processes related to Choice Program referrals and appointment scheduling.
Many of the VAMC managers and staff we interviewed said that they struggled to keep up with the contract modifications and policy changes, that VHA’s Office of Community Care did not always leave adequate time to prepare for them, and that they never really became proficient at new processes before additional changes occurred. This meant that training sometimes happened after the contract modifications or VHA policy changes had already gone into effect. For example, managers and staff at three of the selected VAMCs told us that they were not informed in advance about a June 2016 contract modification that required the TPAs to return Choice Program authorizations to VAMCs if they failed to schedule appointments within required time frames. This contract modification had the potential to significantly increase VAMCs’ workloads, because they would have to arrange veterans’ care through other means once the authorizations were returned. According to individuals at two of these three VAMCs, they first heard about this change from TPA staff, rather than from VHA.

VHA took the following two actions intended to help address staffing-related factors that adversely affected the timeliness of veterans’ Choice Program care.

Staffing tool for VAMCs to estimate needs. In the spring of 2017, VHA developed a tool that is intended to help VAMCs project their staffing needs for the consolidated community care program VA plans to implement. VHA used workload data and site visit observations to develop the tool. Among the six selected VAMC managers we interviewed, impressions about the reasonableness of the staffing estimates generated by the community care staffing tool were mixed. For example, managers at two of the VAMCs said that the tool likely underestimated the number of staff they would need to handle referrals and appointment scheduling once VA transitions to the consolidated community care program. In contrast, managers from two other VAMCs thought that the tool’s staffing estimates seemed about right.

Co-locating TPA staff at selected VAMCs to assist with resolution of problems. To help facilitate problem resolution between VAMCs and the TPAs as they work to schedule veterans’ Choice Program appointments, VA modified the TPAs’ contracts in November 2015 to allow for TPA staff to be co-located at selected VAMCs. VHA officials expected that one potential benefit of co-locating TPA staff would be that fewer veterans’ Choice Program referrals would be returned to VAMCs because of missing clinical information, since TPA staff could help resolve such problems locally before the TPA returned referrals. As of May 2017, TPA staff were working at 70 of VHA’s 170 VAMCs—or about 40 percent of all VAMCs. Similar care coordination arrangements may exist under the consolidated community care program VA is planning to implement, if VA exercises a contract option for the TPAs to provide such services at VAMCs that request them. However, the communication-related factors that VHA and TPA officials identified as affecting the timeliness of veterans’ Choice Program care remain. VHA relied on ad hoc communications such as memoranda, fact sheets, e-mails, national conference calls, and occasional web-based trainings to communicate policy and process changes to VAMCs throughout the Choice Program’s implementation.
Our interviews with VAMC managers and staff suggest that these were not the most effective methods of communication because messages about key changes sometimes lacked sufficient detail or failed to reach the VAMC staff responsible for implementing them in a timely manner. According to the federal internal control standard for control activities, agencies should implement control activities through their policies and procedures, which document the responsibilities of managers and staff who are responsible for implementing a program. Among other things, this may include management reviewing and updating policies and day-to-day procedures in a timely manner after a significant change in the program has occurred. VHA has no comprehensive policy directive or operations manual for the Choice Program, and its broader policy directive for VA community care programs has not been updated since January 2013. As a result, VAMC staff have operated in a frequently changing environment with no definitive, up-to-date reference source for policy and processes to consult, such as a comprehensive policy directive or operations manual. Instead, VAMC staff have had to keep track of the Choice Program’s policy and process changes through VHA’s various ad hoc communications. This poses a risk to VHA, as it increases the likelihood that VAMCs will implement new policies and processes inconsistently. In addition, there is risk that VAMC managers and staff will not always be aware of the most current policies and processes. Unless a comprehensive policy directive or operations manual is created, those risks could remain for the consolidated community care program VA is planning to establish.

Inadequate Provider Networks Affected Timely Access, but VHA Plans to Improve Available Information Related to Provider Capacity and Veteran Demand for Future TPAs

According to VAMC managers and TPA officials we interviewed, the TPAs’ inadequate networks of community providers affected both the timeliness with which veterans received Choice Program care and the extent to which veterans were able to access community providers located close to their homes. In September 2015, about 11 months after the Choice Program was implemented, VA contracting officials sent corrective action letters to both TPAs, citing network adequacy (i.e., the number, mix, and geographic distribution of network providers) as a concern. TPA officials we interviewed acknowledged that their networks initially were not adequate to meet demand for Choice Program care. From the TPAs’ perspective, the brief transition period before the Choice Program began operations in November 2014 was not enough time to strengthen the community provider networks they had previously established under PC3, another VHA community care program. Furthermore, the TPAs told us that VA had not provided them with sufficient data on the expected demand for Choice Program care—by clinical specialty and zip code—prior to or after the Choice Program’s implementation. The overall number of community providers participating in the TPAs’ Choice Program networks nationwide grew dramatically over the following year—from almost 39,000 providers in September 2015 to more than 161,000 providers as of September 2016. However, at the time of our review, managers at five of the six selected VAMCs told us that they still observed TPA network inadequacies that impeded veterans’ access to Choice Program care.
Similarly, managers at three VAMCs in our sample said that key community providers—including large academic medical centers—have refused to join the TPAs’ networks or dropped out of the networks after joining them, often because the TPAs had not paid them in a timely manner for the services they provided. Establishing adequate networks of Choice Program providers in rural areas has been particularly difficult. Officials at two of the three rural VAMCs in our sample pointed to general health care workforce shortages in rural areas as one cause of the TPAs’ network inadequacy—a challenge that is not limited to the Choice Program or VA’s health care system. According to a December 2015 analysis by VHA researchers, the majority of network providers in two of the three VISNs examined were located within 40 miles of VAMCs, leaving large geographic areas of these VISNs (particularly rural areas) outside the 40-mile radius with few network providers. For example, only 3.8 percent of primary care providers and 3.2 percent of behavioral health providers in VISN 20 (which covers Alaska, Idaho, Oregon, and Washington) were located more than 40 miles from VAMCs within that VISN. While the areas lacking network providers generally have fewer veterans relative to other areas within these VISNs, the analysis by VHA researchers suggests that veterans living in these areas are likely to have difficulty accessing Choice Program network providers that are located closer to their homes than the nearest VAMC, which is over 40 miles away.

VA and VHA have tried to address network inadequacy that existed under the Choice Program and either have taken or plan to take additional actions to address this issue for the community care program VA plans to implement, including the following.

Establishment of Choice Program provider agreement process. To help address inadequacies in the TPAs’ provider networks and improve veterans’ access to care under the Choice Program, VHA established the Choice Program provider agreement process in February 2016. This process allowed VAMCs to establish agreements with community providers, schedule veterans’ appointments, and reimburse the providers directly (using Choice Program funds) when the TPAs failed to schedule veterans’ appointments for reasons relating to network inadequacy, among others. Originally, the VAMCs were required to send veterans’ referrals to the TPAs and wait for them to be returned before they could proceed with arranging care through a Choice Program provider agreement. While this process had the potential to increase the availability of providers for the Choice Program, it did not immediately improve the timeliness of veterans’ Choice Program care because veterans still had to wait for as long as it took the VAMCs to send their referrals to the TPAs and for TPAs to return them before the VAMCs could proceed with arranging care through Choice Program provider agreements. According to the policies and contractual requirements that were in effect at the time, it could have taken up to 40 calendar days after a VHA clinician first identified the veteran’s need for care until the TPA returned the referral and the VAMC could proceed with arranging care through a Choice Program provider agreement.
However, in March 2017, VHA updated the Choice Program provider agreement process so that—if the TPAs were returning a high volume of a VAMC's referrals for one or more types of care—the VAMC could seek approval from its VISN and VHA's Office of Community Care to bypass the TPA and proceed directly to arranging that type of care through Choice Program provider agreements. This had the potential to improve the timeliness of veterans' access to Choice Program care because it eliminated the steps of sending referrals to the TPAs and waiting for them to be returned. Improving quality of information given to future TPAs. To help inform the recruitment of network providers for the consolidated community care program VA plans to establish, VA plans to provide future TPAs more robust data than it provided the current TPAs at the start of the Choice Program. In particular, VA's RFP for the consolidated community care program, as amended, indicates that VA will provide (1) zip-code-level data on the number of authorizations that were issued in fiscal year 2015 for specific types of care (e.g., chemotherapy and obstetrics) and (2) VAMC-level data on the clinical specialties with the greatest wait times for appointments at VAMCs. These local-level data could help TPAs estimate the number of network providers of various specialties they will need to recruit in specific localities if awarded a contract for the consolidated community care program that VA is planning to implement. Performing market assessments. In preparation for the consolidated community care program VA plans to establish, VA and VHA officials are planning to conduct market assessments in 96 markets nationwide. Through these market assessments, officials told us, VA will (1) examine the clinical capacity that currently exists within VHA medical facilities and among community providers, (2) assess veterans' current and future demand for health care services, and (3) develop long-term plans for ensuring that veterans will have access to high-quality health care services—whether they receive care from VHA clinicians or from community providers. According to VHA officials, the market assessments will help inform network provider recruitment efforts for the consolidated community care program VA is planning to implement. In addition, VHA officials told us that the market assessments will help VISN- and VAMC-level leaders make more informed, strategic decisions about whether it is more efficient to maintain or build capacity for delivering particular types of care within VHA medical facilities or to routinely purchase certain types of care in the community. In November 2017, VHA officials told us that they expect to begin conducting the market assessments early in calendar year 2018, and the officials estimate that it will take about 18 months to complete assessments for all 96 markets.
Conclusions
The Choice Program is approaching the end of its life, and with plans to consolidate it with VA's other community care programs, opportunities to improve the program are diminishing. Congress created the Choice Program in 2014 in response to longstanding challenges in veterans' access to care delivered within VHA medical facilities. However, we found numerous operational and oversight weaknesses with VHA's management of scheduling veterans' medical appointments through the Choice Program.
While it may not be feasible for VA and VHA to implement corrective actions to address all of our findings before the Choice Program ends, it is imperative that VA incorporate lessons learned from the Choice Program when it implements the consolidated community care program it has planned. First, we found VHA’s process for scheduling appointments for veterans through the Choice Program was not consistent with statutory requirements. The Choice Act requires veterans to receive care no more than 30 days from the date an appointment is deemed clinically appropriate or from the date the veteran prefers to receive care; however, we found that veterans could potentially wait up to 70 calendar days to receive routine care through the Choice Program. In effect, we found that in 2016, some veterans’ actual wait times far exceeded 30 days. Although VA has made some relevant contract modifications and issued guidance to address Choice Program wait times, VHA has not adjusted the Choice Program’s appointment scheduling process or established timeliness standards for all steps of the process. In addition, VHA’s monitoring of access to Choice Program care has been limited by incomplete and unreliable data. In particular, the data VHA uses preclude it from accurately identifying the number of days that occur within each phase of the process, from initial referral to the actual appointment. Furthermore, a lack of controls has allowed for inappropriate changes to be made in veterans’ clinically indicated dates and routine versus urgent care categorizations, affecting VA’s ability to monitor whether veterans are receiving Choice Program care in a timely manner. The lack of reliable data and performance measures also hinders VHA’s ability to oversee the program and identify problems and corrective actions. Further, we found that VHA is missing out on opportunities to enhance its design of the planned consolidated community care program. For example, VHA has not fully evaluated its pilot programs for scheduling appointments nor developed tools such as a mechanism that would allow the seamless sharing of information between VHA and the TPAs. Lastly, we found that VHA often relied on inefficient, ad hoc methods of sharing information (such as memoranda, fact sheets and emails), which often failed to reach the VAMC managers and staff responsible for implementing the program. After the Choice Program ends, VA anticipates that veterans will continue to receive care from non-VHA providers under the consolidated community care program that it is planning to implement. VA’s and VHA’s design of the future program can benefit from the lessons learned under the Choice Program. Ignoring these lessons learned and the challenges that have arisen under the Choice Program as VA and VHA design the future consolidated program would only increase VA’s risk for not being able to ensure that all veterans will receive timely access to care in the community. 
Recommendations for Executive Action
To ensure that VA and VHA incorporate lessons learned from the Choice Program as they develop and implement a consolidated VA community care program, we are making the following 10 recommendations:
The Under Secretary for Health should establish an achievable wait-time goal for the consolidated community care program that VA plans to implement that will permit VHA to monitor whether veterans are receiving VA community care within time frames that are comparable to the amount of time they would otherwise wait to receive care at VHA medical facilities. (Recommendation 1)
The Under Secretary for Health should design an appointment scheduling process for the consolidated community care program that VA plans to implement that sets forth time frames within which (1) veterans' referrals must be processed, (2) veterans' appointments must be scheduled, and (3) veterans' appointments must occur, which are consistent with the wait-time goal VHA has established for the program. (Recommendation 2)
The Under Secretary for Health should establish a mechanism that will allow VHA to systematically monitor the average number of days it takes for VAMCs to prepare referrals, for VAMCs or TPAs to schedule veterans' appointments, and for veterans' appointments to occur, under the consolidated community care program that VA plans to implement. (Recommendation 3)
The Under Secretary for Health should implement a mechanism to prevent veterans' clinically indicated dates from being modified by individuals other than VHA clinicians when veterans are referred to the consolidated community care program that VA plans to implement. (Recommendation 4)
The Under Secretary for Health should implement a mechanism to separate clinically urgent referrals and authorizations from those for which the VAMC or the TPA has decided to expedite appointment scheduling for administrative reasons. (Recommendation 5)
The Under Secretary for Health should (1) establish oversight mechanisms to ensure that VHA is collecting reliable data on the reasons that VAMC or TPA staff are unsuccessful in scheduling veterans' appointments through the consolidated community care program VA plans to implement, and (2) demonstrate that it has corrected any identified deficiencies. (Recommendation 6)
The Secretary of Veterans Affairs should ensure that the contracts for the consolidated community care program VA plans to implement include performance metrics that will allow VHA to monitor average driving times between veterans' homes and the practice locations of community providers that participate in the TPAs' networks. (Recommendation 7)
The Secretary of Veterans Affairs should establish a system for the consolidated community care program VA plans to implement to help facilitate seamless, efficient information sharing among VAMCs, VHA clinicians, TPAs, community providers, and veterans. Specifically, this system should allow all of these entities to electronically exchange information for the purposes of care coordination.
(Recommendation 8) The Under Secretary for Health should conduct a comprehensive evaluation of the outcomes of the two appointment scheduling pilots it established at the Alaska and Fargo VA Health Care Systems (where VAMC staff, rather than TPA staff, are responsible for scheduling veterans’ Choice Program appointments), which should include a comparison of the timeliness with which VAMC staff and TPA staff completed each step of the Choice Program appointment scheduling process, as well as the overall timeliness with which veterans received appointments. (Recommendation 9) The Under Secretary for Health should issue a comprehensive policy directive and operations manual for the consolidated community care program VA plans to implement and ensure that these documents are reviewed and updated in a timely manner after any significant changes to the program occur. (Recommendation 10) Agency Comments and Our Evaluation VA provided written comments on a draft of this report, which are reprinted in Appendix VII. In its comments, VA concurred with 8 of our 10 recommendations and described its plans for implementing them. VA stated that VHA’s Office of Community Care will work collaboratively with other VA and VHA offices to evaluate modifications to the current wait- time goals and measurement processes so that wait times for VA community care can be compared to wait times for care delivered at VHA medical facilities. VA did not concur with our recommendation to implement a mechanism to separate clinically urgent referrals and authorizations from those that are designated as urgent for administrative reasons. VA stated that because VAMC staff (rather than TPA staff) will be responsible for scheduling veterans’ appointments under the consolidated community care program it plans to implement, there would no longer be a need to separate clinically urgent referrals from those that need to be administratively expedited. However, we maintain that our recommendation is warranted. In particular, we found that VA’s data did not always accurately reflect the timeliness of urgent care because both VAMC and TPA staff inappropriately re-categorized some routine care referrals and authorizations as urgent ones for reasons unrelated to the veterans’ health conditions. Regardless of whether VAMC staff or TPA staff are responsible for appointment scheduling, VA will need to ensure that it uses reliable data to monitor the extent to which veterans receive urgent care within required time frames. Without a means of separating clinically urgent referrals and authorizations from ones for which the scheduling process must be administratively expedited, VA’s data on the timeliness of urgent care will continue to be unreliable. VA agreed in principle with our recommendation to issue a comprehensive policy directive and operations manual, but stated in its comments that it would wait to determine whether a comprehensive policy directive is needed until after the consolidated community care program has been fully implemented and any interim implementation challenges have been resolved. However, when implementing a new program, it is important that agencies establish the program’s structure, responsibilities, and authorities at the beginning to help ensure that the new program’s objectives are met. Relying on outdated policies and unreliable communication methods increases VA’s risk of encountering foreseeable challenges. 
Without issuing a comprehensive policy directive and operations manual before the start of the new program, VA risks experiencing untimely communication issues similar to those that affected veterans’ access to care throughout the Choice Program’s implementation. A comprehensive policy directive and operations manual that could be updated as changes occur would give VAMCs a definitive source of real-time, up-to-date information and reduce the likelihood that VAMCs will implement new policies and processes inconsistently under the future program. We are sending copies of this report to the Secretary of Veterans Affairs, the Under Secretary for Health, appropriate congressional committees, and other interested parties. This report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Sharon M. Silas at (202) 512-7114 or silass@gao.gov or A. Nicole Clowers at (202) 512-7114 or clowersa@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. Appendix I: Scope and Methodology for Examining Choice Program Wait Times and the Data VHA Uses to Monitor Access To examine selected veterans’ actual wait times to receive routine care and urgent care through the Choice Program and the information VHA uses to monitor access to care under the program, we took five key steps. We (1) analyzed Choice Program appointment wait times for selected veterans using a sample of 196 Choice Program authorizations for routine and urgent care; (2) reviewed VHA’s analysis of Choice Program appointment wait times for a sample of about 5,000 Choice Program authorizations; (3) reviewed data VHA uses to monitor the timeliness of Choice Program care and reasons that the TPAs have returned Choice Program referrals without making appointments; (4) interviewed VA, VHA, and TPA officials; and (5) reviewed federal internal control standards, as follows. 1. Our analysis of Choice Program wait times for a sample of 196 authorizations. To analyze the timeliness of Choice Program appointment scheduling and completion for a sample of veterans, we selected six VAMCs and a random, non-generalizable sample of 196 authorizations for veterans who were referred to the Choice Program by those six VAMCs between January 2016 and April 2016. We judgmentally selected the six VAMCs to include variation in geographic location, with three VAMCs that serve rural veteran populations and three VAMCs that serve urban veteran populations. In addition, three of the VAMCs were served by Health Net, and three were served by TriWest. (See table 5.) To select our random, non-generalizable sample of 196 Choice Program authorizations, we obtained VA data on all authorizations created by the TPAs between January and April 2016 for veterans who were referred to the program by the six VAMCs we selected—a universe of about 55,000 authorizations. 
From these 55,000 authorizations, we randomly selected: 55 routine care authorizations (about 10 authorizations per VAMC) for which the TPAs scheduled appointments for veterans, 53 urgent care authorizations (about 10 authorizations per VAMC) for which the TPAs scheduled appointments for veterans, and 88 routine and urgent care authorizations (about 15 authorizations per VAMC) that the TPAs returned to VA without scheduling appointments for any one of the following three reasons—(1) VA requested the authorization be returned, (2) VA data were missing from the referral, and (3) the veteran declined or did not want Choice Program care. For all 196 Choice Program authorizations in our sample, we manually reviewed VHA documentation (specifically, the veterans’ VA electronic health records) and TPA documentation to track the number of calendar days that elapsed at each step of the Choice Program appointment scheduling process. For the authorizations that the TPAs returned to the VAMCs without making appointments, we examined VHA and TPA documentation to determine whether the veterans eventually obtained care through other means—such as through another VA community care program, a different Choice Program referral, or at a VHA medical facility—and how long it took to receive that care. Determining whether veterans in our sample experienced clinical harm or adverse clinical outcomes because of delays in the VAMCs’ or TPAs’ processing of their referrals and authorizations was outside the scope of our review. We selected our sample of 55 routine care and 53 urgent care authorizations for which the TPAs succeeded in scheduling appointments to include only authorizations for which the TPAs did not meet VA’s appointment scheduling goals at one phase of the appointment scheduling process: when the TPAs attempt to schedule appointments after the veterans have opted in to the program. This was to ensure that our sample included only authorizations for which scheduling was delayed, so that we could examine the potential causes of appointment scheduling delays and whether delays also occurred at other phases of the process (such as when VAMCs were preparing the veterans’ referrals or when the TPAs were attempting to reach the veterans for them to opt in to the program). We omitted this phase of the appointment scheduling process when calculating the timeliness of appointment completion for the 55 routine care authorizations and 53 urgent care authorizations in our sample. Rather than reporting veterans’ overall wait times for these authorizations, we report the average number of calendar days that elapsed (1) while VAMCs were preparing veterans’ Choice Program referrals, (2) while the TPAs were attempting to reach veterans for them to opt in to the program, and (3) while veterans waited to attend their appointments after the TPAs succeeded in scheduling them. To assess the reliability of the authorization data we used, we interviewed knowledgeable agency officials, manually reviewed the content of the data, and electronically tested it for missing values. We concluded that these data were sufficiently reliable for the purposes of our reporting objectives. The findings from our review of Choice Program authorizations cannot be generalized beyond the VAMCs and the veterans’ Choice Program authorizations we reviewed. 2. VHA’s analysis of Choice Program wait times for a sample of about 5,000 authorizations. 
We obtained from VHA’s Office of Community Care the results of a nationwide analysis of Choice Program appointment timeliness it conducted in February 2017. Specifically, VHA directed its VAMCs to manually review veterans’ health records and TPA documentation and report observations for a non-generalizable sample of about 5,000 randomly selected Choice Program authorizations that were created between July and September of 2016. The sample was limited to authorizations for Choice Program appointments that had been scheduled for time- eligible veterans who needed four types of specialty care— mammography, gastroenterology, cardiology, and neurology. According to VHA officials, they limited their analysis to these four types of care because delayed treatment for any of these specialties could cause adverse health outcomes for patients. To assess the reliability of VHA’s data, we manually reviewed the results of its analysis and interviewed knowledgeable agency officials about potential outliers. We concluded that VHA’s data were sufficiently reliable for the purposes of our reporting objective. The results of VHA’s analysis cannot be generalized beyond the sample of Choice Program authorizations that it reviewed. 3. VHA data on timeliness of Choice Program appointments and the reasons TPAs return referrals without making appointments. To evaluate the information VHA uses to monitor access to care under the Choice Program, we reviewed data that VHA collects to monitor the timeliness with which the TPAs schedule appointments and the timeliness with which appointments occur after the TPAs have scheduled them. We also reviewed and tested the reliability of VHA data on the reasons the TPAs have returned Choice Program referrals to VAMCs without scheduling appointments, which may offer insights about access to care (e.g., the percentage of referrals that are returned due to a lack of providers in the TPAs’ networks). 4. Interviews with officials. We interviewed VA, VHA, and TPA officials responsible for administering the Choice Program contracts and overseeing implementation of the program. We interviewed these officials to gain an understanding of the processes they followed and the information they used to monitor veterans’ access to Choice Program care. 5. Federal internal control standards. We examined the results of our and VHA’s analyses and the information VHA uses to monitor veterans’ access to care under the program in the context of federal standards for internal control for (1) information and communication and (2) monitoring. The internal control standard for information and communication relates to management’s ability to use quality information to achieve the entity’s objectives. The internal control standard for monitoring relates to establishing activities to monitor the quality of performance over time and evaluating the results. Appendix II: Process for Veterans to Obtain Department of Veterans Affairs (VA) Choice Program Care if They Are Time-Eligiblea Appendix II: Process for Veterans to Obtain Department of Veterans Affairs (VA) Choice Program Care if They Are Time-Eligiblea If the veteran does not respond to the letter within 14 calendar days, a notification is sent to the veteran’s VA clinician so that they can determine if additional action should be taken. 
Appendix III: Process for Veterans to Obtain Department of Veterans Affairs (VA) Choice Program Care if They Are Distance-Eligible
Appendix IV: Comparison of Processes for Arranging Choice Program and Individually Authorized Community Care
The Veterans Health Administration (VHA) uses the time-eligible appointment scheduling process when the services needed are not available at a VHA medical facility or are not available within allowable wait times.
Appendix V: Process for Obtaining the Clinical Results of Choice Program Appointments
Appendix VI: Selected Actions Taken by VA and VHA to Address Choice Program Access Issues
We found 21 actions that the Department of Veterans Affairs (VA) and the Veterans Health Administration (VHA) took after the Choice Program's November 2014 implementation that were intended to help address issues related to veterans' access to care. Table 6, below, provides a chronological summary of the actions VA and VHA had taken as of August 2017 and the issues they were intended to address.
Appendix VIII: GAO Contacts and Staff Acknowledgments
GAO Contacts
Staff Acknowledgments
In addition to the contacts named above, Marcia A. Mann (Assistant Director), Alexis C. MacDonald (Analyst-in-Charge), Daniel Powers, and Michael Zose made major contributions to this report. Also contributing were Muriel Brown, Christine Davis, Helen Desaulniers, Krister Friday, Sandra George, Jacquelyn Hamilton, and Vikki Porter.
Related GAO Products
Veterans' Health Care: Preliminary Observations on Veterans' Access to Choice Program Care, GAO-17-397T (Washington, D.C.: March 7, 2017).
VA Health Care: Improved Monitoring Needed for Effective Oversight of Care for Women Veterans, GAO-17-52 (Washington, D.C.: December 2, 2016).
VA's Health Care Budget: In Response to a Projected Funding Gap in Fiscal Year 2015, VA Has Made Efforts to Better Manage Future Budgets, GAO-16-584 (Washington, D.C.: June 3, 2016).
Veterans' Health Care: Proper Plan Needed to Modernize System for Paying Community Providers, GAO-16-353 (Washington, D.C.: May 11, 2016).
VA Health Care: Actions Needed to Improve Monitoring and Oversight of Non-VA and Contract Care, GAO-15-654T (Washington, D.C.: June 1, 2015).
VA Health Care: Further Action Needed to Address Weaknesses in Management and Oversight of Non-VA Medical Care, GAO-14-696T (Washington, D.C.: June 18, 2014).
VA Health Care: Actions Needed to Improve Administration and Oversight of VA's Millennium Act Emergency Care Benefit, GAO-14-175 (Washington, D.C.: March 6, 2014).
VA Health Care: Management and Oversight of Fee Basis Care Need Improvement, GAO-13-441 (Washington, D.C.: May 31, 2013).
Why GAO Did This Study Congress created the Choice Program in 2014 to address longstanding challenges with veterans' access to care at VHA medical facilities. The Joint Explanatory Statement for the Consolidated Appropriations Act, 2016 included provisions for GAO to review veterans' access to care through the Choice Program. This report examines for Choice Program care (1) VA's appointment scheduling process, (2) the timeliness of appointments and the information VHA uses to monitor veterans' access; and (3) the factors that have adversely affected veterans' access and the steps VA and VHA have taken to address them for VA's future community care program. GAO reviewed applicable laws and regulations, VA's TPA contracts, and relevant VHA policies and guidance. Absent reliable national data, GAO also selected 6 of 170 VAMCs (selected for variation in geographic location and the TPAs that served them) and manually reviewed a random, non-generalizable sample of 196 Choice Program authorizations. The authorizations were created for veterans who were referred to the program between January and April of 2016, the most recent period for which data were available when GAO began its review. The sample of authorizations included 55 for routine care, 53 for urgent care, and 88 that the TPAs returned without scheduling appointments. GAO also obtained the results of VHA's non-generalizable analysis of wait times for a nationwide sample of about 5,000 Choice Program authorizations that were created for selected services between July and September of 2016. What GAO Found Through the Veterans Choice Program (Choice Program), eligible veterans may receive care from community providers when it is not readily accessible at Veterans' Health Administration (VHA) medical facilities. The Department of Veterans Affairs (VA) uses two contractors—or third party administrators (TPA)—to schedule most veterans' Choice Program appointments after receiving referrals from VA medical centers (VAMC). GAO found that veterans who are referred to the Choice Program for routine care because services are not available at VA in a timely manner could potentially wait up to 70 calendar days for care if VAMCs and the TPAs take the maximum amount of time VA allows to complete its appointment scheduling process. This is not consistent with the statutory requirement that veterans receive Choice Program care within 30 days of their clinically indicated date (when available), which is the soonest date that it would be appropriate for the veteran to receive care, according to a VHA clinician. Without designing appointment scheduling processes that are consistent with this requirement, VA lacks assurance that veterans will receive Choice Program care in a timely manner. GAO and VHA found that selected veterans experienced lengthy actual wait times for appointments in 2016, after manually reviewing separate samples of Choice Program authorizations. For example, when GAO analyzed 55 routine care authorizations that were created between January and April of 2016, it found that the process took at least 64 calendar days, on average. When VHA analyzed about 5,000 authorizations created between July and September of 2016, it took an average of 51 calendar days for veterans to receive care. 
a GAO excluded from its analysis the amount of time the TPA took to schedule the appointment and the overall wait time because its sample selection methodology differed from VHA's in a way that would have skewed these two averages but not the averages for the other segments of the process. GAO also found that VHA cannot systematically monitor the timeliness of veterans’ access to Choice Program care because it lacks complete, reliable data to do so. The data limitations GAO identified include: A lack of data on the timeliness of referring and opting veterans in to the program. GAO found that the data VHA uses to monitor the timeliness of Choice Program appointments do not capture the time it takes VAMCs to prepare veterans’ referrals and send them to the TPAs, nor do they capture the time spent by the TPAs in accepting VAMCs’ referrals and opting veterans in to the Choice Program. VHA has implemented an interim solution to monitor overall wait times that relies on VAMC staff consistently and accurately entering unique identification numbers on VHA clinicians’ requests for care and on Choice Program referrals, a process that is prone to error. Inaccuracy of clinically indicated dates. GAO found that clinically indicated dates (which are used to measure the timeliness of care) are sometimes changed by VAMC staff before they send Choice Program referrals to the TPAs, which could mask veterans’ true wait times. GAO found that VAMC staff entered later clinically indicated dates on referrals for about 23 percent of the 196 authorizations it reviewed. It is unclear if VAMC staff mistakenly entered incorrect dates manually, or if they inappropriately entered later dates when the VAMC was delayed in contacting the veteran, compiling relevant clinical information, and sending the referral to the TPA. Unreliable data on the timeliness of urgent care. GAO found that VAMCs and TPAs do not always categorize Choice Program referrals and authorizations in accordance with the contractual definition for urgent care. According to the contracts, a referral is to be marked as “urgent,” and an appointment is to take place within 2 days of the TPA accepting it, when a VHA clinician has determined that the needed care is (1) essential to evaluate and stabilize the veteran’s condition, and (2) if delayed would likely result in unacceptable morbidity or pain. GAO reviewed a sample of 53 urgent care authorizations and determined that about 28 percent of the authorizations were originally marked as routine care authorizations but were changed to urgent by VAMC or TPA staff, in an effort to administratively expedite appointment scheduling. Without complete, reliable data, VHA cannot determine whether the Choice Program has helped to achieve the goal of alleviating veterans’ wait times for care. GAO found that numerous factors adversely affected veterans’ access to care through the Choice Program. These factors include: (1) administrative burden caused by complexities of referral and appointment scheduling processes, (2) poor communication between VHA and its VAMCs, and (3) inadequacies in the networks of community providers established by the TPAs, including an insufficient number, mix, or geographic distribution of community providers. VA and VHA have taken numerous actions throughout the Choice Program’s operation that were intended to help address these factors, though not all access factors have been fully resolved. 
For example, to help address administrative burden and improve the process of coordinating veterans' Choice Program care, VA established a secure e-mail system and a mechanism for TPAs and community providers to remotely access veterans' VA electronic health records. However, these mechanisms only facilitate a one-way transfer of necessary information. They do not provide a means by which VAMCs or veterans can view the TPAs' step-by-step progress in scheduling appointments or electronically receive medical documentation associated with Choice Program appointments. While the Choice Program will soon end, VA anticipates that veterans will continue to receive community care under a similar program that VA plans to implement, which will consolidate the Choice Program and other VA community care programs. Incorporating lessons learned from the Choice Program into the implementation and administration of the new program could help VHA avoid similar challenges.
What GAO Recommends
For VA's future consolidated community care program, GAO is making 10 recommendations, which include: establishing an achievable wait-time goal for the community care program that will permit VHA to monitor whether veterans are receiving care within time frames that are comparable to the amount of time they would otherwise wait for care at VHA medical facilities; designing an appointment scheduling process that (1) is consistent with the wait-time goal and (2) sets forth time frames within which veterans' referrals must be processed, appointments must be scheduled, and appointments must occur; establishing mechanisms that allow VHA to systematically monitor the amount of time taken to prepare referrals, schedule appointments, and complete appointments; preventing veterans' clinically indicated dates from being modified by individuals other than VHA clinicians; separating clinically urgent referrals and authorizations from those for which the VAMC or the TPA has decided to expedite appointment scheduling for administrative reasons; and establishing a system that will help facilitate seamless, efficient care coordination and exchanges of information among VAMCs, VHA clinicians, TPAs, community providers, and veterans. VA generally agreed with all but one of GAO's recommendations, which was to separate clinically urgent referrals from those that are administratively expedited. GAO maintains that implementing this recommendation will help improve future monitoring of urgent care timeliness for reasons explained in the report.
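The monitoring mechanism described in these recommendations amounts to tracking the number of calendar days that elapse in each segment of the scheduling process: referral preparation, appointment scheduling, and the wait until the appointment occurs. The sketch below shows that calculation in minimal form; the milestone dates, record layout, and function name are hypothetical illustrations and do not reflect VHA's actual data or systems.

```python
from datetime import date

# Hypothetical milestones for one referral; a monitoring system would pull these
# from VAMC and TPA records rather than hard-coding them.
referral = {
    "clinically_indicated_date": date(2016, 3, 1),
    "referral_sent_to_tpa": date(2016, 3, 10),
    "appointment_scheduled": date(2016, 3, 24),
    "appointment_occurred": date(2016, 4, 18),
}

WAIT_TIME_GOAL_DAYS = 30  # the 30-day goal discussed in this report


def segment_days(r):
    """Return elapsed calendar days for each segment and the overall wait."""
    return {
        "referral_preparation": (r["referral_sent_to_tpa"] - r["clinically_indicated_date"]).days,
        "appointment_scheduling": (r["appointment_scheduled"] - r["referral_sent_to_tpa"]).days,
        "wait_for_appointment": (r["appointment_occurred"] - r["appointment_scheduled"]).days,
        "overall_wait": (r["appointment_occurred"] - r["clinically_indicated_date"]).days,
    }


segments = segment_days(referral)
for name, days in segments.items():
    print(f"{name}: {days} days")
print("meets 30-day goal:", segments["overall_wait"] <= WAIT_TIME_GOAL_DAYS)
```

Averaged across referrals, these segment durations are the measures that the recommendations above would have VHA monitor systematically.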
Background The federal government, states, colleges, students and their families all play important roles in financing higher education costs. Under Title IV of the Higher Education Act of 1965, as amended, the federal government offers students at all types of colleges financial assistance to help pay for their education, such as through the William D. Ford Federal Direct Loan (Federal Direct Loan), the Federal Pell Grant (Pell Grant), and the Federal Work-Study programs. Some of this aid is targeted toward students based on their financial need. For example, Education provided almost $27 billion in Pell Grants to low-income students in fiscal year 2017. States also provide funding to public colleges through state appropriations for operating expenses and grant programs that provide financial aid directly to students based on financial need, merit, or a combination of both. Despite the substantial federal expenditure in higher education, rising college costs have outpaced federal and state grant aid and, over time, have led to an increasing share of the cost being borne by students and their families. For example, over the past 30 years, the average in-state net price for a full-time undergraduate student at a public 4-year college—after taking into account all grant aid and education tax benefits—has nearly doubled, from about $8,000 in 1990-1991 to nearly $15,000 in 2017-2018. At public 2-year colleges, the net price for full-time students increased over the same time period from about $6,800 to $8,000. To plan for the cost of college, students and their families must consider the full cost of attendance, which includes not only tuition and fees, but also room and board and other miscellaneous expenses. The federal government requires colleges to estimate and distribute information on the full cost of attendance to prospective and enrolled students. The amount of need-based federal aid a student is eligible for is based, in part, upon the school’s estimated cost of attendance. Changes in College Student Demographics National data show that, over the past several decades, an increasing percentage of students from low-income households are enrolling in college. According to NPSAS data, the percentage of all undergraduates who had a household income at or below 130 percent of the federal poverty line increased from 28 percent in 1996 to 39 percent in 2016. In addition, the percentage of college students receiving a Pell Grant has nearly doubled over roughly the same time period. For example, in 1999- 2000, approximately 23 percent of college students received a Pell Grant, and in 2016, this figure was about 40 percent. Some researchers have suggested that reductions in federal and state funding of higher education relative to the increasing cost of college have coupled with these student demographics to increase the share of college costs borne by students, which can reduce the amount students have to support their basic needs, such as food and housing. A traditional college student is generally considered to be someone who is enrolled in college full time immediately after graduating from high school, is financially dependent on his or her parents, and either does not work during the school year or works part time. However, these students represent a minority of students enrolled in college today. According to NPSAS data, about half of all undergraduate students enrolled in college in 2016 were considered financially independent from their parents. 
About 22 percent had dependent children themselves, and 14 percent were single parents. The average college student in 2016 was 26 years old and first enrolled at age 21. Sixty-four percent of college students in 2016 worked at least part time while enrolled, and a quarter worked full time. See figure 1 for the percentages of traditional and nontraditional students in 2016 and for Education's list of traditional and nontraditional student characteristics.
Federal Food Assistance Programs Available to College Students
FNS oversees the states' administration of SNAP, the main federal benefit program to address food insecurity for low-income households. In fiscal year 2017, the program provided benefits to about 42 million individuals in more than 20 million households. The purpose of the SNAP program is to safeguard the health and well-being of the nation's population by providing a monthly cash benefit to raise the purchasing power and nutrition level of low-income households. FNS is responsible for establishing program regulations and ensuring that state officials administer SNAP in compliance with program rules. Officials in seven FNS regional offices assist officials from the FNS national office in this oversight work. FNS shares information and policy guidance with state SNAP agencies in part through its regional offices, the FNS website, and annual conferences. The states, or in some cases counties within a state, administer SNAP by determining whether households meet the program's eligibility requirements, calculating monthly benefits for qualified households, issuing benefits to participants on an electronic benefits transfer card, and investigating and prosecuting recipient fraud. States are also allowed to establish some state-specific modifications in how they administer SNAP policy. Beyond SNAP, the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) is another federal food assistance program available to eligible college students who are pregnant or postpartum. FNS also oversees the WIC program, which is administered by state and local agencies through approximately 10,000 clinic sites.
College Student Eligibility for SNAP
SNAP eligibility is largely based on a household's income and certain other characteristics. However, in 1980 federal law restricted college students who are enrolled at least half time from receiving SNAP benefits. This law generally prevents traditional college students—who may appear to have a low income while attending college but receive financial support from their parents—from receiving SNAP benefits. Federal law establishes several exemptions to this restriction so that college students who are enrolled at least half time and have a legitimate need can access SNAP. For example, assuming that they meet all other SNAP eligibility criteria, a full-time college student may be exempt from the college student restriction if they are: younger than age 18 or age 50 or older; a parent caring for a child under age 6; a parent caring for a child aged 6 to 11 who is unable to obtain childcare to attend school and work; a single parent caring for a child under 12 years old and enrolled full time; working a minimum of 20 hours per week at paid employment; participating in a state- or federally financed work-study program; receiving Temporary Assistance for Needy Families (TANF) benefits; not physically or mentally fit (e.g., have a disability); or enrolled in certain programs for the purpose of employment and training.
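To show how these exemptions combine in practice, the sketch below encodes them as a simple screening function. It is a simplified illustration for discussion, not an FNS eligibility tool; the data fields and function name are assumptions, and an actual determination would also apply household income limits and the other SNAP rules noted above.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class StudentProfile:
    # Illustrative fields only; a real SNAP determination uses far more detail.
    age: int
    enrolled_at_least_half_time: bool
    enrolled_full_time: bool
    youngest_child_age: Optional[int]        # None if no dependent children
    lacks_childcare_for_child_6_to_11: bool
    is_single_parent: bool
    paid_work_hours_per_week: float
    in_state_or_federal_work_study: bool
    receives_tanf: bool
    not_physically_or_mentally_fit: bool     # e.g., has a disability
    in_qualifying_employment_training_program: bool


def student_restriction_applies(s: StudentProfile) -> bool:
    """Return True if the college student restriction would block SNAP participation."""
    if not s.enrolled_at_least_half_time:
        return False  # the restriction only reaches students enrolled at least half time
    has_child = s.youngest_child_age is not None
    exemptions = [
        s.age < 18 or s.age >= 50,
        has_child and s.youngest_child_age < 6,
        has_child and 6 <= s.youngest_child_age <= 11 and s.lacks_childcare_for_child_6_to_11,
        s.is_single_parent and has_child and s.youngest_child_age < 12 and s.enrolled_full_time,
        s.paid_work_hours_per_week >= 20,
        s.in_state_or_federal_work_study,
        s.receives_tanf,
        s.not_physically_or_mentally_fit,
        s.in_qualifying_employment_training_program,
    ]
    return not any(exemptions)
```

For instance, under this screen a full-time student with no children who works 10 hours per week would remain subject to the restriction, while the same student working 20 hours per week, or holding a work-study position, would not.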
FNS officials told us that states have flexibility regarding which programs may qualify a student for the exemption pertaining to enrollment in certain programs for the purpose of employment and training. These programs must be operated by a state or local government, target low-income households, and increase participants' employability. State SNAP agencies have discretion to determine which programs in their state qualify. These employment and training programs may be operated at community colleges, among other community partners. FNS officials said that in 2014 the agency expanded its focus on SNAP Employment & Training (E&T) program services, which are intended to help individuals in SNAP households acquire skills, training, and work experience that will increase their ability to obtain regular employment that will ultimately lead to greater self-sufficiency and reduce their reliance on SNAP. State agencies have flexibility in designing SNAP E&T program services, and FNS encourages states to enter into partnerships with established providers, including community colleges, to deliver SNAP E&T program services. For example, a SNAP recipient could train to become a Certified Nursing Assistant at a community college as part of a state's SNAP E&T program. In addition to providing employment and training services, state SNAP E&T programs are required to provide participants with necessary supportive services, such as transportation, childcare, and textbooks.
Information about the Prevalence of Food Insecurity among College Students Nationally is Limited and Many Potentially Eligible, At-Risk Students Do Not Receive SNAP
Studies Identify a Range of Food Insecurity Rates among Respondents, but Results Cannot Be Generalized to All Students
Our review of 31 studies provided some information regarding food insecurity among college students, but all of the studies have limitations and none provide estimates of food insecurity for this population in general. Estimates of food insecurity among college students included in the studies we reviewed ranged from 9 percent to well over 50 percent, with 22 of the 31 studies estimating food insecurity rates of over 30 percent. These results reflect the studies' different samples and methods, and the estimates from the studies included in our review are not generalizable to the college student population as a whole. None of these studies are based on a sufficiently large or diverse random sample of college students to constitute a representative study. The studies addressed the difficulty of sampling the college student population in different ways, including by extrapolating from household data, surveying students in a particular degree program or on a particular campus, or targeting particular, non-random sub-groups of the college student population. Most of the studies were also conducted on only one campus, although some studies gathered data from more than one campus. Despite the limitations, these studies as a whole help shed some light on the range of food insecurity that exists among some groups of college students. Of the 31 studies we reviewed, 2 used nationally representative household data sets, the Current Population Survey and the Survey of Income and Program Participation.
The study that used the Current Population Survey data from 2011-2015 found that an estimated:
11 percent of households with a student in a 4-year college experienced food insecurity,
14 percent of households with a student in vocational/technical education experienced food insecurity, and
17 percent of households with a student in a community college experienced food insecurity.
These national household surveys assess the food security of households with a college student member, but they do not directly survey college students and only measure food security at the household, and not the individual, level. For example, these household data may not capture a college student's food insecurity in situations where the student member of the household does not live at home for most of the year. The remaining 29 studies we reviewed collectively surveyed college students on approximately 200 campuses across multiple states, including two large state university systems, and produced a wide range of estimates of food insecurity. In most cases, the results can be characterized as applying only to the respondents of the survey. The 29 studies based on campus surveys provide a range of food insecurity rates among respondents, from 9 percent to over 50 percent. For example, a study first published in 2017 found that 15 percent of student respondents at one 4-year college experienced food insecurity, with an additional 16 percent of student respondents at that college estimated to be at risk for food insecurity. Two recent surveys of college systems in California found that 40 percent of respondents from University of California campuses and 42 percent of respondents from California State University campuses experienced food insecurity. Estimates of food insecurity rates in the studies we reviewed tended to be higher at 2-year than at 4-year colleges. Four studies examined only 2-year college students and three of these studies estimated food insecurity rates among respondents at 2-year colleges to be 40 percent or higher. Three studies looked at both 2-year and 4-year colleges and estimated food insecurity to be higher among students at 2-year colleges. For example, a large, multi-college study conducted in 2017 found that during the 30 days preceding the survey, 42 percent of community college students who responded and 36 percent of students at 4-year colleges who responded indicated they were food insecure. Further, the two studies that used national household data sets found that households with community college and vocational education student members had higher food insecurity levels than households with students at 4-year colleges.
Federal Data Show Most Low-Income Students Had Multiple Risk Factors Associated with Food Insecurity in 2016
We identified and analyzed the prevalence of risk factors associated with food insecurity among students through our review of peer-reviewed publications on food insecurity and through interviews with academic researchers, college officials, state and federal officials, and officials from relevant policy organizations. In the studies we reviewed and in our interviews with researchers, having a low income was consistently identified as a key risk factor for food insecurity. The other risk factors we included in our analysis are: being a first-generation college student, receiving SNAP, being a single parent, being disabled, being homeless or at risk of homelessness, and being a former foster youth.
In our analysis, we focused on students with a household income at or below 130 percent of the federal poverty line, which represents 39 percent of all undergraduates. While having a low income is itself the most common risk factor for food insecurity among college students, our analysis found that the majority of low-income students also experience additional risk factors for food insecurity. The three most common risk factors for food insecurity among low-income students were being a first- generation college student; receiving SNAP (receiving SNAP can be considered a risk factor in that it may reduce, but not entirely eliminate, food insecurity); and being a single parent. Of the approximately 7.3 million low-income students, 31 percent were first-generation college students, 31 percent reported receiving SNAP, and 25 percent were single parents. The prevalence of risk factors among low-income students was lower at 4-year colleges compared to other colleges. For example, about 21 percent of low-income 4-year college students were single parents in 2016 compared to about 42 percent of low-income students in less than 2-year programs. Low-income individuals enrolled in less than 2-year programs had the highest prevalence for almost all risk factors (see table 1). Twenty-nine percent of all U.S. undergraduates had a low income and experienced at least one additional risk factor for food insecurity, according to our analysis of 2016 NPSAS data—14 percent had a low income and one other risk factor and 15 percent had a low income and two or more additional risk factors associated with food insecurity (see table 2). Risk factors associated with food insecurity are more prevalent among low-income students than among the general student population, with 75 percent of low-income students experiencing one or more additional risk factors. Students at 2-year colleges and those in less than 2-year programs were also more likely to have multiple risk factors. Fifty-Seven Percent of Potentially Eligible Low- Income Students with Food Insecurity Risk Factors in 2016 Did Not Participate in SNAP In our analysis of SNAP participation among students, we focused on low-income students with at least one additional risk factor for food insecurity because these students would likely meet the income threshold for SNAP eligibility and have an additional risk factor that could put them in need of food assistance. Our analysis of 2016 NPSAS data identified about 5.5 million low-income students with at least one additional risk factor for food insecurity and found that about 59 percent of these students (3.3 million) reported being enrolled at least half time and meeting a SNAP student eligibility exemption. About 1.8 million of these low-income students with an additional risk factor reported meeting a student exemption and also that they were not receiving SNAP benefits. In other words, among potentially SNAP eligible low-income students with at least one additional factor for food insecurity, 57 percent did not report participating in SNAP in 2016 (see fig. 2). About one-quarter of the 5.5 million low-income students with at least one additional risk factor for food insecurity did not meet any of the student exemptions we could identify in the NPSAS data and reported that they did not receive SNAP benefits. These students would likely be ineligible to participate in SNAP unless they begin meeting one of the student eligibility exemptions in the future, such as working 20 hours per week. 
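Because the NPSAS microdata behind these estimates are restricted, the underlying figures cannot be reproduced here, but the tabulation logic is straightforward. The sketch below illustrates it with a few made-up records: filter to low-income students, count additional risk factors, and compute the share of potentially eligible students who do not report receiving SNAP. The field names and values are assumptions for illustration, not NPSAS variables or GAO's actual code.

```python
# Hypothetical survey records; values are invented for illustration only.
students = [
    {"income_pct_of_poverty": 110, "risk_factors": {"first_generation", "single_parent"},
     "meets_student_exemption": True, "receives_snap": False},
    {"income_pct_of_poverty": 125, "risk_factors": {"first_generation"},
     "meets_student_exemption": True, "receives_snap": True},
    {"income_pct_of_poverty": 90, "risk_factors": set(),
     "meets_student_exemption": False, "receives_snap": False},
    {"income_pct_of_poverty": 200, "risk_factors": {"single_parent"},
     "meets_student_exemption": True, "receives_snap": False},
]

LOW_INCOME_CUTOFF = 130  # percent of the federal poverty line, the threshold used in this analysis

low_income = [s for s in students if s["income_pct_of_poverty"] <= LOW_INCOME_CUTOFF]
at_risk = [s for s in low_income if len(s["risk_factors"]) >= 1]
potentially_eligible = [s for s in at_risk if s["meets_student_exemption"]]
not_participating = [s for s in potentially_eligible if not s["receives_snap"]]

print(f"Low-income students: {len(low_income)}")
print(f"  with at least one additional risk factor: {len(at_risk)}")
print(f"  potentially eligible (meet a student exemption): {len(potentially_eligible)}")
if potentially_eligible:
    share = 100 * len(not_participating) / len(potentially_eligible)
    print(f"  share of potentially eligible students not receiving SNAP: {share:.0f}%")
```

Run against the actual NPSAS records, a tabulation of this form is what produces the 5.5 million, 3.3 million, and 57 percent figures discussed above.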
Selected Colleges Are Using a Range of Approaches to Address Student Food Insecurity The 14 selected colleges we contacted are addressing student food insecurity in three main ways: by educating faculty, staff, and students; by providing students free food and emergency assistance; and by centralizing and coordinating their student services and helping students apply for federal and state benefits. Officials at 9 of these colleges said that they viewed student food insecurity as part of students’ increasing inability to meet their basic needs as a result of the decreasing affordability of higher education or the high cost of living. This sentiment was echoed by selected students we spoke with during discussion groups (see text box). All of the colleges we contacted have implemented on-campus initiatives to combat students’ food insecurity with the goal of improving their student outcomes, such as retention, completion, and loan repayment rates. As one community college official told us: “We have come to realize that we can’t address retention and completion without addressing students’ basic needs.” See figure 3 for the range of initiatives the 14 colleges we contacted were taking to address food insecurity among college students on their campuses. Educating the campus community. Officials at several of the selected colleges told us that many administrators, faculty, staff, and students on their campus are unaware that students experience food insecurity, which hinders their college’s efforts to address the issue (see text box). At all 14 colleges we contacted, officials said they are educating their campus community about available resources, both on campus and off, to address student food insecurity. All of the 14 colleges we contacted also educate their students about the resources available to address food insecurity in a variety of ways, such as by providing information during student orientations, on flyers and pamphlets, or through social media and text messages. Eight of the 14 colleges we contacted hold trainings or distribute information to faculty and staff about the on-campus and community resources available to students. Nine of these colleges have created supplemental or for-credit courses on topics such as financial literacy or cooking and nutrition. For example, one college we visited runs a workshop for first-year students on writing a spending plan and a food budget. At several of the selected colleges, faculty members include blurbs about basic needs-related resources, such as campus food pantries, in their syllabi. Providing food and emergency financial assistance. All of the 14 colleges we contacted address student food insecurity by providing students free food and most provide students emergency financial assistance. Nationwide, the College and University Food Bank Alliance has reported that at least 656 colleges have or were developing food pantries as of September 2018. Each of the 14 colleges we contacted had a food pantry, with 7 having started their pantry in the past 5 years. According to college officials, individual faculty and staff members are often first to identify food insecurity as a campus concern and provide food to students. For example, officials at several of the colleges we contacted traced the origins of their college’s food pantry to a drawer of food a faculty or staff member kept in their office for students, or to a professor who brought jars of peanut butter or bagels for any student who wanted one. 
The college food pantries we visited varied in terms of their size and location, which can depend upon the space available on campus. For example, some pantries we visited consisted of only a couple of shelves of non-perishable items, while others spanned multiple rooms containing refrigerators and freezers. Directors at four of the selected food pantries said that student need was great enough to support expanding the food pantry, but that they had been unable to expand because space on campus is at a premium (see text box). Several pantries also had separate sections providing students personal health items and clothing and offered auxiliary services, such as information about cooking, food budgeting, or SNAP enrollment (see fig. 4 for pictures of some of the college food pantries at selected colleges). Officials at 11 of the selected colleges we contacted said that a major barrier they face is overcoming the stigma some students associate with accepting help for their basic needs, such as using the food pantry (see text box). Concern about this stigma led at least 3 of the colleges we contacted to place their food pantry in a less-public area of campus to address students’ privacy concerns. In contrast, 3 other colleges we contacted centrally located their food pantry to advertise its existence and normalize its use. One college president we spoke with said that “until normalized and pulled it to the center of campus, it was underutilized,” and stated that moving the food pantry to the center of campus quadrupled its use. Officials at 9 of the 14 colleges we contacted reported that their campus food pantry had seen an increased number of users over time as the student body became aware of this resource. One student we spoke with said that his college’s food pantry was his only source of food, while another estimated that the food pantry allowed him to save about $100 per month on food. Officials at 10 of the 14 selected colleges we contacted told us they partner with national organizations or campus dining services or both to try to respond to the needs of students who might be experiencing food insecurity. For instance, public colleges in California receive state funding to incentivize them to address student food insecurity in a variety of ways, including by establishing campus food pantries, providing information to students about SNAP benefits, and establishing meal point donation programs. Two California colleges we contacted were working with a national organization to set up a meal point donation program. One college in another state we visited included in their contract with their private dining services vendor funding for several initiatives, such as a campus-wide survey of student food insecurity, on-campus farmer’s markets, and a learning kitchen that teaches students hands-on cooking skills. Additionally, 2 of the colleges we contacted are working to have SNAP benefits accepted at campus markets. Beyond providing students with free food, officials at 12 of the 14 colleges we contacted said that their college makes emergency cash assistance available to students through small loans, grants, or grocery store or gas station gift cards. These emergency funds are intended to help students pay bills for one-time financial emergencies, such as buying groceries or paying for a car repair or a utility bill. 
One community college we visited directly ties this assistance to its retention efforts, providing a one-time amount of up to $500 for students judged to have sufficient need and who are likely to remain in school if the bill is paid. Centralizing and coordinating student services and access to benefits. Officials at many of the colleges we contacted told us they have centralized their student support and financial aid services, among others, and several have introduced a case management approach to better collaborate across departments and more efficiently and holistically address their students' basic needs (see text box). Of the 14 colleges we contacted, 8 had centralized some or all of their student services. For example, one community college we visited has co-located many of its student services—including its financial aid, academic counseling, payroll, food pantry, veterans' services, and women's resource center, among others—around a central hub of the student union. Students visiting this central hub may be assigned a caseworker to connect them with the on-campus, community, state, and federal benefits for which they are eligible. Officials at a few of the colleges we contacted said that centrally locating student services also helped faculty and staff by providing a single point of contact to refer students. One official said that she tells faculty and peer mentors: "If you see a student in any kind of distress at all—mental health, hunger, homelessness, anything—send them to us." She added that it is too much to ask faculty to figure out which office or official to send students to for specific concerns. Officials at 8 of the 14 colleges we contacted told us their campus has established a coordinated benefits access program or is actively screening students for potential eligibility for, and helping them enroll in, federal and state benefit programs like SNAP, WIC, Medicaid, and the Earned Income Tax Credit. For example, one community college we contacted had a staff member build a statistical model to analyze the college's existing data on first-time students, such as data on students' household income, demographics, and course enrollment, to identify students at risk of not returning to college and to provide these students, their professors, and their faculty advisors with information about on-campus resources. Officials at one college we visited told us the campus hosts weekly clinics with county SNAP eligibility analysts to screen students for SNAP eligibility and help them apply for benefits. At a community college system we visited, the administration told us they were working with the state SNAP agency to identify which students were receiving SNAP benefits and they plan to send targeted information on SNAP to those potentially eligible students not receiving benefits. Officials at three of the colleges we contacted said that their college was purchasing software that creates a centralized portal where faculty and staff can share information about a student's situation with student support providers so they can better provide help.
For example, at a college we visited that is using such software, officials said that a professor might note in the centralized portal that an at-risk student was either failing or not attending a class, and that student would be flagged in the portal to notify academic advisors, counselors, and other college staff who can direct the student to the on-campus resources they may need, such as the food pantry or help in completing a SNAP application. While SNAP Can Supplement Other Federal Aid for Some Low-Income Students, FNS Does Not Share Key Information to Help States Better Leverage SNAP to Assist Students Federal Programs Are Limited in the Extent to Which They Can Address the Needs of College Students Experiencing Food Insecurity Federal Student Aid Generally Does Not Cover All College Costs for Low-Income Students Federal grant aid is available to help low-income college students and their families pay for college, but for many students, the maximum amount of grant aid available to them does not cover all of the costs associated with attending college. Officials from many of the organizations we interviewed said that the federal Pell Grant Program for low-income college students was a major source of financial support for these students, but that it does not cover the full cost of college attendance for many students, and particularly for those at 4-year colleges or in areas with high costs of living. Most low-income students also work while attending college. Despite this, several college officials we interviewed told us that the gap between the amount of financial aid available and what it costs to attend college is continuing to grow. One financial aid director told us that students used to be able to pay for groceries or rent with some of their financial aid "refund" money (that is, financial aid funds refunded to a student after tuition, fees, and other school charges are paid, which can be used to pay for other education and living expenses); however, he said students rarely receive a refund any more. According to data from Education's National Center for Education Statistics, the average Pell Grant used to cover more of the cost of college than it does today. For example, about 40 years ago—soon after the Pell Grant Program was established—the average award covered about 50 percent of the average cost of in-state tuition, fees, room, and board at public 2-year colleges, and 39 percent at public 4-year colleges. Today, the average Pell Grant award amount covers just 37 percent of these costs at public 2-year colleges, and 19 percent at public 4-year colleges. Federal Work-Study Program employment opportunities may be available to qualifying students, but several officials we interviewed noted that funding for this program is extremely limited, especially at community colleges where there are more students at risk of food insecurity. When grant funds and student earnings are insufficient to cover the full cost of college, students can take out federal student loans to make up the difference. Officials at a national association of community colleges and at a few colleges we visited told us that low-income students often use federal loans to help them pay for basic living expenses—such as food or rent.
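The Pell Grant coverage figures above boil down to a simple ratio of the average award to the average cost of attendance, with the remainder being the gap a student must cover from other sources. The short sketch below illustrates that arithmetic; it is offered only as an illustration, and the dollar amounts in the example are hypothetical placeholders rather than figures from Education's data or this report.

```python
# Minimal sketch of a grant "coverage" calculation; illustrative only.
# The dollar figures in the example are hypothetical, not data from this report.

def coverage_and_gap(grant_award, cost_of_attendance):
    """Return the share of cost covered by the grant and the remaining gap in dollars."""
    coverage = grant_award / cost_of_attendance
    gap = max(cost_of_attendance - grant_award, 0.0)
    return coverage, gap

if __name__ == "__main__":
    # Hypothetical example: a $4,000 average award against an $11,000 average
    # in-state cost of tuition, fees, room, and board.
    coverage, gap = coverage_and_gap(grant_award=4_000, cost_of_attendance=11_000)
    print(f"Grant covers {coverage:.0%} of cost; remaining gap is ${gap:,.0f}.")
```

When grant aid covers only a fraction of these costs, the remaining gap is what students must make up through work, the federal loans discussed above, or other sources.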
While these loans can be helpful for some students who need additional funds to support themselves while in college, officials at a few community colleges also cautioned that loans may not be the best choice for all students, and may worsen the financial position of already vulnerable students. For example, at one 4-year college we visited, the financial aid director said that many of their students have reached their maximum federal lifetime loan limit (see text box for an example). He also noted that graduates have, on average, $25,000 of student loan debt. He said his college has historically trained its students for public sector careers, e.g., teachers or counselors, and he worries that salaries in these professions will not allow graduates to repay this amount of student loans. College Students Have Limited Access to Federal Food Assistance Programs Given the limitations of federal student aid funding, officials from several organizations we interviewed spoke about the importance of leveraging other federal benefits, such as food assistance programs, to help address the needs of college students experiencing food insecurity. According to research on the effect of SNAP benefits, these benefits can provide some help to students, although they may not completely eliminate their food insecurity. However, college students have limited access to several key federal food assistance programs that could help address some of their needs. For example, several college officials we spoke with noted that many low-income students received federally subsidized free or reduced price lunch while in elementary and secondary school, but a comparable program does not exist for college students, even though many face the same level of need. In addition, many college students are prohibited from receiving federal SNAP benefits because of restrictions on student eligibility. Several college officials told us that when students are unable to meet one of the student exemptions for SNAP benefits, they will try to connect them to community resources or to the on-campus food pantry, but a few characterized these as short-term solutions to their students’ problems. We also heard from officials at several colleges that students who are pregnant or postpartum may qualify for the WIC program, which provides food assistance to mothers with infants and young children; however, this program serves only a small minority of college students who may be experiencing food insecurity. Some State SNAP Agencies Are Assisting Potentially Eligible Students to Access SNAP Benefits About one-third of state SNAP agencies reported they were taking actions to inform college students about SNAP and help them access SNAP benefits. These state SNAP agencies reported assisting college students in various ways, including by developing guidance or training for state and college officials on student eligibility rules, by conducting outreach at local colleges, or by providing students with options to qualify for a SNAP student exemption by participating in employment and training services. Several States Are Clarifying Student Rules and Conducting Training and Outreach about SNAP Student Eligibility Eleven state SNAP agencies reported clarifying policy on college student eligibility to SNAP staff who determine eligibility for benefits or providing training to third-party partners to increase awareness of students’ potential eligibility for SNAP. 
For example, in 2015 and 2017 California’s state SNAP agency issued policy letters to its county offices clarifying college student eligibility rules and expanding the list of college programs that qualify a student for an exemption under the employment and training provision. Minnesota’s state SNAP agency reported that it conducts technical assistance training on student eligibility issues for its caseworkers twice a year. State SNAP agencies also reported partnering with colleges to increase awareness of potential student SNAP eligibility or to reduce the burden of the application process for students. For example, Missouri’s state SNAP agency reported that it recently began a partnership with the state’s community college association to increase students’ awareness of their potential eligibility for SNAP. To reduce the burden students face in applying for SNAP benefits, Rhode Island’s state SNAP agency reported that its outreach partner holds regular “office hours” at state community college campuses to answer questions about SNAP, screen students for potential eligibility, and assist with application completion. Officials from California’s state SNAP agency stated that its county SNAP agencies periodically hold SNAP enrollment clinics on college campuses. At one time, a community college in California had a county SNAP staff member located on campus to assist their students with benefit applications. Finally, two of the states we visited partially fund their state higher education grants for low-income college students with some of their federal TANF block grant dollars. Because these grant recipients receive TANF benefits, they are eligible for the corresponding SNAP student exemption. For example, the California state SNAP agency issued guidance in February 2017 to all of its county offices to explain that this SNAP student exemption applies to any student who receives the state’s higher-education grant for low-income students. In Massachusetts, the state SNAP agency issued similar guidance in August 2017 to state SNAP staff who determine eligibility for benefits. Some States Are Implementing Approaches that Provide Additional Employment and Training Options for Certain Students Some state SNAP agencies are taking steps related to the exemption for students who are enrolled in certain employment and training programs, which can be offered at 2-year colleges and other community-based organizations. Seven states reported taking steps to designate specific programs at their community colleges to qualify as employment and training programs to make it easier for students and SNAP staff who determine eligibility for benefits to identify students who could meet this exemption. In these states, according to the SNAP agency, they have determined that certain programs at community colleges qualify enrolled students for one of the student SNAP exemptions because they are programs for low-income households, aimed at employment, and run by a state or local government. According to FNS, state SNAP agencies have the authority to decide which programs would qualify enrolled students for this exemption, and several states have identified qualifying programs at community colleges in their state. Students in these designated community college programs who attend at least half time and do not meet one of the other student exemptions can be eligible for SNAP under this provision if they meet all other eligibility criteria. 
In 2010, Massachusetts’ state SNAP agency began using a dedicated form that provides community college students in these state-designated employment and training programs support for their SNAP application. According to officials at the state SNAP agency, this form has helped to streamline the application process for both students and state SNAP agency staff who determine eligibility for benefits. Other states are developing opportunities for students to meet the employment and training exemption through partnerships with the states’ SNAP Employment & Training (E&T) programs. Twenty-four state SNAP agencies reported that they have implemented a third-party partnership with at least one community college to deliver SNAP E&T program services on campus. Under these state SNAP E&T program partnerships, the state SNAP agency works with community colleges to enroll SNAP recipients in programs that are designed to increase the employability of the participant. One FNS official told us that state SNAP E&T programs were an ideal way to provide college students who qualify for SNAP benefits with additional services and support, such as counseling or transportation assistance, and that they can help students persist in their community college program and ultimately improve their self-sufficiency. According to FNS, state agencies can enroll individuals in these SNAP E&T programs in one of two ways. A SNAP recipient may enroll in the designated community college training program affiliated with the state’s SNAP E&T program, which allows them to continue receiving SNAP benefits even if they attend the program more than half time. Or, the community college partner can refer individuals already enrolled at the college to the state SNAP agency to determine if they are eligible for state SNAP E&T program services—a process known as a “reverse referral.” In the case of a reverse referral, individuals who are enrolled in certain training programs and who are experiencing food insecurity may be able to qualify for a student exemption to receive SNAP, as well as additional services through state SNAP E&T programs. According to Washington’s state SNAP agency, SNAP E&T programs operate at all 34 community colleges in the state, and have served approximately 20,000 students each fiscal year since 2015. A senior program official at Washington’s state SNAP agency told us that the vast majority of incoming community college students in Washington are screened for potential eligibility and reverse referral into the state’s SNAP E&T program services. College and State Officials Reported That FNS Does Not Share Key Information That Could Help Them Assist Students Experiencing Food Insecurity At 9 of the 14 colleges we contacted, some officials and students we spoke with indicated that they either did not know about or found it difficult to understand the SNAP student rules. For example, in a student discussion group at one community college, some students said they were uncertain about how SNAP student rules applied to them when they lived with their parents but received no financial support or food from them. Officials at another college told us that many students are not even aware of or do not realize that the SNAP student rules apply to them. In a student discussion group we held at another college, some students told us that they had been unaware that they may be eligible for SNAP until they spoke to someone at their college. 
Further, we found college officials may also have difficulty understanding SNAP student rules—for example, officials at one college said that they believed that college students are not eligible for SNAP. College officials can be an important source of information for students regarding SNAP, but this can create barriers to access if college officials do not have the correct information. For example, at one college we visited, two students said they were misinformed by officials at their college or their state SNAP agency about their potential eligibility for SNAP. Officials we met with at three colleges said that they would like information from FNS about college student eligibility rules so they can help educate and enroll students in SNAP, but FNS has not developed such targeted information to distribute to colleges and students. Officials at one college said they requested information from FNS to distribute to students, but the general SNAP eligibility brochure FNS provided did not reference college student eligibility requirements. A senior FNS official said developing printed materials expressly explaining the college student eligibility requirements is primarily a state agency responsibility, and that information about this topic was available on the FNS website. However, we found that the information specifically related to college student eligibility requirements on the FNS website was not easy to find. For example, the main webpage of FNS’s SNAP eligibility website lists the special circumstances under which certain specific populations may be SNAP eligible, but it does not include college students nor does it link to the webpage listing the student exemptions. Further, the webpage containing information on SNAP for college students restates the list of student exemptions from the regulations, using legal and technical language that is not always easy to understand. For example, the webpage states that students “may be able to get SNAP benefits if otherwise eligible, and they ‘get public assistance benefits under a Title IV-A program of the Social Security Act.’” Many college officials and students may not realize this refers to TANF benefits. In addition, the website does not list being “not physically or mentally fit” (e.g., having a disability) as one of the ways to qualify for a student exemption, nor does it provide information relevant to how students may qualify for an exemption because they are assigned to or placed in certain employment and training programs. A senior official from the FNS national office said that college student eligibility and the student exemptions were among the most complicated SNAP policies to explain and that they frequently receive questions from the general public about how the rules apply to certain students in certain situations. This official said that because the student SNAP rules are so difficult to navigate, FNS responds to these individual questions and circumstances as they arise, rather than developing materials that could apply broadly to every situation, and that state SNAP agencies are primarily responsible for assisting students. Officials at all four FNS regional offices we spoke with said materials explaining the student rules tailored to colleges and college students would prove useful to states and colleges in their regions. 
While developing clear written materials about a complicated policy is challenging, Standards for Internal Control in the Federal Government states that agencies should communicate key information to their internal and external stakeholders. Further, a core activity of the SNAP program is to work with its partners to ensure that those eligible for nutrition assistance can make informed decisions about applying for the program. The lack of clear and easily accessible information on student SNAP eligibility requirements can make it difficult for potentially eligible students to make informed choices about applying for SNAP, and for colleges to develop their own materials to help potentially eligible students apply for SNAP. As a result, students could miss opportunities to obtain the additional support they may need to stay in college and graduate. In addition, we found that some state SNAP agencies had limited information about approaches that they could take to help potentially eligible college students who may qualify for a student exemption. Specifically, officials at four of the five state SNAP agencies and at three of the four FNS regional offices that we spoke with said that it is not entirely clear to them under which circumstances college students may be eligible for a student exemption if they are enrolled in a qualifying employment and training program run by a community college. State SNAP agency officials in four of the five states, as well as officials in three of the four FNS regional offices, told us that they would like more information from FNS about how to implement the approach some state SNAP agencies are taking to help college students who may qualify for an employment and training exemption access SNAP. One state SNAP agency official said that she believes that the lack of guidance and leadership from FNS on this issue leaves many state SNAP agencies operating with uncertainty, and, as a result, many of them do not take any actions to identify those college students who may qualify for an employment and training exemption under SNAP rules. Several of the FNS regional office officials we interviewed agreed that the FNS national office was uniquely positioned to collect and share information about potential approaches that states are using to implement the student exemption for employment and training programs so that other states could also consider using such approaches to assist low-income college students who may qualify. Officials at one FNS regional office said that an FAQ-type document on college student eligibility scenarios would be helpful. At the same time, a few FNS regional office officials said that the national office is cautious about developing information for all states when each state's SNAP program operates slightly differently. According to FNS national office officials, FNS issued the most recent document discussing general SNAP eligibility for students in August 2010. This document explained that certain employment and training services provided by a state or local government may qualify a student for a SNAP student exemption. In November 2016, six federal agencies including USDA (on behalf of FNS) released an interagency letter, Aligning Federal Supports and Program Delivery for College Access and Completion, that includes information from FNS related to general student eligibility for SNAP.
However, neither of these documents included specific strategies or examples of approaches states have used or can use to help potentially eligible college students access SNAP benefits. Standards for Internal Control in the Federal Government states that agency management should internally communicate the necessary information to achieve the program's objectives. In addition, part of the role of the FNS national office is to work with its partners, including its regional offices and the state SNAP agencies, to improve program administration and ensure access to benefits for eligible individuals. FNS officials told us FNS has several existing mechanisms for information sharing with the regional offices and the state SNAP agencies, including policy memos, webinars, and annual conferences. However, a senior FNS official told us that she was not aware of any plans to share additional information with state SNAP agencies or regional offices on this topic, noting that college students are a relatively small population compared to other SNAP recipients. As a result, state SNAP agencies may not be aware of approaches other states have used that they could take to assist college students experiencing food insecurity in accessing SNAP benefits, and FNS may not be fulfilling its role to ensure program access for college students who are eligible. In addition to noting how complicated the college student SNAP eligibility rules are, most state higher education and SNAP policy organization officials we interviewed remarked that the student exemptions can make it challenging for many students who are food insecure to obtain SNAP benefits that could help them succeed in college. Specifically, a few researchers and state higher education officials said the eligibility restrictions were instituted when college students were generally from higher-income households, whereas many students enrolled in college today are from low-income households. Several higher education officials and one researcher noted that when a student qualifies for a student exemption by working 20 hours a week, it can have a detrimental impact on college completion. For example, research has shown that full-time college students who work more than 15 hours a week or who reduce their college course load and attend part time in order to increase their work hours are less likely to complete their degree or educational program. At the same time, FNS officials and officials at one state SNAP agency stressed the importance of having proper controls in place to prevent certain students from improperly receiving benefits. A senior FNS official noted that the college student restrictions were established to prohibit traditional college students who are supported by their parents from receiving SNAP benefits. This official said that the student eligibility rules should ensure that middle-class and wealthy students do not access SNAP while attending college. Further, officials at a few organizations and one state SNAP agency we interviewed expressed support for some of the student exemptions, such as the exemption for college students who work 20 hours per week. Conclusions The federal government invests billions of dollars annually in higher education through grants and loans to low-income students. Partially as a result of this investment, a college education is accessible to more low-income Americans than ever before.
Despite this federal support, many low-income college students struggle to meet their basic needs, including obtaining the food that they need, and may drop out of college as a result. SNAP can be an important source of support for low-income students, although it may not completely ameliorate food insecurity. However, because the SNAP eligibility requirements for college students can be difficult for students and colleges to understand, students may be unaware of or misinformed about their potential eligibility for SNAP. FNS has not made information that clearly explains student SNAP eligibility requirements easily accessible to students and college officials and, as a result, students experiencing food insecurity may remain unaware that they could be eligible for SNAP. In addition, some states are exercising existing state flexibilities to help students experiencing food insecurity to access SNAP, but FNS does not actively share this information among state SNAP agencies. By collecting and sharing information on approaches taken by state SNAP agencies active in this area, FNS could potentially help state SNAP agencies identify ways to help eligible students who are experiencing food insecurity. Better supporting these students will also help the Department of Agriculture and the Department of Education meet their respective goals and make good use of the substantial federal investment in higher education while improving the health and nutrition of individuals experiencing food insecurity. Recommendations for Executive Action We are making the following two recommendations to FNS: The Administrator of FNS should make information on their website regarding student SNAP eligibility requirements easier to understand and more accessible, as a resource for colleges and state SNAP agencies. (Recommendation 1) The Administrator of FNS should coordinate with its regional offices to collect and review information about existing SNAP flexibilities and examples of approaches state SNAP agencies are taking to assist eligible college students to access SNAP benefits, and share such information with state SNAP agencies. (Recommendation 2) Agency Comments and Our Evaluation We provided a draft of this report to the Department of Agriculture and the Department of Education for review and comment. The Department of Education provided technical comments, which we incorporated into the report as appropriate. On November 28, 2018, and December 7, 2018, the Directors of the FNS SNAP Program Development Division and Office of Employment and Training met with us to provide the agency’s comments orally. At the December 7, 2018 meeting, FNS officials told us they partially concur with our recommendations and believe that FNS has sufficient guidance in place for states to provide further information to colleges. However, the agency agrees with the intent of GAO’s recommendations and plans to review its existing guidance to determine if any improvements are warranted. We continue to believe that additional action is necessary to address our recommendations. While reviewing its existing information would be helpful, we believe that changes to FNS’s existing information are also needed to improve the clarity and accessibility of information about SNAP student eligibility requirements on FNS’s website, and that FNS needs to work with its regional offices to identify and share additional information about state approaches to assist eligible college students with access to SNAP benefits. 
In response to FNS officials’ comments, we also clarified both recommendations to focus more on actions that fall under the responsibility of the FNS National Office. FNS also provided technical comments, which we incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretaries of Agriculture, Education, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or larink@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Objectives, Scope, and Methodology This report examines (1) what is known about the extent of food insecurity among college students and their use of the Supplemental Nutrition Assistance Program (SNAP); (2) how selected colleges are addressing student food insecurity; and (3) the extent to which federal programs assist college students experiencing food insecurity. This appendix provides details of the data sources used to answer these questions, the analyses we conducted, and any limitations to our analysis. We used multiple methodologies to conduct this review. We conducted a review of academic studies based on original research to determine what is known about food insecurity among college students. We assessed the quality of these studies by evaluating their research methods and determined that the studies we included in our review were sufficiently reliable for our use. To describe the prevalence of risk factors for food insecurity among college students, we used data on student characteristics from the nationally representative National Postsecondary Student Aid Study (NPSAS). We assessed the reliability of NPSAS data by reviewing existing information about the data and the system that produced them and by interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of describing the prevalence of risk factors for food insecurity among college students and students’ participation in SNAP. To understand how selected colleges address student food insecurity, we conducted four state site visits (California, Kentucky, Massachusetts, and Michigan) selected based on whether colleges and/or state government agencies were taking steps to address food insecurity among students, and geographic diversity, among other criteria. In each state, we visited public colleges and universities, where we met with college officials, students, and researchers. We also interviewed state higher education and SNAP officials, as well as experts from relevant policy organizations. To assess federal efforts, we identified federal programs that may assist college students in need of food, interviewed officials from Education and USDA, and reviewed relevant federal laws, regulations, and agency guidance and program documents, as well as federal internal controls standards applicable to these programs. To understand what is currently known about the extent of food insecurity among college students, we conducted an in-depth review of studies. 
Our preliminary search in Scopus identified a recent systematic literature review on food insecurity on college campuses. Upon reviewing the article’s scope and methodology, we chose to update rather than duplicate their efforts. We expanded the original search terms to include “higher education” and “postsecondary” among others, and searched two additional research databases (ProQuest and Scopus) in addition to the original list of sources (MEDLINE, PSYCHINFO, and Web of Science). We identified peer-reviewed journal articles and other published research through this search. Through news reports on food insecurity and interviews with researchers, we also identified studies published up to August 31, 2018 that may not have been included in our initial review. We included studies in our review if they met the following criteria: (1) were based on research conducted and published in the United States; (2) were published since 2007; and (3) contained original, direct estimates of food insecurity rates among college students. We identified a total of 35 studies that met these criteria and conducted an initial review to determine if the studies met generally accepted social science standards and were appropriate for our purpose to provide information on the prevalence of food insecurity among college students. We eliminated some studies if we determined that the methods were not appropriate or rigorous—specifically, we concluded that we could not report the results of four studies due to research design limitations. For instance, some studies did not fully disclose their methods, had small sample sizes, used data based on low survey response rates, or did not attempt to correct for or address potential biases in their methodology. For studies included in this report, we performed an initial in-depth review of the findings and methods, and a GAO methodologist performed a second review to confirm our reported analysis of the findings. As a result, we determined 31 studies to be of sufficient quality and we summarized the findings of these 31 studies in our report (see table 3). While these 31 studies are of sufficient quality to provide information on what is known about food insecurity among college students, the generalizability of their findings require significant caveats. Most of the survey results in these studies are not generalizable to a population larger than their sample size, meaning that the findings apply only to the respondents of the survey. None of the studies in our review conducted non-response bias analyses or attempted to address potential selection bias in the sample. Despite these limitations, the studies collectively offer assessments of food insecurity conducted on over 200 campuses in more than 30 states, at both 2- and 4-year schools, and all but three of the studies used adapted versions of the USDA food insecurity measure. We analyzed data from the Department of Education’s (Education) National Postsecondary Student Aid Study (NPSAS). Because no federal datasets contain food insecurity data specifically about college students, we chose to analyze NPSAS data for the prevalence of risk factors associated with food insecurity. Additionally, we used some summary statistics from frequencies presented in the 2016 NPSAS data codebook. NPSAS data contain nationally representative, detailed demographic and financial aid data for college students enrolled in less than 2-year, 2-year, 4-year, and graduate postsecondary programs. 
These data come from institutional records, government databases, and interviews with students. Because the NPSAS data are based on probability samples, estimates are calculated using the appropriate sample weights provided, which reflect the sample design. Each of these samples follows a probability procedure based on random selection, and they represent only one of a large number of samples that could have been drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Unless otherwise noted, all percentage estimates from the NPSAS data analysis have 95 percent confidence intervals within plus or minus 5 percentage points of the percent estimate, and other numerical estimates have confidence intervals within plus or minus 5 percent of the estimate itself. We compared 95 percent confidence intervals to identify statistically significant differences between specific estimates and the comparison groups. The information provided in the NPSAS data, particularly data from the interview portion of the study, is self-reported, and not all of the data are based on federal determinations or cross-verified with outside sources. For example, students self-report their disability status, their hours worked, and so on. Such self-reported data are subject to several sources of nonsampling error, including the inability to obtain information about all sample cases; difficulties of definition; differences in the interpretation of questions; respondents' inability or unwillingness to provide correct information; and errors made in collecting, recording, coding, and processing data. These nonsampling errors can influence the accuracy of information presented in the report, although the magnitude of their effect is not known. Identification of Risk Factors for Food Insecurity In order to identify risk factors associated with food insecurity among college students, we reviewed published articles and reports on the topic of food insecurity and interviewed researchers, college and state officials, and officials at relevant policy organizations. We present the list of risk factors for food insecurity we considered in table 4. Not all of the risk factors we identified have a corresponding NPSAS variable. For example, NPSAS does not ask respondents about unmet medical needs or childhood food insecurity. Additionally, some of the risk factors overlapped and were thus not included in our analysis. For example, the NPSAS dataset contains multiple variables pertaining to student and student household income, such as household income, financial aid, and receipt of public benefits. Many indicators of low-income status likely overlap (e.g., being eligible for a Pell Grant and receiving other financial aid), and many students who have one indicator will likely have others. Although this is not an exhaustive list of risk factors, individuals who experience one of the following seven characteristics may be at risk of food insecurity: having a low income, being disabled, being homeless or housing insecure, being a former foster youth, receiving SNAP benefits, being a single parent, and being the first generation in one's family to attend college. Table 5 shows how we compared these risk factors with corresponding variables from the 2016 NPSAS data.
Table 5. Selected Risk Factors and Corresponding Variables in the 2016 National Postsecondary Student Aid Study Data Set
Disability: Indicates student has some type of disability or condition. The data are self-reported; the student may not be eligible for or receiving federal disability benefits.
Homelessness or housing insecurity: Includes some students who were determined by a professional to be homeless (via the Free Application for Federal Student Aid or FAFSA), but predominantly measures student-determined "risk of homelessness." This is not a direct measure of homelessness.
Former foster youth: Indicates student is an orphan, ward of court, emancipated minor, or in legal guardianship.
Receipt of SNAP benefits: Indicates whether any member of the student's household received Food Stamp (SNAP) benefits during the 2013 or 2014 calendar year. National-level, individual SNAP enrollment data are not available to verify this variable, as states provide aggregate statistics to FNS.
Single parent: Identifies independent students who were single parents/caretakers during the 2015-2016 academic year.
Low income: Indicates total 2014 income as a percentage of the federal poverty level thresholds for 2014. For our purposes, low income is defined as having a household income level at or below 130 percent of the federal poverty level.
First-generation student: Indicates the highest level of education achieved by a parent, stepparent, or guardian of the student. Per previous Department of Education studies, we define first generation as college students whose parents' maximum educational attainment was a high school diploma or less. Students who did not know their parent's highest education were not counted as first-generation students.
The data for these variables are reported by the student and their family on the FAFSA or during the student interview.
Because our analysis does not include some of the risk factors for food insecurity listed in table 4, our findings may underestimate the number of college students who have a risk factor for food insecurity. For example, we heard in some of our interviews with researchers and in our discussions with students that being an undocumented or an international student was a risk factor for food insecurity. Such students are generally ineligible for federal financial aid and are restricted in the type of other federal aid they can receive. Undocumented students are also more likely than other students to be poor. However, NPSAS does not contain detailed data about undocumented or international students, so we could not include this risk factor for food insecurity in our analysis. The risk factors for food insecurity we included in our analysis may also be correlated with one another and can co-occur. For example, youth who were formerly in foster care are more likely than other youth to be low-income. Indeed, the prevalence of additional risk factors for food insecurity is higher among low-income than wealthier students. We did not analyze the extent to which some risk factors are more strongly associated with food insecurity than others or attempt to rank or weight the relative importance of risk factors. To calculate potential student SNAP eligibility, we first calculated the number of students who might qualify for SNAP based upon having a household income at or below 130 percent of the federal poverty line, which is the standard income requirement for households that do not include a member who is 60 years of age or older or disabled to qualify for SNAP benefits. Next, we analyzed NPSAS variables to identify those that corresponded with SNAP student eligibility rules.
We deemed all students who met the income requirements, were enrolled in school at least half time, and met one of the student eligibility exemptions we were able to identify in the data as potentially eligible for SNAP. However, our analysis has limitations and does not precisely identify all students who are SNAP eligible. The 2016 NPSAS data set contains several variables that match up closely with certain student eligibility exemptions. For example, the exemptions related to age, having young dependents, working 20 hours per week, and receiving certain federal benefits have corresponding NPSAS variables (see table 6). For two of the exemptions, we used variables from the NPSAS data set that do not perfectly correspond to the statute but were the closest available proxies in the data. For the eligibility exemption that covers parents caring for a child 6-11 years old who are unable to obtain childcare to attend school and work, we identified students who have a child 6-11 years old and indicate they have no paid childcare. However, some individuals may have unpaid childcare, such as family members, and be able to work and attend school despite not having paid childcare, meaning they would not meet this SNAP student eligibility exemption. For the disability exemption, we used the NPSAS variable based on an interview question that asks students if they have a mental or physical disability. However, because of different definitions, the NPSAS disability variable may include students with disabilities who would not qualify for the SNAP student exemption related to disability. Specifically, to qualify for this SNAP student exemption, the student must not be "physically or mentally fit," while the NPSAS interview question asks students if they have some type of disability or condition, including a long-lasting condition such as serious difficulty hearing; blindness or serious difficulty seeing; difficulty concentrating, remembering or making decisions, a serious learning disability, depression, or Attention Deficit Hyperactivity Disorder; or serious difficulty walking or climbing stairs. As a result, we may overestimate the number of students who would qualify for the student exemption related to having a disability or caring for a child age 6-11. Lastly, NPSAS does not contain a variable to capture the student eligibility exemption related to enrollment in certain programs aimed at employment, such as the Workforce Innovation and Opportunity Act or Temporary Assistance for Needy Families employment and training programs. Therefore, we could not identify any students who met this eligibility exemption for SNAP and may have underestimated the number of students who were potentially eligible for SNAP. Additionally, SNAP eligibility for college students depends not only on income and meeting a student exemption, but also on other determinations such as the level of the individual's financial assets, including savings, as well as any state policy waivers that may apply to the individual's eligibility. Given that our analysis relied on self-reported information, and did not capture all aspects of student SNAP eligibility, we did not make any legal determinations about whether individuals were eligible for SNAP, and therefore our analysis can be characterized as providing only a rough estimate of those students who may potentially be eligible for SNAP benefits.
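To make the screening logic described above concrete, the sketch below shows one way such a flag could be computed from student-level records. It is a simplified illustration under stated assumptions, not GAO's actual analysis code: the field names are hypothetical stand-ins for NPSAS-derived variables, the exemption checks are rough proxies like those described above, and, as noted in the text, the asset test, state policy waivers, and the employment and training exemption are not modeled.

```python
# Simplified sketch of the potential-eligibility screen described above; not GAO's
# actual analysis code. Field names are hypothetical stand-ins for NPSAS-derived
# variables, and the exemption checks are rough proxies. The asset test, state
# waivers, and the employment and training exemption are intentionally omitted.
from dataclasses import dataclass

@dataclass
class StudentRecord:
    income_pct_of_poverty: float   # household income as a percent of the federal poverty level
    enrolled_half_time: bool       # enrolled at least half time
    age: int
    works_20_hours_per_week: bool
    has_child_under_6: bool
    single_parent: bool
    receives_tanf: bool
    reports_disability: bool       # self-reported; broader than SNAP's "not physically or mentally fit" standard
    sample_weight: float           # NPSAS-style survey weight

def meets_exemption_proxy(s):
    """True if the record matches any of the proxy exemption flags used in this sketch."""
    return (s.age < 18 or s.age >= 50
            or s.works_20_hours_per_week
            or s.has_child_under_6
            or s.single_parent
            or s.receives_tanf
            or s.reports_disability)

def potentially_eligible(s):
    """Income at or below 130 percent of poverty, at least half-time enrollment, and one exemption proxy."""
    return (s.income_pct_of_poverty <= 130
            and s.enrolled_half_time
            and meets_exemption_proxy(s))

def weighted_share_potentially_eligible(records):
    """Weighted share of records flagged as potentially eligible, using the survey weights."""
    total_weight = sum(r.sample_weight for r in records)
    flagged_weight = sum(r.sample_weight for r in records if potentially_eligible(r))
    return flagged_weight / total_weight if total_weight else 0.0
```

A weighted share computed this way corresponds to the kind of point estimate described earlier; the 95 percent confidence intervals discussed above would additionally require variance estimation that reflects NPSAS's complex sample design, which this sketch does not attempt.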
To understand how selected colleges address student food insecurity, we conducted four state site visits (California, Kentucky, Massachusetts, and Michigan). We selected these states based on the following criteria: (1) being mentioned in interviews with researchers or government officials as a state that is actively addressing college food insecurity or that has at least one public college taking action to address food insecurity among college students (number of mentions); (2) having a school or state program on hunger or food insecurity featured in research papers or policy briefs (number of mentions); (3) FNS data on food insecurity rates in the state, to indicate whether food insecurity among college students might also be a problem (rank by state); (4) FNS data on SNAP enrollment and participation in the state, to indicate the level of SNAP usage in the state (rank by state); and (5) FNS information regarding the number of SNAP waivers a state has received, as a proxy for SNAP policy activity in the state (rank by state). We also sought geographic diversity in our site visit states. To achieve this, we created a summary rank ordering of states based upon our criteria and then, from those states that ranked in the top 15, selected one state from each of the Northeast, South, Midwest, and West census regions. Some of our criteria were purely qualitative in nature, such as information from interviews, research papers, and policy briefs regarding states and colleges with promising practices. Our site visit selection focused specifically on states and colleges with documented activity addressing college student food insecurity and is therefore biased toward those that had taken such action. Our selection strategy did not capture situations where there was high food insecurity among students but the college or state was taking no action to address it, nor did we seek to identify or visit locations where food insecurity had not been identified as a problem. In addition to our site visits, we conducted interviews with officials from one college in Texas and one college in Ohio to learn about specific campus food insecurity initiatives in these states. In each site visit state, we visited several colleges that were taking action to address food insecurity among their student populations, selected based on recommendations from researchers and college officials. We also considered geographic proximity when selecting colleges to visit. Overall, we spoke with officials representing 14 2- and 4-year public colleges (12 in-person and 2 telephone interviews). In each of our site visit states, we visited at least one large public university and one community college. See table 7 for a list of the 2- and 4-year colleges we interviewed in each state. At colleges, we asked members of the leadership team, financial aid officers, student affairs administrators, and other staff members questions about how they recognize, measure, and address college student food insecurity. We also conducted discussion groups with students at seven colleges we visited and asked about their experiences with food insecurity and federal assistance programs, such as SNAP. Students were invited by college officials to participate in these meetings. In each state we visited, we also met with officials from the state agencies that administer SNAP and with any other state government agencies involved in overseeing higher education or in addressing food insecurity among college students.
Lastly, in each site visit state, we identified and interviewed staff members at policy organizations, such as legal policy institutes or hunger advocacy groups, involved in efforts to address food insecurity among college students. Assessing Federal Efforts to Address Food Insecurity We assessed the extent to which federal programs assist college students experiencing food insecurity by reviewing relevant federal laws, regulations, and agency guidance and program documents related to specific SNAP requirements for college students and we interviewed FNS national office officials, including representatives of the Divisions of SNAP Program Development, Employment and Training, and Retailer Policy. We also interviewed FNS regional office officials in four of the seven FNS regions about their experiences working with the FNS national office and with state SNAP agencies in their regions to address college student food insecurity and access to SNAP. We also sent an email to all 51 state SNAP agency directors (all 50 states plus the District of Columbia) to ask about any actions their state has taken to address college student food insecurity. We received responses from 50 of the 51 state SNAP agencies, for a 98 percent response rate. This email inquiry was conducted in March and April 2018 and may not include all state actions that have occurred since April 2018. We conducted in-depth interviews with officials at five state SNAP agencies and asked about any specific policies or actions their agencies have taken to address college student food insecurity or to assist potentially eligible college students to access SNAP. We conducted these interviews in person with state SNAP agencies during our four state site visits, and interviewed the Washington state SNAP agency by phone. We conducted this performance audit from July 2017 to December 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Michelle L. St. Pierre (Assistant Director), Nora Boretti (Analyst-In-Charge), Jessica K. Rider, and Stephen C. Yoder made significant contributions to this report. Also contributing to this report were Holly A. Dye, Barbara J. El Osta, Sarah C. Gilliland, Alison E. Grantham, Gina M. Hoover, Saida B. Hussain, Sheila R. McCoy, John W. Mingus Jr., Mimi Nguyen, Monica P. Savoy, Benjamin A. Sinoff, Almeta Spencer, Rachel R. Stoiko, Elaine L. Vaurio, and David A. Watsula.
Why GAO Did This Study Increasing evidence indicates that some college students are experiencing food insecurity, which can negatively impact their academic success. However, college students are only eligible for SNAP in certain cases. Given the substantial federal investment in higher education and the risk posed if students do not complete their degrees, GAO was asked to review food insecurity among college students. This report examines (1) what is known about the extent of food insecurity among college students and their use of SNAP; (2) how selected colleges are addressing student food insecurity; and (3) the extent to which federal programs assist students experiencing food insecurity. GAO reviewed relevant federal laws and agency documents and studies on student food insecurity; analyzed 2016 federal student data (the most recent available); and visited four states, selected based on actions taken to address student food insecurity, geographic diversity, and other factors. GAO interviewed researchers; officials from Education and FNS national and regional offices; and officials at 14 colleges, as well as students at 8 of these colleges. GAO also emailed all state SNAP agencies about their efforts related to students. What GAO Found There is limited information about the national prevalence of food insecurity among college students. GAO reviewed 31 studies that identified a wide range of food insecurity rates among the students studied, but the studies did not provide national estimates. College students at risk of food insecurity may be eligible for benefits from the Food and Nutrition Service's (FNS) Supplemental Nutrition Assistance Program (SNAP). However, GAO's analysis of Department of Education (Education) data shows that almost 2 million at-risk students who were potentially eligible for SNAP did not report receiving benefits in 2016. According to GAO's analysis, having a low income is the most common risk factor for food insecurity among college students. Among low-income students, most have one additional risk factor associated with food insecurity, such as being a first-generation student or a single parent. The 14 selected colleges that GAO contacted were addressing student food insecurity in a number of ways. For example, all 14 were providing free food to students through on-campus food pantries, and most were offering emergency funds to help students pay for living expenses that might otherwise force them to choose between buying food and staying in school. Many of these colleges had centralized student services to better address their students' basic needs and provide other support, such as screening students for potential eligibility and helping them apply for federal benefit programs like SNAP. Federal student aid generally does not cover all college costs for low-income students, and college students may have limited access to federal food assistance programs such as SNAP because of program eligibility restrictions. Some state SNAP agencies reported that they are taking steps to help students access SNAP by conducting outreach to colleges and developing guidance. Nevertheless, at 9 of the 14 colleges GAO contacted, some college officials and students said that they were unfamiliar with or did not fully understand SNAP's student eligibility rules. Some college officials said that they would like information from FNS to better explain SNAP student rules, but FNS has not made such information easily accessible on its website.
Further, college officials and state SNAP agencies noted that FNS does not share examples of actions taken by other states to help eligible students access SNAP. Clarification of SNAP student eligibility rules and enhanced information sharing about state efforts could help ensure that potentially eligible college students can access federal food assistance programs. What GAO Recommends GAO recommends that FNS (1) improve student eligibility information on its website and (2) share information on state SNAP agencies' approaches to help eligible students. FNS partially concurred, and plans to review its information. GAO continues to believe additional action is warranted, as discussed in the report.
Background History of the National Cemetery Administration The National Cemeteries Act of 1973 created the modern veterans’ cemetery system. NCA, within VA, manages a majority of veterans’ cemeteries in the United States. In that role NCA maintains existing national cemeteries and builds new national cemeteries for the nation’s veterans and their family members. Since 1978 NCA has also provided funding through VA’s Veterans Cemetery Grants Program (Grants Program) to help establish, expand, or improve state and tribal veterans’ cemeteries. States and tribal governments seeking funding from the Grants Program must apply to the VA. Any cemetery established, expanded, or improved through funding from VA’s Grants Program must be maintained and operated in accordance with NCA’s operational standards. Veterans from all 50 states, the District of Columbia, Puerto Rico, and some U.S. territories are served by national, state, or tribal cemeteries. In addition, over time NCA has changed its policies and procedures to better fulfill its mission to serve and honor veterans and their family members. For example, in 2011 NCA lowered its policy threshold for establishing new national cemeteries from an area having at least 170,000 veterans who are unserved by burial options to an area having 80,000 unserved veterans. NCA established this revised policy threshold in recognition that many highly populated areas still lacked reasonable access to a burial option, and based on data and analysis provided by an independent review of VA’s burial benefits program in 2008. This revised minimum veteran population threshold was chosen based on data showing that state veterans’ cemeteries funded through VA’s Grants Program were located in areas that typically served a maximum of 80,000 veterans within a 75-mile service area. According to VA documentation, moving to this lower threshold has enabled the agency to establish new national cemeteries in areas where states may not have been willing to place them because of the size and cost of operating a larger state veterans’ cemetery. NCA’s Various Burial Options NCA offers a variety of facilities to meet the burial needs of veterans, including various cemetery configurations that either provide burial options to eligible veterans or improve their access to burial options, as shown in table 1. NCA’s Methodology for Determining Access to Burial Options NCA uses county-level population data to determine whether veterans currently have reasonable access to burial options and uses county-level population projections to support decisions about future cemetery locations. NCA makes its decisions regarding whether a veteran is served or unserved based on the county in which the veteran resided, without reference to the location of the veteran’s actual residence. NCA’s methodology uses a veteran’s county of residence as a proxy for being within 75 miles of a veterans’ cemetery. NCA Plans to Establish Eighteen New National Cemeteries and Use Its Grants Program for the Creation of State Veterans’ Cemeteries to Increase Access to Burial Options NCA’s plan entails establishing 18 new national cemeteries—comprised of five traditional national cemeteries and 13 urban and rural initiative national cemeteries—and awarding funds for new state veterans’ cemeteries. 
In 2014, we reported that NCA estimated approximately 90 percent of the veteran population had reasonable access to burial options, and that it expected to reach its strategic goal of providing reasonable access to 96 percent of veterans by the end of fiscal year 2017. Since 2014, NCA has revised its strategic goal to provide reasonable access to 95 percent of the veteran population, and NCA’s current long-range plan to achieve this goal covers fiscal years 2018- 2022. NCA’s 2014 plan to increase veterans’ access to burial options included building 18 new national cemeteries as follows: Five traditional national cemeteries, to be located in Western New York; Central East Florida; Southern Colorado; Tallahassee, Florida; and Omaha, Nebraska. Taken together, according to NCA, these cemeteries are intended to provide a burial option to an additional 550,000 veterans and their families. Five urban initiative cemeteries, to be located in Los Angeles, California; the San Francisco Bay Area, California; Chicago, Illinois; Indianapolis, Indiana; and New York, New York. Taken together, according to NCA, the urban initiative is intended to expand burial options for approximately 2.4 million additional veterans in certain urban areas. NCA announced this initiative in 2011 with the purpose of expanding burial options in urban areas through building columbaria-only (facilities for cremated remains) national cemeteries close to the urban core. Eight rural initiative cemeteries, to be located in Idaho, Maine, Montana, Nevada, North Dakota, Utah, Wisconsin, and Wyoming. Taken together, according to NCA, the intent of the rural initiative is to increase the burial options for approximately 106,000 additional veterans in certain rural areas. NCA announced this initiative in 2012 with the purpose of increasing access by establishing new national cemeteries for states with no open national cemetery and a population of 25,000 or fewer veterans. In addition, since 1978, NCA has used the Grants Program to help increase veterans’ cemetery access. The Grants Program was established to complement national cemeteries by assisting state, territory, and tribal government applicants to establish, expand, or improve veterans’ cemeteries in order to provide gravesites for veterans in those areas where NCA cannot fully satisfy their burial needs. As noted earlier, states and tribal governments seeking grant funding must apply to the VA. States, funded by the Grants Program, often build in areas with veteran populations that are too small to qualify for a national cemetery. NCA prioritizes pending grant applications by giving the highest priority to cemetery construction projects in geographic locations with the greatest projected number of veterans who will benefit from the project, as determined by NCA based on county-level population projections. In 2018, NCA provided funding for a total of 15 grants for the expansion, improvement, or establishment of state and tribal government veterans’ cemeteries. This includes the establishment of two new state and tribal government veterans’ cemeteries. In 2019, NCA expects to provide funding for 17 state and tribal government veterans’ cemetery projects, three of which would be for new cemeteries. 
NCA Has Made Limited Progress in Implementing Its Plan for Increasing Burial Access and Faces Continuing Challenges While NCA has made some progress in implementing its plan to increase burial access for veterans, that progress has been limited, as it is years behind its original schedule for opening new cemeteries. In its efforts, NCA has experienced three key challenges: (1) acquiring suitable land for new national cemeteries, (2) estimating the costs associated with establishing new national cemeteries, and (3) using all available data to inform how its Grants Program targets unserved veteran populations. NCA Has Opened Six New Cemeteries since 2014 but Is Years Behind Its Original Schedule In 2014, NCA planned to open 18 new sites by the end of fiscal year 2017 to better serve the burial needs of the veteran population. As of September 2019, NCA has opened four new traditional national cemeteries—Tallahassee National Cemetery in Tallahassee, Florida; Cape Canaveral National Cemetery in Mims, Florida; Omaha National Cemetery in Omaha, Nebraska; and Pikes Peak National Cemetery in Colorado Springs, Colorado. NCA also opened two of its eight planned rural initiative cemeteries—Yellowstone National Cemetery in Laurel, Montana, and Fargo National Cemetery in Harwood, North Dakota. As a result, according to NCA, by the end of fiscal year 2018 the percentage of veterans with reasonable access had increased from 90 percent to about 92 percent. As previously discussed, NCA’s goal is to provide 95 percent of veterans with reasonable access to burial options. As we reported in 2014, NCA had initially planned to open all of its 13 urban and rural initiative sites by the end of fiscal year 2017. As shown in figure 1, NCA had originally estimated completing all five of its urban initiative sites by the end of fiscal year 2015. However, the completion dates for all of these sites have slipped multiple times. In July 2019, NCA officials stated that the planned completion dates for the urban initiative sites were as follows: October 2019 for Los Angeles, sometime in 2020 for New York and Indianapolis, September 2021 for Chicago, and sometime in 2027 for San Francisco. As shown in figure 2, NCA has opened two of its rural initiative sites, in Laurel, Montana, and Fargo, North Dakota. However, the completion dates for the other six rural initiative sites have slipped multiple times. In September 2019, NCA officials stated that the planned completion dates for the rural initiative sites were currently Fall 2019 for Twin Falls, Idaho, Machias, Maine, and Rhinelander, Wisconsin; sometime in 2020 for Cheyenne, Wyoming; and Summer 2021 for Cedar City, Utah. NCA did not provide a specific estimated completion date for the site in Elko, Nevada, affirming that it would be completed “in a future year.” When we asked NCA officials why the rural and urban initiative sites were currently projected to take years longer to complete than originally planned, they replied that they might have overstated their 2014 expectations for having all initiative sites completed by the end of fiscal year 2017. NCA officials also stated that it takes at least 12 months for the land acquisition phase of cemetery construction projects; 9 to 12 months for the design phase; and 12 to 15 months—sometimes up to 30—for the construction phase. According to NCA officials, as of September 2019, five of the 11 initiative sites had reached the construction phase, and one of the sites no longer had an estimated completion date. 
There were still some unresolved issues that had complicated NCA's ability to estimate a completion date for the site in Elko, Nevada. See figure 3 for a timeline of each of NCA's urban and rural initiative sites as of September 2019.

NCA Has Faced Challenges in Implementing Its Efforts to Increase Access to Burial Options for Veterans

In executing its plans to increase access to burial options for veterans, NCA has experienced three key challenges: (1) acquiring suitable land for new national cemeteries; (2) estimating the costs associated with establishing new national cemeteries; and (3) using all available data to inform how its Grants Program targets unserved veteran populations.

Challenges in Acquiring Land Have Led to Delays in Implementing NCA's Plan

The primary factor that has led NCA to adjust its timelines for completing these cemeteries concerns challenges in acquiring suitable land. Such challenges include difficulty in finding viable land for development, legal issues related to the acquisition process, and resistance from the local community, among others. Four examples are described below, including two instances in which, as of July 2019, NCA had not yet acquired suitable land, which may further delay the opening of those specific urban and rural sites.

Chicago, Illinois. NCA officials stated that they are on their fifth attempt to acquire land for the urban initiative site in Chicago, Illinois. In addition, they said that the environmental assessment process for the Chicago site is currently underway, and that a site viability decision will not occur until the environmental assessment process is completed later in 2019. According to NCA documentation we reviewed, NCA initiated the land acquisition process for the Chicago site in June 2011 and planned to complete the process by July 2018. If the fifth attempt to acquire land is not successful, then NCA will attempt—for the sixth time—to acquire land. According to NCA officials, this would result in an additional 12 to 18 months to identify and evaluate new property for potential acquisition, likely further delaying the opening of this site. See figure 4 for more details on NCA's attempts to acquire land for the urban initiative site in Chicago.

Elko, Nevada. NCA officials stated that they have identified a top-rated site for the rural initiative site in Elko, Nevada, on land currently owned by the Bureau of Land Management. However, according to NCA officials, Congress would need to enact legislation transferring this land from the Bureau of Land Management to VA before NCA could begin construction. As of June 2019, Congress had not done so. According to NCA officials, VA has opened dialogue with local officials about drafting a utility agreement for the city to construct infrastructure needed to supply water to the site. Implementation of a utility agreement would depend on whether legislation is introduced and passed authorizing the Bureau of Land Management to permanently transfer property to VA for national cemetery use. Also, according to NCA officials, once legislation has passed to allow the transfer of land from the Bureau of Land Management to VA, they estimate it will take 12 to 18 months for the land transfer to be completed.

Indianapolis, Indiana.
In a written response, NCA officials stated that construction for the urban initiative site in Indianapolis, Indiana, has been delayed by about a year because a public protest over environmental concerns related to NCA's acquisition of the site resulted in a land exchange with the previous landowner in January 2019. In addition, NCA had to conduct a partial project re-design for the exchanged property. According to NCA's May 2018 plan of actions and milestones, it had expected to have acquired the land for the Indianapolis site by August 2018 and to have completed construction in December 2019. However, officials told us in September 2018 that, due to the delays in acquiring the land, NCA had revised its planned construction completion date to August 2020.

Los Angeles, California. According to officials, NCA is partnering with the Veterans Health Administration, which transferred property for the proposed columbarium at the Los Angeles, California, urban initiative site. Officials stated that this project was delayed initially due to the need to remove existing encumbrances on the land (for example, leases with tenants), among other things. In July 2019, officials stated that the project is scheduled for completion in October 2019.

According to NCA officials, unforeseen site conditions can also contribute to delays in cemetery construction projects. During the design phase, soil and geotechnical samples are taken but do not cover the entire site. After excavation begins, issues such as rock formations or hazardous waste not identified during the geotechnical investigation may create challenges to developing land for cemetery use. For example, in July 2019 NCA officials stated that the urban initiative site in San Francisco had encountered major geotechnical and soil issues, causing the project completion to slip to 2027. Also, according to NCA's 2017 annual status report to Congress on new national cemeteries, solicitation of the construction contract for a new cemetery construction project in Western New York could not begin until additional parcels of land had been acquired. Those parcels of land have a gas well and a gas pipeline that must be relocated. According to NCA officials, as of September 2019, six of the 11 urban and rural initiative sites had not yet begun to be excavated, and any issues that arise during the excavation process at these sites could pose further scheduling delays.

NCA's Cost Estimates for Most of Its Rural Initiative Sites Have Increased Significantly

We found that NCA's cost estimates for seven rural initiative sites have increased significantly above what NCA officials had initially estimated. In its strategy, NCA had estimated that the construction cost for each of the seven rural initiative sites would be approximately $1 million (totaling approximately $7 million). However, NCA officials told us in August 2018 that the construction cost estimates for these sites had increased to more than $3 million each (totaling almost $24 million). This amounts to a cost increase of more than 200 percent. Further, the information they provided was not always consistent. For example, in July 2018 NCA officials provided us the average land acquisition and construction costs for the urban and rural initiatives. According to the document they provided, the average construction cost for each urban initiative cemetery is $7.5 million.
However, in August 2018 NCA stated in a written response that the construction cost estimates for each of the urban initiatives ranged from approximately $9 million to more than $22 million, reflecting an average cost of $13.6 million. NCA's cost-estimating guidance used to prepare construction cost estimates does not fully incorporate the 12 steps identified in our Cost Guide that should result in reliable and valid estimates that management can use to make informed decisions, as shown in table 2. Appendix I provides a detailed summary of our assessment of NCA's cost-estimating guidance. Specifically, NCA's cost-estimating guidance fully met one step, substantially met four steps, partially met four steps, minimally met two steps, and did not meet one step. For example:

NCA's cost-estimating guidance fully met the step of "obtaining the data" in that it requires a market survey that explores all factors that will affect the bid cost and collects valid and useful historical data to develop a sound cost estimate.

NCA's cost-estimating guidance substantially met the step of "updating the estimate" in that it requires cost estimates to be regularly updated. For instance, it requires an updated cost-estimating report at each stage of the design of the construction project.

NCA's cost-estimating guidance minimally met the step of "conducting a risk and uncertainty analysis" in that, while it mentions the inclusion of a risk analysis, it does not describe what a risk analysis is and how it relates to cost. Additionally, none of the guidance we reviewed contains any discussion of risk management.

NCA's cost-estimating guidance did not meet the step of "conducting a sensitivity analysis." According to our Cost Guide, a sensitivity analysis should be included in all cost estimates because it examines the effects of changing assumptions and ground rules. Because uncertainty cannot be avoided, it is necessary to identify the cost elements that represent the most risk, and cost estimators should, if possible, quantify the risk.

NCA uses multiple guidance documents on cost estimation and requires that managers and contractors use all of these documents in implementing their projects. Specifically, NCA uses VA's 2011 Manual for Preparation of Cost Estimates and Related Documents for VA Facilities (Manual); VA's 2011 Architect/Engineer (A/E) Submission Requirements for National Cemetery Projects Program Guide PG 18-15 Volume D (Guide); and NCA's Construction Program Conceptual Estimate Worksheet. We refer to these documents collectively as "NCA's cost-estimating guidance." We previously reported on VA's management of minor construction projects and made several recommendations, including that the Veterans Health Administration revise its cost-estimating guidance to incorporate the 12 steps presented in the Cost Guide, to help VA have greater assurance that its cost estimates for minor construction projects are reliable. VA concurred and stated that it would ensure that the Veterans Health Administration update its cost-estimating guidance by incorporating the 12 steps outlined in the Cost Guide, as applicable. As of August 2019, VA had not taken any action to implement this recommendation. The guidance document it plans to update, the VA Manual, is also used by NCA. Further, NCA uses additional guidance documents to develop cost estimates for its cemetery construction projects—including the urban and rural initiatives—that do not fully incorporate the 12 steps presented in the Cost Guide.
Without NCA’s revising its cost-estimating guidance to more fully reflect the 12 steps in the Cost Guide, including “conducting a risk and uncertainty analysis,” NCA will not be well-positioned to provide reliable cost estimates to VA and enable it to make informed decisions regarding the management of cemetery construction projects. NCA’s Grants Program Does Not Use All Available Data for Targeting Unserved Veteran Population Sites As noted earlier, the Grants Program is part of NCA’s plan to increase veterans’ reasonable access to burial options. According to NCA officials, their plan to meet their strategic goal of 95 percent of veterans being served by burial options relies, in part, on the state and tribal government efforts funded by the Grants Program. The Grants Program, in turn, relies on states and tribal governments applying for funding to build new cemeteries or expand existing cemeteries. An NCA official told us that NCA does not have the authority to formally request that a state seek grant funding to expand access in an unserved area. However, according to VA officials, the Grants Program has had informal discussions with states that it believes have larger concentrations of unserved veterans, in order to encourage grant applications to provide increased burial access for unserved veteran populations. When reviewing grant applications, NCA considers a number of factors, including how the grant would enhance access for unserved veterans. NCA officials stated that they use the VA’s county-level population data to identify veteran population areas unserved by national, state, or tribal government veterans’ cemeteries. This analysis also allows NCA to project where additional state and tribal government veterans’ cemeteries may be most needed. Specifically, NCA has ranked what it identified as the 40 largest currently unserved veteran population areas. NCA performs this ranking at the county level, not the more precise census tract level, although as we have previously reported it has the technical ability to use census tract data. In September 2014, we reported that NCA was using population data at the county level to identify veterans not served by burial options, and that using population data at the census tract level would enhance NCA’s management of the national cemetery program. Specifically, we recommended that NCA use its existing capabilities to estimate the served and unserved veteran populations using census tract data. This would have allowed them to make better-informed decisions concerning where to locate new national cemeteries, as well as identify which state and tribal government cemetery grant applications would provide reasonable burial access to the greatest number of veterans. However, VA did not concur with that recommendation. In its comments on our draft report, VA agreed that census tract data may yield more precise information than county-level population data, but it disagreed with our conclusion that the use of census tract data would have helped VA to make better-informed decisions regarding the location of burial options. For this review, we performed an analysis using census tract data to examine the 40 prospective sites that NCA has identified as the currently largest unserved areas, using current veteran population data. Our analysis yielded estimates for veterans in the service areas for these prospective sites that differed substantially in some instances from the numbers used by NCA (see figure 5). 
For example, NCA ranked Erie, Pennsylvania, as 4th on its list of prospective sites, based on its estimate that an additional 45,154 veterans could be served by a cemetery at this location. However, using census tract data we estimate that only about 10,000 veterans could be served there, resulting in a lower priority for Erie, Pennsylvania, on this list of prospective sites. Similarly, the county-based methodology used by NCA ranked Decatur, Alabama, as 25th on the list of prospective sites, while our methodology based upon nearby census tracts placed it 2nd on the list by estimated number of veterans in the service area. Thus, even though it could serve many additional veterans, Decatur, Alabama, would not be ranked highly on the list for funding using NCA's methodology. By using the more precise census tract data to help inform its grant-making decisions, NCA could enhance its ability to implement its plan to provide burial options to unserved veterans. Comparing estimates of unserved veterans based on current census tract data with such estimates based on current county-level data can be a useful supplement to NCA's current reliance on long-term projected county-level population data. Comparing census tract data with county-level data could also identify areas where the county-level projections might be overridden or require additional scrutiny. This could position NCA to better identify those areas of the country that will have the most significant unserved veteran populations. Additionally, this could help NCA refine its current plans or develop new ones, as it deems appropriate. We therefore continue to maintain the validity of our 2014 recommendation for VA to use census tract data to estimate the served and unserved veteran populations to help inform its plans for providing reasonable access to burial options.

Conclusions

By NCA's estimates, more than 2.1 million veterans—about 10 percent of the veterans in the United States—did not have reasonable access to burial options at the end of fiscal year 2013. According to NCA, its plan had helped increase the percentage served by burial options to about 92 percent of the veteran population by the end of fiscal year 2018. However, completion of some of the urban and rural sites that are part of NCA's plan is currently estimated to take 5 or more years longer than planned, at significantly higher cost, in part because construction cost estimates for the remaining sites may be unreliable. Without NCA's revising its cost-estimating guidance to more fully reflect the 12 steps in the Cost Guide, including "conducting a risk and uncertainty analysis," NCA will not be well-positioned to provide reliable cost estimates to VA and enable it to make informed decisions regarding the funding and oversight of NCA's ongoing minor construction projects to enhance veterans' burial options.

Recommendation for Executive Action

The Secretary of Veterans Affairs should ensure that the Under Secretary for Memorial Affairs update its cost-estimating procedures for cemetery construction projects to fully incorporate the 12 steps identified in the GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs.

Agency Comments

We provided a draft of this report to VA for review and comment. In written comments, VA concurred with our recommendation. VA also provided technical comments, which we incorporated as appropriate. VA's comments are printed in their entirety in appendix II.
In its technical comments, VA disagreed with our finding that NCA had made limited progress implementing its plan for increasing burial access for veterans and stated that NCA had instead made significant progress. As we note in this report, in 2014, NCA planned to open 18 new sites by the end of fiscal year 2017 to better serve the burial needs of the veteran population. However, as of September 2019, only six of the planned sites were open, with NCA years behind its original schedule. For this reason, we characterized the progress as “limited.” While the progress has been limited, it is important to note that the opening of the six sites has increased accessibility of burial options to veterans. VA also stated that it continues to disagree with our 2014 recommendation that VA use census tract data to estimate the current served and unserved veteran populations to inform its plans for providing reasonable access to burial options. In its written response, VA stated that we recommended NCA use census tract rather than county-level data. However, that is not what we recommended. As we stated in this report, comparing estimates of unserved veterans based on current census tract data with estimates based on current county-level data would provide a useful supplement to NCA’s current reliance on long-term projected county-level population data. Specifically, NCA would be better positioned to identify those areas of the country that will have the most significant unserved veteran populations and refine its current plans or develop new ones, as it deems appropriate. We are sending copies of this report to interested congressional committees and the Secretary of Veterans Affairs. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III. Appendix I: GAO Assessment of the National Cemetery Administration’s (NCA) Guidance for Developing Cemetery Construction Cost Estimates We compared NCA’s cost-estimating guidance with the 12 steps identified in the GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs (Cost Guide). We found that NCA’s cost-estimating guidance on preparing cost estimates for cemetery construction projects—specifically Department of Veterans Affairs’ (VA) Manual for Preparation of Cost Estimates & Related Documents for VA Facilities (Manual), VA’s Architect/Engineer Submission Requirements for National Cemetery Projects, Program Guide 18-15 Volume D (Guide), and NCA’s Construction Program Conceptual Estimate Worksheet (Worksheet)—does not fully incorporate these 12 steps, as shown in table 3. The guidance incorporates some of the 12 steps to some degree, but not others, raising the possibility of unreliable cost estimates for NCA’s urban and rural initiatives. Specifically, NCA’s guidance on preparing cost estimates: fully or substantially met five of the 12 steps, partially met four of the 12 steps, and minimally met or did not meet three of the 12 steps. Appendix II: Comments from the Department of Veterans Affairs Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Diana Maurer, (202) 512-9627 or maurerd@gao.gov. 
Staff Acknowledgments In addition to the contact named above, Brian Lepore, Director (Retired); Maria Storts, Assistant Director; Pamela Nicole Harris, Analyst-in-Charge; Brian Bothwell, Jennifer Echard, Alexandra Gonzalez, Jason Lee, Amie Lesser, Serena Lo, John Mingus, Brenda Mittelbuscher, Maria Staunton, Frank Todisco, Cheryl Weissman, and John Wren made significant contributions to this report.
Why GAO Did This Study The VA is responsible for ensuring that veterans have reasonable access to burial options in a national or state veterans' cemetery. In fiscal year 2018 VA estimated that about 92 percent of veterans had reasonable access to burial options, which was an increase from 90 percent in fiscal year 2014 but short of its goal of 96 percent by the end of fiscal year 2017. The House Appropriations Committee has expressed concerns that there are geographic pockets where veterans remain unserved by burial options. House Report 115-188 accompanying a bill for the Military Construction, Veterans Affairs, and Related Agencies Appropriations Act, 2018, includes a provision for GAO to examine veterans' access to burial options. This report (1) describes VA's plan for increasing reasonable access to burial options for veterans and (2) assesses VA's progress in implementing its plan and any challenges experienced. GAO reviewed applicable VA and NCA documents, compared NCA's cost-estimating practices with GAO's cost-estimating 12 steps, and met with cognizant officials regarding NCA's efforts to provide reasonable access to burial options. What GAO Found Within the Department of Veterans Affairs (VA), the National Cemetery Administration (NCA) has a plan to establish 18 new national cemeteries to increase reasonable access to burial options for veterans. NCA defines reasonable access as a national or state veterans' cemetery being located within 75 miles of veterans' homes. Key parts of NCA's plan include establishing 13 urban and rural initiative national cemeteries and awarding grant funds to state applicants for establishing new state veterans' cemeteries. NCA has made limited progress in implementing its plan to increase burial access and is years behind its original schedule for opening new cemeteries. For example, NCA has opened only two of its planned urban and rural initiative sites and is behind its original schedule for the other 11 (see fig. below). The primary factor delaying NCA's completion of these cemeteries has been challenges in acquiring suitable land. NCA has also been challenged in producing accurate estimates of construction costs for most of its rural initiative sites. Cost estimates have increased more than 200 percent (from about $7 million to $24 million) for these sites, and NCA's guidance for developing cost estimates for the cemeteries does not fully incorporate the 12 steps identified in cost-estimating leading practices—such as conducting a risk and uncertainty analysis or a sensitivity analysis. As a result, NCA is not well positioned to provide reliable and valid cost estimates to better inform decisions to enhance veterans' cemetery access. What GAO Recommends GAO recommends that NCA fully adopt cost-estimating leading practices into its procedures to assist in improving its cost estimates for establishing cemeteries. NCA concurred with our recommendation.
Background

The Department of Agriculture Administers the Animal Welfare Act

USDA's APHIS is responsible for implementing the Animal Welfare Act. The act and its implementing regulations govern, among other things, how federal and nonfederal research facilities must treat particular species of warm-blooded animals to ensure their humane treatment when used in research, teaching, testing, or experimentation. The Animal Welfare Act's definition of "animal" excludes birds, rats of the genus Rattus, and mice of the genus Mus when those animals are bred for use in research. The act also excludes horses not used for research purposes and other farm animals used or intended for use as food or fiber or in certain types of research. The Animal Welfare Act also excludes cold-blooded animals—such as fish, reptiles, or amphibians—and invertebrates. See table 1 for a summary of the animals covered and not covered by the Animal Welfare Act. (Animals covered by the Health Research Extension Act are also included in table 1 and described in the next section.) The Animal Welfare Act and its regulations contain specific standards for research facilities. These include:

Registration. Nonfederal research facilities that conduct activities regulated by the Animal Welfare Act must register with APHIS. The act does not require that federal research facilities register with APHIS. APHIS does, however, assign federal research facilities certificate numbers that it uses to track whether they have submitted their required annual report (see below). As of March 2018, APHIS had assigned such numbers to 157 federal research facilities. Some of these federal research facilities, such as VA, have elected to report information to APHIS on an individual basis, while others, such as HHS's Centers for Disease Control and Prevention, submit a single report covering research facilities in several states.

Annual report. Reporting facilities that used or intended to use live animals in research, tests, experiments, or for teaching must submit a retrospective annual report about those animals to APHIS on or before December 1 of each calendar year.

Standards for humane handling, care, treatment, and transportation of animals. The Animal Welfare Act directs research facilities to meet certain standards of care for the animal species that are covered by the act. The standards of care are tailored to particular species of animals or groups of species.

Institutional Animal Care and Use Committees. Research facilities must appoint a committee to, at least semi-annually, review the facility's program for humane care and use of animals, to inspect all facilities, and to prepare reports of its evaluation. The committee is responsible for reviewing research proposals to determine whether the proposed activities are in accordance with the act or there is an acceptable justification for a departure from the act.

Federal inspections. APHIS officials have the authority to inspect nonfederal research facilities, records, and animals to enforce the provisions of the act. The Animal Welfare Act does not expressly provide APHIS the authority to inspect federal research facilities, and APHIS will not do so unless invited.

The Animal Welfare Act exempts farm animals, other than horses, from its coverage when they are used or intended for use as food or fiber or in agricultural research that is intended to improve animal nutrition, breeding, management, or production efficiency, or to improve the quality of food or fiber.
According to officials with USDA’s Agricultural Research Service (ARS), most of the agency’s research activities fall under this exemption. Nevertheless, in February 2016, APHIS and ARS signed a memorandum of understanding concerning laboratory animal welfare. The intent of the memorandum of understanding is to maintain and enhance agency effectiveness and avoid duplication by allowing APHIS to use applicable sections of the Animal Welfare Act’s requirements, regulations, and standards to inspect ARS animal research facilities. Among the provisions of the memorandum, ARS agreed to register its animal research facilities with APHIS and submit an annual report to APHIS. As of March 2018, 35 ARS animal research facilities were voluntarily registered with APHIS, and ARS facilities submitted their first annual reports for activities conducted in fiscal year 2016. NIH Administers the Health Research Extension Act NIH, within the Department of Health and Human Services, administers the Health Research Extension Act. The act calls for the Director of NIH to establish guidelines that govern how certain research institutions that conduct activities using animals are to consider animal welfare. In particular, the guidelines govern how those research institutions— including federal facilities—that receive funding from Public Health Service agencies are to ensure the humane treatment of all vertebrate animals used in biomedical or behavioral science research. NIH conducts site visits at selected institutions to assess compliance with the act. Whereas the Animal Welfare Act applies to certain warm-blooded animals, the definition of animals used for the purposes of the Health Research Extension Act covers all vertebrates, including mice, rats, and fish species that are commonly used in laboratory research (see table 1). Under the act, research institutions are required to provide certain information to NIH in order to be eligible for Public Health Service funding. In particular, they must provide for NIH approval a document that describes their animal care and use program and that assures that the facility meets applicable standards. NIH calls for research institutions to provide, among other information, a commitment to comply with all applicable provisions of the Animal Welfare Act and other federal statutes and regulations relating to animals, a description of the facility, and an “average daily inventory” of species housed at the facility. In addition, research institutions approved for Public Health Service funding must annually report changes in their animal use program to NIH. As of September 2017, NIH had approved 111 federal facilities across 8 agencies for funding under the act. APHIS and NIH Have Instructed Federal Agencies to Provide Data on Animal Use, but APHIS’s Instructions Have Not Ensured Consistent and Complete Reporting As directed by the regulations implementing the Animal Welfare Act, the 10 agencies we reviewed submitted to APHIS the required annual reports on their use of animals covered by the act from fiscal years 2014 through 2016. However, APHIS’s reporting instructions have not ensured consistent and complete reporting because they have been unclear about which animal species, activities, and activity locations are required to be reported for the purposes of the Animal Welfare Act. 
The federal facilities we reviewed that conduct activities with animals using Public Health Service funding met NIH requirements to provide assurance documentation about their animal use programs and to provide required annual reports for fiscal years 2014 through 2016.

Federal Agencies Generally Report to APHIS on Animal Use, but APHIS Has Not Provided Sufficient Instructions to Ensure Consistent and Complete Reporting

The Animal Welfare Act regulations require federal agencies that use or intend to use live animals in research to report on their use of these animals. As directed by APHIS, these agencies, or their individual research facilities, must submit an annual report to APHIS on or before December 1 of each calendar year. APHIS instructs research facilities to submit an annual report that includes information about animals covered by the Animal Welfare Act's regulations and the number of such animals used as well as those held for use but not used, and that provides assurances that the facility has met applicable standards, such as standards for the appropriate use of anesthetic, analgesic, and tranquilizing drugs. In addition, facilities must report whether the animals fall into one of three categories related to pain or distress and the efforts the facilities took to relieve pain or distress. Facilities must also attach a summary of any activities that did not meet the standards of the act but that were approved by the facility's Institutional Animal Care and Use Committee. All 10 of the federal agencies we reviewed submitted annual reports to APHIS showing that their facilities had used animals in research in fiscal years 2014 through 2016. APHIS has procedures in place to track which agencies' facilities have reported and to notify any that have not done so. For example, APHIS has developed schedules for sending reminders to facilities that have not yet reported. APHIS expects federal research facilities that it has assigned certificate numbers but that did not use any animals in a particular fiscal year to submit a report with that information. APHIS data show that the 10 federal agencies in our review reported that their facilities used more than 210,000 animals covered by the Animal Welfare Act in fiscal years 2014 through 2016. However, in our comparison of federal agencies' annual reports to APHIS with their responses to our request for information about their activities, we found instances in which agencies did not report activities covered by the act or did not report similar activities consistently across facilities. These conditions resulted, in part, from APHIS not providing sufficient instructions on the research activities that federal agencies are to include in their annual reports. Additionally, we found that facilities reported species not covered by the act. As a result, the data that research facilities submit to APHIS in their annual reports may not accurately reflect the facilities' uses of animals covered by the act. We identified three areas in which federal agencies' annual reports were inconsistent or incomplete: birds, animal use outside the United States, and field studies.

Use of Birds

The Animal Welfare Act and birds
Animal Welfare Act: The term animal excludes birds bred for use in research.
APHIS's 2017 instructions for completing the annual report: "Do NOT report the use of … birds, reptiles, fish or other animals which are exempt from the regulation under the [Animal Welfare Act]."

In 2002, Congress amended the definition of animal in the Animal Welfare Act to exclude birds that are bred for use in research. However, APHIS instructs facilities not to report any birds in their annual reports, regardless of whether they were bred for research. Five agencies reported to us that their research facilities used birds in fiscal years 2014 through 2016—including some not bred for research and therefore potentially covered by the act—but that they followed APHIS's instructions not to report them. According to APHIS officials, since Congress amended the definition of animal in the act, the agency has been aware of the need to define which birds are covered by the act and should, among other things, be reported to APHIS by research facilities. The officials said that until the agency has defined birds covered by the act, they do not believe that it is appropriate to require research facilities to report their use of birds. However, as of February 2018, APHIS had not provided us with a schedule or plan for defining birds covered by the act or for developing reporting requirements for those birds. As a result, it is unclear when, or if, APHIS will require research facilities to report their use and treatment in research of birds that are covered by the Animal Welfare Act. Until APHIS develops such requirements, federal (and other) research facilities will have incomplete information about what they should include in annual reports submitted to APHIS, and APHIS will not have assurance that annual reports from research facilities fully reflect research activities covered by the act.

Animal Use outside the United States

The Animal Welfare Act and reporting facilities
Animal Welfare Act regulations: "The reporting facility shall be that segment of the research facility, or that department, agency, or instrumentality of the United States, that uses or intends to use live animals in research, tests, experiments, or for teaching."
APHIS's 2017 instructions for completing the annual report: The instructions do not direct federal research facilities to report activities involving animal use outside the United States.

The Animal Welfare Act regulations define a reporting facility to include a department, agency, or instrumentality of the United States. Officials from USDA's Office of the General Counsel told us that there is no exclusion in the act or its regulations for federal research facilities that are located outside of the United States. However, APHIS does not instruct federal research facilities to report activities involving animal use outside the United States. Of the 10 agencies with federal research facilities that submitted annual reports to APHIS, we identified three, through our initial contacts and follow-up interviews, that conduct activities outside the United States involving animals that may be covered by the Animal Welfare Act: the Departments of Commerce and Defense and the Smithsonian Institution. We found that officials from the three agencies had different understandings of their obligation to report those activities to APHIS. A senior official from the Department of Commerce's National Marine Fisheries Service said that he knew of no reason not to report on studies conducted outside the United States and that the agency had reported such activities in fiscal year 2017.
On the other hand, officials from the Department of Defense and the Smithsonian Institution told us that APHIS officials have instructed them not to report activities conducted outside of the United States. As a result, the Department of Defense and the Smithsonian Institution did not report animal use in their non-domestic facilities in fiscal years 2014 through 2016. With instructions from APHIS that federal research agencies report all activities covered by the Animal Welfare Act, regardless of location, APHIS and the public would have greater assurance that annual reports fully reflect activities covered by the act and that agencies are reporting such activities consistently.

Animal Use in Field Studies

The Animal Welfare Act and field studies
Animal Welfare Act regulations: "Field study means a study conducted on free-living wild animals in their natural habitat. However, this term excludes any study that involves an invasive procedure, harms, or materially alters the behavior of an animal under study."
APHIS's 2017 instructions for completing the annual report: APHIS's instructions do not sufficiently clarify the conditions under which a field study would be invasive, harmful, or materially alter behavior and, therefore, be covered under the act.

APHIS exempts some research involving wild animals from the requirements of the Animal Welfare Act regulations, including annual reporting. Specifically, in promulgating the current definition of "field studies" in regulation, APHIS stated, "if the research project meets the definition of field studies, the research project would not fall under the regulation." To qualify for this exemption, a study must take place in a free-living, wild animal's natural habitat and not involve an invasive procedure, harm, or materially alter the behavior of an animal under study. APHIS's instructions for annual reporting note this exemption. However, they do not sufficiently clarify the conditions under which a field study would qualify, nor do they point to any source providing clarifying language. For example, the instructions do not describe criteria research facilities could use to identify activities that are invasive, harmful, or materially alter behavior. We found that agencies have interpreted the field study exemption differently. For example, officials from three agencies within the Department of the Interior told us that the agencies did field research with many species in fiscal years 2014 through 2016, but we found the agencies had different approaches to reporting that research to APHIS. Specifically, the U.S. Geological Survey and National Park Service reported the use of dozens of animal species to APHIS, while the Fish and Wildlife Service did not report any. An official with the Fish and Wildlife Service explained to us that the agency did not report the animals to APHIS because they were only held temporarily. Officials from the Fish and Wildlife Service and U.S. Geological Survey told us that APHIS's guidance on field studies is confusing and causes discrepancies in reporting. NASA conducts research involving temporary capture, blood sampling, and tagging of animals to study any possible effects of NASA's launch sites on the surrounding ecosystem, but the agency does not include these activities in its annual reports to APHIS. The National Marine Fisheries Service also conducts field research involving temporary capture, blood sampling, and tagging of marine mammals for various purposes.
Some of the service’s research facilities have reported these types of activities to APHIS, and according to a service official, the other facilities plan to do so. An official from the service also told us that the agency has received inconsistent guidance from APHIS about what field research to report. The National Marine Fisheries Service’s facilities that have reported animal research to APHIS have represented a large portion of the overall number of animals that federal facilities reported in fiscal years 2014 through 2016. For example, in fiscal year 2016, the agency’s facilities accounted for nearly 16,000 of about 82,000 animals reported to APHIS by the 10 federal agencies in our review. Therefore, whether these activities should or should not be reported will have a large effect on the total number of animals that federal facilities reported using for research. APHIS officials told us that they are developing additional clarifying guidance on field studies and will publish the guidance for public comment in the third quarter of fiscal year 2018. However, APHIS has not yet released a draft of this guidance. A draft with criteria for identifying which field studies are covered by the Animal Welfare Act and therefore should be reported—for example, because the studies are considered to be invasive, harmful, or materially alter behavior—would enable APHIS to ensure that the research community’s views are incorporated. With clearer instructions that include such criteria, APHIS and the public would have greater assurance that annual reports fully reflect activities covered by the act. Federal Research Facilities We Reviewed Met Instructions to Report Information on Their Animal Care and Use Programs to NIH NIH has provided guidance to federal and nonfederal research facilities about what they are required to report on their animal use, and federal facilities we reviewed met those requirements. In order to obtain funding from the Public Health Service agencies, research facilities must obtain approval from NIH of their animal welfare assurance statement and must provide annual reports to NIH. To obtain an approved assurance, a research facility must provide NIH with information about its animal care and use program. NIH provides facilities with a sample assurance document that describes the required information, including assurances of compliance with animal welfare standards signed by appropriate officials, a roster of Institutional Animal Care and Use Committee membership, an average daily census of animals, and other information. NIH’s approval of an animal care program lasts up to 5 years, and according to NIH officials, the agency typically begins its review of a renewal after 4 years. To help facilities meet the annual report requirement, NIH provides an annual-reporting sample document that directs research facilities to update the animal care and use committee’s roster and to note any change in accreditation from the private accreditation organization AAALAC International and describe any significant changes in their animal care program, such as the species or number of animals maintained in housing. NIH officials told us the purpose of the assurances is to ensure that the proper facilities and procedures are in place to properly care for the animals, and that NIH does not use them as a public reporting tool. 
Health Research Extension Act of 1985: "… animal care committees at each entity which conducts biomedical and behavioral research with funds provided under this Act (including the National Institutes of Health and the national research institutes) to assure compliance with the guidelines established [by the Director of NIH]."

NIH has procedures to ensure that facilities that seek to receive funding from Public Health Service agencies have animal care programs with active assurances. NIH provided us with its data for tracking which facilities were receiving Public Health Service funding and which facilities had approved programs. As of November 2017, according to NIH data, all of the federal facilities receiving funding from Public Health Service agencies for activities involving animals had an active assurance. Using a sample of 16 assurances from federal facilities, we found that these assurances contained information called for by NIH, including signatures from institutional officials, rosters of Institutional Animal Care and Use Committees, and animal inventories. NIH data show that all assured facilities submitted annual reports in calendar years 2014, 2015, and 2016.

APHIS and NIH Publicly Share Some Federal Animal Use Information, but APHIS Does Not Describe the Quality of the Information It Shares

APHIS and NIH publicly report some information about federal agencies' use of research animals. Although the Animal Welfare Act does not require APHIS to share this information, APHIS posts the following on its website:

Annual reports from research facilities. Research facilities' annual reports include data on the species and numbers of animals held and used for research, categorized by the steps taken to minimize pain and distress to the animal. The annual reports also include the facility's explanation of any exceptions to the Animal Welfare Act's standards and regulations during the reporting year. As of April 2018, APHIS's website included research facilities' annual reports from fiscal years 1999 through 2017.

National summaries of the annual reports. APHIS prepares national summaries using the annual reports submitted by research facilities. APHIS's annual national-summary reports include data provided by research facilities on species and numbers of animals, categorized by state and by the steps taken to minimize pain and distress to the animal. As of March 2018, APHIS's website had national summary reports for fiscal years 2008 through 2016. The national summaries do not categorize the data by types of facilities, such as federal or nonfederal research facilities.

Reports of APHIS inspections. The APHIS inspection reports—typically of nonfederal facilities—could contain such information as descriptions of noncompliance, the number of animals involved in noncompliance, a correction deadline and a description of what should be done to correct the problem, and the date of the inspection. As of March 2018, APHIS's website contained reports of inspections at three federal facilities, including a zoo and an aquarium. This number does not include ARS research facilities, which APHIS inspects as part of its 2016 memorandum of understanding with ARS. As of March 2018, APHIS's website contained inspection reports for 19 ARS research facilities.

USDA's Chief Information Officer has provided guidance directing the department's agencies and offices to strive to ensure and maximize, among other things, the objectivity of information disseminated to the public.
To ensure objectivity, the guidance directs that USDA agencies and offices ensure that the information they disseminate is presented in an accurate, clear, complete, and unbiased manner. APHIS has not fully implemented this guidance for the animal use data it shares publicly. In particular, APHIS does not explain on its website potential limitations related to the accuracy and completeness of the annual reports that it provides to the public or in the national summaries of the annual reports that APHIS prepares. For example, APHIS does not explain that research facilities' annual reports may contain data on animals used for activities that are not covered by the Animal Welfare Act regulations, such as excluded field studies. Additionally, APHIS does not explain that the annual reports do not include birds not bred for research—which are consequently covered by the Animal Welfare Act—because APHIS has instructed facilities to not report any birds. Furthermore, APHIS does not explain that it does not validate the accuracy and completeness of agencies' reporting. In particular, APHIS officials told us that they have the opportunity to validate reporting when they inspect nonfederal facilities, but do not have the authority to inspect federal research facilities unless invited to do so. Some stakeholders responded to our survey that they use the data that APHIS reports on animal use to identify trends and practices within the research community. If APHIS fully implemented USDA guidance by explaining what the data represent and possible issues with their quality, the agency could have more assurance that it is providing these data to users in a manner that is as accurate, clear, complete, and unbiased as possible. Users could then be better equipped to properly analyze or assess the quality of the data, interpret the annual reports, and draw conclusions based on these data. NIH posts a list of federal and nonfederal facilities with active assurances on its website. The Health Research Extension Act does not require NIH to make such information available through a public website, but NIH policy directs the agency to provide to Public Health Service agencies a list of facilities with such assurances. The list includes facilities that receive Public Health Service funding and facilities that have voluntarily requested NIH's review and approval of their programs. Our review did not identify federal facilities that were missing from or incorrectly included in NIH's posted list of assured facilities. NIH does not regularly post other information—such as the facilities' average daily inventory of animals, the date they obtained an assurance, or the date they submitted their most recent annual report—from research facilities' assurance documents. Therefore, we did not review in detail the information that agencies provide to NIH to determine its accuracy.
Stakeholder Groups Have Differing Views on Whether Agencies Should Share Additional Animal Use Information with the Public
Federal agencies may have additional information about their animal use programs. However, stakeholders who responded to our survey had different views about whether federal agencies should proactively and routinely use their websites or other means to make more information on animal use available to the public than the data that APHIS and NIH currently provide.
Stakeholders other than animal advocacy organizations—including federal agencies, research organizations, academia, and others—generally expressed the view that federal agencies should not routinely make additional information available to the public, citing reasons including the existence of other methods to obtain this information and administrative burden. In contrast, stakeholders from animal advocacy organizations cited the need for more transparency and oversight as reasons that federal agencies should make additional information routinely available to the public, among other reasons. (See app. III for more information about stakeholders' responses to our questions.) More specifically, we asked stakeholders to provide their views on whether federal agencies should proactively and routinely report certain types of information to the public. We selected 10 types of information for stakeholders to consider, including some types of information that federal agencies may have for internal purposes and, in some instances, may provide to other agencies or organizations but that neither they nor others are required to proactively share with the public. The types of information we asked stakeholders to consider included data on vertebrate animals that are not covered by the Animal Welfare Act, internal or external inspection reports, and general descriptions of agencies' animal use programs. See table 2 for the complete list of types of information we asked stakeholders to consider. For stakeholder groups that generally expressed the view that federal agencies should not make additional information available to the public on a proactive and routine basis, one of the most frequently cited reasons was that the public could already obtain this information through other means. For example, several stakeholders said that agencies' reports of noncompliance to APHIS or NIH and data on resource expenditures are already available via Freedom of Information Act (FOIA) requests. One federal stakeholder said that it provides the public with information about the nature and extent of field research when it is required by the Marine Mammal Protection Act of 1972 or the Endangered Species Act of 1973 to obtain permits; the permitting processes include public notice and comment. In addition, some stakeholders said that certain types of information, such as the identity of the species used and the purpose and expected benefit of specific research projects, are already published in peer-reviewed journals that are accessible to the public. Several stakeholders also responded that providing additional information would impose an administrative burden on agencies. For example, several stakeholders said any potential public benefit from the additional information shared with the public would not justify the effort to collect and share the information, and one stakeholder said that providing certain types of information would reduce the time available to do actual research. In addition, one stakeholder said that a requirement to make additional information available to the public would be in direct conflict with a 2016 law that directed NIH, the Food and Drug Administration, and USDA to look for ways to reduce administrative burdens associated with animal welfare regulations.
Other, less frequently cited reasons that stakeholders gave for why agencies should not proactively and routinely share additional information with the public included the following:
Certain information, such as expenditures on animal use, could be difficult to collect from disparate sources. For example, one federal agency said that much of its animal use funding is allocated in different areas of research and that it would need guidance to collect data on expenditures separately from each area.
Disseminating information could jeopardize the security of facilities or personnel or disclose proprietary data. For example, one stakeholder said agency reports contain key details about federal research facilities that opposition groups could use to target personnel in those facilities.
Disseminating information could confuse the public unless appropriate context is provided. One stakeholder said that the passive dissemination of data on animal research on a website, without appropriate context, would potentially increase public confusion and invite misplaced scrutiny of animal use in federal research facilities.
For those stakeholder groups that generally expressed the view that federal agencies should make additional information available to the public on a proactive and routine basis, the most frequently cited reasons were the importance of transparency to allow the public to assess and understand animal use in federal research facilities and the need for oversight and accountability of federal agencies' use of animals. For example, some stakeholders responded that sharing additional information with the public would aid their efforts to monitor the reduction, refinement, and replacement of animals used in federal research. One stakeholder also mentioned that sharing additional information could be easily done on a website and would give the public a more complete picture of the use of animals by federal research facilities. Several stakeholders also expressed the need for greater oversight and accountability of federal agencies' use of animals. For example, two stakeholders said that making additional information available about the degree to which animals experience pain or distress would help them assess whether federal programs' animal use is in compliance with specific provisions related to pain and distress in the Animal Welfare Act. Stakeholder groups less frequently cited other reasons for favoring routine reporting, such as:
FOIA requests can take several months and sometimes years for agencies to fulfill.
Certain information, such as the number of all vertebrate animals used by each agency—including those not reported under the Animal Welfare Act—should be easy to disseminate because federal agencies already collect or compile it for internal purposes.
Additional reporting would align the federal government with other countries' practices. For example, according to one stakeholder, the European Union categorizes and publicly releases animal use numbers that are more detailed than those reported in the United States.
Conclusions
APHIS and NIH routinely collect information about federal agencies' research with vertebrate animals and provide the public with related information. Having access to this information can help the public observe trends in animal use in research and learn about facilities' compliance with standards of humane care.
Federal agencies met NIH’s requirements for reporting on their animal use, but the data federal agencies provided to APHIS were not always consistent or complete. This situation resulted in part from APHIS’s not providing sufficient instructions to federal research facilities for reporting on their use of animals covered by the Animal Welfare Act. In particular, APHIS instructs facilities to not report any birds in their annual reports, regardless of whether the birds are covered by the act. Although aware of this limitation, APHIS has not provided a schedule or plan for defining birds covered by the act or for developing reporting requirements for those birds. In addition, APHIS’s instructions have not sufficiently clarified two areas of confusion and differing understanding among federal agencies: first, activities that involve animal use outside the United States and, second, the specific conditions under which field studies are or are not covered by the act. APHIS plans to develop clarifying guidance on field studies and will publish the guidance for public comment. By defining the birds that need to be reported, by instructing federal research facilities to report research activities outside the United States, and by working with the research community to develop clear criteria for identifying field studies, APHIS would have greater assurance that the data it receives from research facilities fully reflect the activities covered by the Animal Welfare Act. APHIS has also not fully implemented the USDA’s information dissemination policy that calls for the department’s agencies to ensure information is presented in an accurate, clear, complete, and unbiased manner. In particular, APHIS does not explain issues related to the completeness and accuracy of the data it provides to the public, for example, issues such as inconsistencies in the types of field studies reported by federal agencies. By fully explaining these issues, the agency would improve users’ ability to accurately interpret and analyze the data. Recommendations for Executive Action We are making the following four recommendations to APHIS: The Administrator of APHIS should develop a timeline for (1) defining birds that are not bred for research and that are covered by the Animal Welfare Act, and (2) requiring that research facilities report to APHIS their use of birds covered by the act. (Recommendation 1) The Administrator of APHIS should instruct federal agencies to report their use of animals covered by the Animal Welfare Act in federal facilities located outside of the United States. (Recommendation 2) In developing the definition of field studies, the Administrator of APHIS should provide research facilities with clear criteria for identifying field studies that are covered by the Animal Welfare Act’s regulations and that facilities should report to APHIS as well as field studies that facilities should not report. (Recommendation 3) The Administrator of APHIS should ensure APHIS fully describes on its website how the agency compiles annual report data from research facilities, what the data represent, and any potential limitations to the data’s completeness and accuracy. (Recommendation 4) Agency Comments and Our Evaluation We provided a draft of this report to Commerce, Defense, HHS, DHS, Interior, USDA, VA, EPA, NASA, and the Smithsonian Institution. USDA and VA provided written comments on the draft, which are presented in appendixes IV and V, respectively. 
In its written comments, USDA said that APHIS provided planned corrective actions and timeframes for implementing three of our four recommendations; APHIS disagreed with one recommendation. In its written comments, VA said that the report’s conclusions were consistent with our findings. Regarding our first recommendation that the Administrator of APHIS develop a timeline for (1) defining birds that are not bred for research and that are covered by the Animal Welfare Act, and (2) requiring that research facilities report to APHIS their use of birds covered by the act, USDA stated that APHIS will submit a recommendation and timeline by September 30, 2018, to USDA officials regarding the development of a definition for birds. USDA’s comments did not specifically respond to our recommendation that APHIS also develop a timeline for requiring that research facilities report their use of birds covered by the act; we continue to believe that APHIS should develop such a timeline. USDA’s written comments stated that APHIS disagreed with our second recommendation that the Administrator of APHIS should instruct federal agencies to report their use of animals covered by the Animal Welfare Act in federal facilities located outside of the United States. USDA provided several reasons for the disagreement: USDA stated that the absence of an exclusion to the requirements of the Animal Welfare Act or its regulations for federal research located outside of the United States does not create a requirement to collect information about such facilities’ use of animals. However, the Animal Welfare Act regulations define a reporting facility to include a department, agency, or instrumentality of the United States. In addition, officials from USDA’s Office of the General Counsel told us that there is no exclusion in the act or its regulations for federal research facilities that are located outside the United States. We have no reason to believe that such facilities should be excluded from the requirements of the Animal Welfare Act or its implementing regulations. We also note that in February 2018, APHIS officials told us that if federal agencies’ activities involving animals outside of the United States are in fact covered by the Animal Welfare Act based on the specific facts and circumstances of their activities, they should report those activities to APHIS. USDA’s comments stated that the collection of information related to research activities outside of the United States does not enable or inform its daily administration of the Animal Welfare Act and its charge to ensure the humane treatment of animals. Rather, USDA stated that our recommendation would impose an additional regulatory burden on federal research facilities. As stated above, we have no reason to believe that such facilities should be excluded from the requirements of the Animal Welfare Act or its implementing regulations. Without such an exclusion, the regulatory burden already exists; our recommendation would simply have APHIS instruct federal agencies to meet that regulatory requirement. Finally, USDA commented that our recommendation would place APHIS in the position of collecting different information from “reporting facilities,” as defined in the regulations, which in turn, would impact any summary presentation of information involving the use of animals. 
We understand that, if our recommendation were implemented, APHIS may receive “different” information from federal and nonfederal facilities; that is, federal research facilities might report activities outside of the United States while nonfederal facilities would not. However, as stated above, we have no reason to believe that such facilities should be excluded from the requirements of the Animal Welfare Act or its implementing regulations. Without such an exclusion, activities covered by the Animal Welfare Act in federal facilities located outside of the United States must already be reported. We also note that, as we state in our fourth recommendation, APHIS should inform the public about the nature of its data. That information could include describing any differences in reporting by federal and nonfederal research facilities. For the reasons given above, we continue to believe that the Administrator of APHIS should instruct federal agencies to report their use of animals in activities covered by the Animal Welfare Act in federal facilities located outside of the United States. In response to our third recommendation that the Administrator of APHIS take certain steps to clarify the definition of field studies that are covered by the Animal Welfare Act, USDA stated that APHIS agreed to issue a guidance document by December 31, 2018. We appreciate APHIS’s commitment to issuing new guidance on field studies, but note that USDA’s written comments did not directly respond to the language in our draft recommendation that called for the agency to provide research facilities with clear examples of field studies that are covered by the Animal Welfare Act regulations. We also note that the Forest Service stated in technical comments that the extensive number and variation in wildlife species preclude providing specific examples of activities that meet a prescribed definition of a field study. The Forest Service suggested that we modify our recommendation to call for APHIS to provide criteria for how research facilities should determine which studies qualify as an exempted field study. We agreed with that suggestion and modified our recommendation to call on APHIS to provide research facilities with criteria to help research facilities determine which studies are covered by the Animal Welfare Act. APHIS agreed with our fourth recommendation that the Administrator of APHIS direct the agency to fully describe animal use data on its website. USDA’s comments stated that, beginning with the fiscal year 2017 summary activities, APHIS will describe how it compiles annual report data from research facilities, what the data represent, and any potential limitations to the data’s completeness and accuracy. USDA stated that APHIS will update the website with this information by September 30, 2018. In its written comments, VA stated that our overall descriptions of its animal research program were accurate. The agency also stated that it looks forward to a time when the use of animals in research is no longer needed, but until that time, the agency will use all necessary research strategies to reduce and prevent the suffering of veterans. APHIS, HHS, and DHS also provided technical comments, which we incorporated as appropriate. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of Agriculture, the Secretary of Commerce, the Secretary of Defense, the Secretary of Health and Human Services, the Secretary of Homeland Security, the Secretary of the Interior, the Secretary of Veterans Affairs, the Administrator of the Environmental Protection Agency, the Administrator of the National Aeronautics and Space Administration, the Secretary of the Smithsonian Institution, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact us at (202) 512-3841 or morriss@gao.gov or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.
Appendix I: Animal Species Used in Research by Federal Agencies in Fiscal Years 2014 through 2016
Federal agencies conduct research with animals for a variety of purposes, including to benefit human or animal populations. We identified 10 agencies that conducted research using vertebrate animals in fiscal years 2014, 2015, or 2016 with their own staff using their own facilities and equipment. Federal agencies also fund activities that use animals, meaning that the research is done by a nonfederal entity. However, we did not include those activities in our review. In the process of identifying federal agencies that conducted research with animals, we also identified the wide range of vertebrate animal species that these agencies used from fiscal years 2014 through 2016. In response to our survey of agencies, we learned that some agencies conducted research with a dozen or more animal species while others conducted activities with hundreds of species. For example, NASA reported to GAO that it used 16 species, while the National Museum of Natural History—one of the four animal research facilities within the Smithsonian Institution that responded to our survey—reported it conducted research on about 1,400 species. Table 3 shows groups of vertebrate species the 10 agencies reported to GAO that they used in research in fiscal years 2014 through 2016. Some of the species groups shown in table 3 are not covered by the Animal Welfare Act (i.e., amphibians, fish, and reptiles), while some animal species within a group may not be covered by the act. For example, farm animals are not covered by the Animal Welfare Act if researchers use them for agricultural purposes, such as improving animal nutrition, breeding management, or production efficiency, or for improving the quality of food or fiber, but are covered if researchers use them for human health purposes. Mice and rats are not covered by the Animal Welfare Act if they are of the genus Mus or Rattus and bred for use in research. Similarly, the act does not cover birds bred for use in research. Furthermore, agencies may have used animal species in a field study that is not covered by Animal Welfare Act regulations. Agencies are not required by the Animal Welfare Act to report their use of animals that are not covered by the act to the U.S. Department of Agriculture's Animal and Plant Health Inspection Service (APHIS). Nevertheless, the agencies are required by other policies and statutes to ensure that they treat those animals humanely.
Appendix II: List of Federal and Nonfederal Participants in GAO's Survey
Survey participants fell into the following categories: federal departments, agencies, and components; animal advocacy organizations; research and science organizations; academic stakeholders (speaking as individuals and not on behalf of their institutions); and other stakeholders, including the American Association for Laboratory Animal Science and AAALAC International (formerly known as the Association for Assessment and Accreditation of Laboratory Animal Care International).
Appendix III: Analysis of GAO's Survey about Whether Federal Agencies Should Publicly Provide Information on Their Use of Animals in Research, Testing, and Training
As described in this report, GAO conducted a survey of federal agencies and stakeholder groups regarding their opinions on whether federal agencies should proactively, routinely, and publicly share information about their use of animals through a website or other means. The graphics in this appendix illustrate the responses to our survey by stakeholder group. The stakeholder groups included 20 federal departments, agencies, and sub-agencies that conduct animal research on vertebrate species; eight animal advocacy organizations; six research and science organizations; and five other stakeholders, including individuals in academia and other knowledgeable entities. Stakeholders from federal agencies, research organizations, and academia and other entities, except animal advocacy organizations, generally expressed the view that federal agencies should not make additional information routinely available to the public. (See figs. 1, 2, and 3, respectively.) In contrast, animal advocacy organizations generally expressed the view that federal agencies should make additional information routinely available to the public. (See fig. 4.) Figure 5 provides examples of stakeholders' statements explaining their views on whether federal agencies should or should not provide additional information to the public. GAO also asked stakeholder groups in the survey about their opinion regarding whether the Animal and Plant Health Inspection Service (APHIS) should modify how it collects and posts annual report data under the Animal Welfare Act. Seventeen of 39 stakeholders responded that they would like to see changes to the way APHIS collects and posts annual report data. Specifically, all stakeholders from animal advocacy organizations and individuals in academia would like to see changes to how APHIS collects and posts annual report data, while some stakeholders from federal agencies and research and science organizations also noted that they would like to see changes. Table 4 provides examples of stakeholders' views and suggestions regarding such changes.
Appendix IV: Comments from the Department of Agriculture
Appendix V: Comments from the Department of Veterans Affairs
Appendix VI: GAO Contacts and Staff Acknowledgments
GAO Contacts
Staff Acknowledgments
In addition to the individuals named above, Mary Denigan-Macauley (Acting Director), Joseph Cook (Assistant Director), Ross Campbell (Analyst-in-Charge), Kevin Bray, Tara Congdon, Hayden Huang, Marc Meyer, Amber Sinclair, and Rajneesh Verma made key contributions to this report.
Why GAO Did This Study
Research facilities, including those managed by federal agencies, use a wide range of animals in research and related activities each year. The Animal Welfare Act and the Health Research Extension Act have varying requirements for federal agencies and others to protect the welfare of and report on the use of different research animals to APHIS and NIH. GAO was asked to review several issues related to animals used in federal research. This report examines (1) the extent to which APHIS and NIH have provided federal facilities with guidance for reporting their animal use programs, (2) the extent to which APHIS and NIH have shared agencies' animal use information with the public, and (3) stakeholder views on federal agencies' sharing additional information. GAO identified federal agencies that used vertebrate animals in research in fiscal years 2014 through 2016, reviewed their reports to APHIS and NIH, and examined publicly available data. GAO also surveyed a nongeneralizable sample of stakeholders from federal agencies and animal advocacy, research and science, and academic organizations.
What GAO Found
The Department of Health and Human Services' (HHS) National Institutes of Health (NIH) and the U.S. Department of Agriculture's (USDA) Animal and Plant Health Inspection Service (APHIS) have provided guidance to federal research facilities on what they must report about their animal use programs under the Health Research Extension Act and the Animal Welfare Act, respectively. Federal research facilities we reviewed met NIH's reporting instructions. However, APHIS's instructions have not ensured consistent and complete reporting in three areas: research with birds, activities outside the United States, and field studies outside a typical laboratory. By clarifying its instructions, APHIS could improve the quality of animal use data it receives from agencies. APHIS and NIH voluntarily share some information about agencies' animal research with the public. In particular, APHIS posts to its website data on agencies' annual use of animals covered by the Animal Welfare Act, and NIH publicly posts a list of research facilities with approved animal use programs. However, APHIS does not describe potential limitations related to the accuracy and completeness of the data it shares, as called for by USDA guidance. For example, APHIS does not explain that the data do not include birds used for activities that are covered by the Animal Welfare Act and may include field studies that are not covered by the act. APHIS could increase the data's usefulness to the public by making such disclosures. Federal agencies may have additional information about their animal use programs, including data on vertebrate species used but not reported to APHIS; the purpose of research activities; and internal inspection reports. However, stakeholders GAO surveyed had different views on agencies' sharing such data with the public. Some stakeholders, particularly animal advocacy organizations, cited the need for more transparency and oversight, while others, including federal agencies and research and science organizations, raised concerns about the additional administrative burden on agencies.
Source: GAO analysis of the Animal Welfare Act and the Office of Laboratory Animal Welfare's Public Health Service Policy on Humane Care and Use of Laboratory Animals. GAO-18-459. Note: The Health Research Extension Act covers research funded by the Public Health Service agencies of the U.S. government.
What GAO Recommends GAO recommends that APHIS clarify its reporting instructions and fully describe the potential limitations of the animal use data it makes available to the public. USDA stated that APHIS will take steps to implement GAO's recommendations, with the exception of clarifying reporting instructions for activities outside the United States. GAO continues to believe that APHIS needs to ensure complete reporting of such activities by federal facilities.
Background
Patient Record Matching
Patient record matching is the process of comparing patient information in different health records to determine if the records refer to the same patient. This matching generally relies on the use of demographic information such as a patient's name, date of birth (DOB), sex, Social Security number (SSN), or address, among other information. Many types of stakeholders can be involved in patient record matching. Examples of stakeholders include the following:
Health care providers, such as physicians, hospitals, and their staffs, may receive records from another provider that need to be matched to existing patient records. When treating a new patient, for example, a provider might obtain records from other providers that previously cared for the patient. Similarly, a provider caring for a patient with multiple chronic conditions (e.g., heart disease, diabetes) might obtain information from other providers that are also caring for the patient. The providers must ensure that the records they obtain from other providers are matched to the correct patient and therefore properly linked with the patient's existing records.
HIE organizations match patient records as part of their role in facilitating the electronic exchange of health information among hospitals, physicians, and other organizations. They can offer a range of services, such as allowing providers to access the medical records for a patient who has received care from other providers in the HIE organization's network. They may also obtain information from hospitals when a patient is admitted or discharged, and they then notify the patient's other providers when those events occur. In these cases, HIE organizations must accurately match records from multiple organizations to the correct patient. HIE organizations generally serve a specific state or region and match records among a network of local or state-wide providers and other entities; some, however, operate nationally.
Health IT vendors also play a role in matching patient records. Some IT vendors, for example, provide record matching tools as part of their EHR systems; these tools allow providers to electronically search for patient records that are available from other providers that use the same IT vendor. Other IT vendors offer tools that allow providers or HIE organizations to leverage third-party data, such as credit-bureau data, when matching patients' medical records.
Importance of Accurate Patient Record Matching
ONC and others have reported that the ability to accurately match patient medical records across different providers is a critical part of effective health information exchange, which can benefit patient care. For example, accurate record matching can help ensure that providers have current information about patients' laboratory or other diagnostic test results; their medications; their diagnosed medical conditions, such as allergies; and their family medical histories. In contrast, when a patient's records are not accurately matched, it can adversely affect the patient's care. There are two ways in which records can fail to be accurately matched.
Records for different patients are mistakenly matched. When medical records for different patients are mistakenly matched (known as a "false positive"), it can present safety and privacy concerns for patients. For example, a provider may inadvertently use information about the wrong patient, such as diagnoses or medication lists, to make clinical decisions.
In addition, if the wrong patient’s medical information is added to a patient’s record, it could result in disclosure of that information to a provider or patient who is not authorized to view it. Records for the same patient are not matched. When medical records for the same patient are not matched (known as a “false negative”), it can affect patient care. For example, providers may not have access to a relevant part of the patient’s medical history—such as current allergies or prior diagnostic test results—which could help them avoid adverse events and also provide more efficient care, such as by not repeating laboratory tests already conducted. ONC Responsibilities and Patient Record Matching ONC leads federal efforts to promote interoperability, including setting requirements for the information that EHRs and other health IT systems should collect. ONC developed certification criteria for EHRs and other health IT systems that include the ability for health IT systems to capture and exchange various types of information, including clinical data such as information on patients’ allergies, as well as the patient’s name, sex, and date of birth. ONC also compiles an Interoperability Standards Advisory, which suggests certain standards that developers should incorporate into their products. Stakeholders Described Patient Record Matching Approaches and Associated Challenges Providers and HIE Organizations Described Using Both Manual and Automated Approaches to Patient Record Matching All seven provider representatives we interviewed described manual matching as one of the ways that they match patient records when exchanging health information with other providers. With manual matching, an individual reviews a medical record in order to match it to the correct patient. For example, an outpatient practice representative said that to match records that the practice receives by fax, a staff member must manually review information such as name and DOB to identify the correct patient and add the new information to the correct patient’s electronic record. All of the provider representatives we interviewed told us that they receive health records from other providers by fax. Six provider representatives told us they also use health IT tools to help automatically identify and match patients’ records stored in other data systems. These tools generally use algorithms that compare demographic data in a patient’s separate electronic records. For example, representatives from four of the six providers told us they used a module offered by their EHR system vendor to match records and exchange information with other providers that use the same vendor’s EHR systems. The module includes an algorithm that compares patients’ demographic information and, if the information in two or more records is identical or very similar, can automatically link the records. Automated matching can also involve some degree of manual review, as algorithms can identify potential matches by providing information about the likelihood that two records with similar information refer to the same individual. Afterwards, provider staff manually review the demographic information in the records and assess whether these potentially matching records should be linked as belonging to the same patient. Representatives from the five HIE organizations we spoke with said they use a range of automated and manual approaches to match patients’ records when exchanging information. 
Representatives from all five of the HIE organizations said that they use software with algorithms to locate and match records using demographic information provided by the providers in their networks. Though these HIE organizations’ algorithms vary, they all use name, sex, DOB, and address to match patients’ records. Representatives said that when the patients’ records contain similar but not identical demographic information, the HIE organizations rely on staff or additional software to review potential matches and determine whether the records belong to the same patient. For example, one HIE organization representative said that his organization leverages third-party data, such as credit databases that store past names or addresses, to update demographic information for records that cannot be matched automatically. When describing their approaches to patient record matching when exchanging information, six of the seven provider representatives said that they sometimes used HIE organizations to exchange and match records. However, none of them relied on HIE organizations as their primary way to match records and exchange health information. Five of the provider representatives we spoke with, including one provider that does not participate in an HIE organization, noted that they only exchange health information with a few providers. They explained that they were able to connect to these providers in ways other than through an HIE organization. According to stakeholders we interviewed, it is difficult to determine the accuracy of the health IT tools used to match patients’ medical records automatically. While the algorithms typically match records belonging to a patient and identify potential matches that need to be manually reviewed, users of these algorithms do not know how many matches the algorithm may have failed to make. These stakeholders expressed concern that it is not possible to assess the accuracy of algorithms without independent testing to identify matches that the algorithm may have missed. HHS stated that the proprietary nature of many patient matching algorithms makes it difficult to assess their effectiveness. Stakeholders Said That Inaccurate, Incomplete, and Inconsistently Formatted Data Can Pose Challenges for Patient Record Matching Representatives from providers, HIE organizations, and the other stakeholders we interviewed emphasized the importance of using quality patient demographic data when matching patients’ medical records. These stakeholders noted that inaccurate, incomplete, or inconsistently formatted demographic information in patients’ medical records can make it challenging to identify and match all the records belonging to a single patient. Figure 1 illustrates how the demographic information for a hypothetical patient can be recorded inaccurately, incompletely, and inconsistently across the patient’s providers. Stakeholders described the ways in which providers or their staff can collect inaccurate demographic information from patients. According to stakeholders, provider staff sometimes make transcription errors when entering information into electronic records, patients do not always provide correct information (e.g., they register with a nickname rather than a legal name), and patient demographic information can change, such as when a patient moves to a new address or changes her last name, but this information is not consistently updated in all of the patient’s medical records. 
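To illustrate the kind of comparison these stakeholders described, the sketch below shows, in Python, how demographic elements such as name, sex, DOB, and address might be weighted and scored to produce an automatic match, a candidate for manual review, or no match. It is a minimal illustration only; the field names, weights, and thresholds are assumptions made for this example and do not reflect any vendor's, provider's, or HIE organization's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class PatientRecord:
    # Hypothetical demographic fields; real records capture many more.
    first_name: str
    last_name: str
    sex: str
    dob: str      # e.g., "1980-02-17"
    address: str

def field_score(a: str, b: str) -> float:
    """Crude similarity: 1.0 for an exact match, 0.5 if one value is a
    prefix of the other (e.g., "Rob" vs. "Robert"), 0.0 otherwise."""
    a, b = a.strip().lower(), b.strip().lower()
    if not a or not b:
        return 0.0
    if a == b:
        return 1.0
    if a.startswith(b) or b.startswith(a):
        return 0.5
    return 0.0

def match_score(r1: PatientRecord, r2: PatientRecord) -> float:
    # Illustrative weights only; actual products weight and combine
    # demographic elements in proprietary ways.
    weights = {"first_name": 0.2, "last_name": 0.25, "sex": 0.1,
               "dob": 0.3, "address": 0.15}
    return sum(w * field_score(getattr(r1, f), getattr(r2, f))
               for f, w in weights.items())

def classify(r1: PatientRecord, r2: PatientRecord) -> str:
    """Map a score to an outcome, mirroring the automatic links and
    manual-review queues described by stakeholders."""
    score = match_score(r1, r2)
    if score >= 0.9:
        return "automatic match"
    if score >= 0.6:
        return "manual review"
    return "no match"

if __name__ == "__main__":
    a = PatientRecord("Elizabeth", "Smith-Jones", "F", "1980-02-17", "12 Oak St")
    b = PatientRecord("Eliza", "Smith-Jones", "F", "1980-02-17", "12 Oak Street")
    print(classify(a, b))  # "manual review" with these weights and thresholds
```

The middle scoring band in this sketch corresponds to the manual-review step stakeholders described for records that are similar but not identical.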
Provider representatives identified several reasons that patients’ demographic information can be incomplete or contain different data elements across the medical records maintained by multiple providers. In particular, provider representatives explained that providers collect different information from their patients, and health IT systems can collect demographic data differently. Examples include the following: Two provider representatives said that their organizations do not collect patients’ SSN because many patients choose not to provide that information or the information is not available. However, other provider representatives said they do collect SSNs. A health IT vendor said that the algorithms in its software do not rely on SSN as a key factor for matching records because SSN is not consistently available. One provider representative explained that the IT system used by the provider’s laboratory does not contain fields for the same demographic information that the provider’s EHR system contains. As a result, laboratory results often contain too little information to reliably match records, even if the tests were ordered using complete information. One provider representative explained that they do not collect patients’ mothers’ maiden names, though other organizations collect and use this information for patient matching. According to stakeholders, the inconsistencies in formatting across medical records can reflect differences in health IT systems or the policies of the health care organization creating the records: A 2014 ONC report noted that one health IT system may list addresses in a single field, while another may separate street names from the city and state. A 2018 report noted that providers use different standards for recording names with spaces, hyphens, or apostrophes, and that some health IT systems include special characters in phone numbers (i.e., (123) 456-7890), whereas others only allow for numbers (i.e., 1234567890). Representatives from one HIE organization explained that providers handle missing data for fields differently; for example, one provider may enter all 9s into an SSN field when it is not available for a patient and another will enter all 0s. Provider representatives and other stakeholders identified some patient populations for which matching is particularly challenging, due in part to data issues. Three provider representatives said that medical records for newborns often contain temporary names that are not updated with the child’s legal name after it is determined, which makes it difficult to locate these records. Further, provider representatives and other stakeholders said that multiple births (e.g., twins) result in record matching challenges, as these children can have the same DOB and address, and may be named similarly. A few provider representatives said that records can be inaccurate across providers for patients from certain nationalities. For example, according to stakeholders, some east-Asian cultures use the “family name” as the first name, and some Hispanic cultures use multiple last names. Another provider representative said that a few times a month, a transgender patient’s photo ID lists the wrong gender, yet the organizational policy is to record the gender exactly as it appears on a state-issued photo ID. 
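Formatting differences of the kinds described above are often handled by normalizing data before records are compared. The sketch below illustrates that idea in Python; the specific rules (stripping hyphens and spaces from names, keeping only digits in phone numbers, treating all-9s or all-0s SSN values as missing, expanding a few address abbreviations) are assumptions drawn loosely from the examples stakeholders gave, not the rules of any particular provider, vendor, or HIE organization.

```python
import re
from typing import Optional

def normalize_name(name: str) -> str:
    """Lowercase, trim, and drop spaces, hyphens, and apostrophes so that
    "Smith-Jones", "Smith Jones", and "smithjones" compare as equal."""
    return re.sub(r"[\s\-']", "", name.strip().lower())

def normalize_phone(phone: str) -> str:
    """Keep digits only, so "(123) 456-7890" and "1234567890" compare as equal."""
    return re.sub(r"\D", "", phone)

def normalize_ssn(ssn: Optional[str]) -> Optional[str]:
    """Treat placeholder values (all 9s, all 0s, wrong length) as missing
    rather than as real, matchable identifiers."""
    digits = re.sub(r"\D", "", ssn or "")
    if len(digits) != 9 or digits in {"9" * 9, "0" * 9}:
        return None
    return digits

def normalize_address(address: str) -> str:
    """Lowercase, collapse whitespace, and expand a few common abbreviations.
    Real systems typically rely on postal-address standards instead."""
    addr = re.sub(r"\s+", " ", address.strip().lower())
    for abbrev, full in {"st": "street", "ave": "avenue", "rd": "road"}.items():
        addr = re.sub(rf"\b{abbrev}\b", full, addr)
    return addr

if __name__ == "__main__":
    print(normalize_name("Smith-Jones"))      # smithjones
    print(normalize_phone("(123) 456-7890"))  # 1234567890
    print(normalize_ssn("999-99-9999"))       # None (treated as missing)
    print(normalize_address("12 Oak St"))     # 12 oak street
```

Real systems typically go further, for example by using postal-address standardization services and nickname tables, but the principle is the same: records are compared on normalized values rather than on raw entries.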
Stakeholders Identified Efforts Underway to Improve Patient Record Matching as Well as Additional Efforts ONC and Others Could Undertake Stakeholders Have Undertaken Efforts to Improve the Demographic Data and Methods Used to Match Records Officials from ONC, selected provider representatives, and other stakeholders we interviewed described a variety of efforts they have undertaken or are currently undertaking to improve the ability to match patients’ medical records accurately. In general, these efforts focus on improving demographic data and improving the methods used for matching. These efforts are discussed in more detail below. Efforts to Improve Demographic Data Used for Matching ONC has reported that quality demographic data is important for effectively matching patients’ medical records, and in 2017 the agency published the Patient Demographic Data Quality Framework. The Framework is a tool to help providers and other organizations assess their processes for managing data quality and improve the quality of the demographic data they use in matching. It includes, for example, questions that providers can use to identify any gaps in how they manage their demographic data. In 2016, before ONC published the Framework, the agency began a pilot study to assess how the Framework could work in a clinical setting. As part of this pilot study, ONC provided training on demographic data quality to staff from two community health centers, during which it shared best practices for collecting these data. After the training, researchers who collaborated on the pilot with ONC found that there were improvements at the community health centers in indicators of how they managed data quality. According to ONC officials, this pilot highlighted the effect that data quality and training have on effective patient record matching. In addition, officials said it underscored difficulties in implementing data quality improvement efforts when health care organizations have limited resources and high staff turnover. ONC officials plan to issue a final report on the pilot study; however, they said ONC is not currently planning to assess the impact of the Framework or to conduct future studies on how it works in clinical settings. Several stakeholders told us they have worked to improve the consistency with which they record and format demographic data in their EHRs. According to ONC officials and hospital representatives, as well as other stakeholders with whom we spoke, implementing common standards for how certain demographic data should be formatted—such as names and addresses—could improve the consistency of data across providers and thus make it easier to match records. Representatives from four hospitals told us that they collaborated with other providers in their regions to implement common standards for recording patients’ demographic data. They told us the following: In 2017, 23 providers in Texas reached agreement on, and then implemented, standards for how staff should record patients’ names, addresses, and other data in order to improve record matching and facilitate health information exchange. We spoke with representatives from three hospitals that were part of this effort, who all told us that the effort resulted in an increased ability to accurately match patients’ medical records automatically without the need to manually review the records. (See text box.) 
For example, representatives from one hospital said that when patient records are not matched automatically or when there are questions about the accuracy of record matching, staff must then conduct a manual review to resolve the issue. They said that they have seen a significant decrease in the need for those manual reviews since implementing the data standards. Representatives from all three hospitals estimated that the amount of manual review to resolve matching issues and match incoming records to the right patient had decreased by about 90 percent. Representatives from one hospital added that they are now better able to prevent records from being matched to the wrong patient. One children's hospital in California worked with other local hospitals in recent years to implement a standard for how staff should record a temporary name for newborns who do not have their own name at birth. According to representatives from this hospital, after implementing this standard, clinical staff are able to more easily match patients' records and therefore have access to real-time information on the care newborns received in other hospitals.
Lessons Learned from One Regional Effort to Standardize Patient Demographic Data across Multiple Providers
In 2017, 23 providers in Texas implemented agreed-upon standards for capturing patient name, address, and other data. Representatives from three participating hospitals shared with us lessons for others interested in standardizing data, such as:
Allow sufficient time to get buy-in from staff and test for any downstream effects;
Communicate the benefits of standardizing data to clinical and administrative staff; and
Train staff on how to enter data, and then assess compliance to identify any opportunities for improvement.
In a related 2017 effort, Pew Charitable Trusts sponsored a study to measure how standardizing specific types of patient demographic data could improve patient record matching. As part of this study, researchers used four data sets to test the effect that standardizing patient names, addresses, DOBs, telephone numbers, and SSNs had on record matching accuracy. As of September 2018, the full findings from this study had not been published; however, according to Pew, the findings indicated that standardizing some demographic data, such as address, shows promise for increasing the likelihood that patients' records will be matched. Two stakeholders we spoke with have examined ways to boost patients' ability to electronically share data with their providers using smartphone applications or other tools. According to these stakeholders, these types of tools could improve the accuracy of the demographic data providers receive from patients, reduce manual data entry errors by providers' staff, and allow patients to update their information as changes occur, such as if they move. In 2015, the Workgroup for Electronic Data Interchange (WEDI) initiated a "Virtual Clipboard" project to explore the development of a mobile tool to automate the transmission of demographic, insurance, and clinical information to providers. WEDI representatives told us that they had engaged with stakeholders such as providers, vendors, patient advocates, and health plans about the potential benefits of such a tool, but had not yet identified organizations prepared to move forward with developing specific applications.
In 2017, Pew Charitable Trusts funded a RAND study on "patient-empowered" patient record matching approaches—specifically, to identify ways that patients could play an additional role in patient record matching and to select a promising solution for further development. In its August 2018 report, RAND proposed a solution in which patients could verify their mobile phone number and other identifying information with providers and then use a smartphone application to share this information with providers. Representatives from both WEDI and Pew told us that, when developing these types of tools, it is important to consider the practical implications for the providers that would need to be able to accept data in this way. For example, Pew representatives said that it would be important to understand whether these tools present any workflow challenges in provider settings, such as with any IT tools that providers would need to access the data stored via smartphone applications, or with the steps needed to incorporate that data into their EHR systems. Representatives from both organizations also noted that not all patients would be willing or able to use these types of tools to share data with providers. In addition, RAND reported on a range of security considerations for these types of tools. For example, RAND noted that a smartphone app that gathers health data—like its proposed patient matching solution—would introduce risk because it would contain private demographic and health information and would therefore be a target for individuals looking to steal data.
Assessing and Improving Matching Methods
Officials from ONC and other stakeholders described various efforts to assess and improve the effectiveness of the methods used in matching patients' medical records. These efforts include hosting competitions, conducting studies, and issuing guidance. For example, ONC officials described the following two efforts to improve patient record matching methods: In 2017, ONC held a Patient Matching Algorithm Challenge in which participants competed to develop an algorithm that most accurately matched patient records in a test data set. According to ONC officials, the goals of the exercise were to bring about greater transparency on the performance of existing patient record matching algorithms, spur the adoption of performance metrics for algorithm developers, and improve other aspects of patient record matching, such as resolving duplicate patient records. Over 140 teams used varying methods to match patient records using an ONC-provided test data set, and ONC selected six winning submissions based on various measures of matching accuracy. As of July 2018, ONC was analyzing data from the challenge to learn more about algorithm performance. Officials told us that the challenge highlighted limitations of commonly used matching algorithms and demonstrated that extensive manual review is often needed to accurately match patients' medical records. ONC officials also told us they plan to publish a report on their analysis of the challenge data. In 2017, ONC also conducted a patient record matching Gold Standard and Algorithm Testing pilot study. According to ONC officials, there is no widely used standard for assessing the accuracy of patient record matching algorithms, so the pilot was intended to create a data set with known duplicate records (that is, multiple records for the same individual) and then use it to evaluate how well a commonly used algorithm matched those records.
ONC officials told us that the pilot demonstrated how much effort is needed to evaluate the matching algorithms providers and others use, as well as the importance of using standard metrics to assess matching accuracy. ONC expects to issue a final report on the results of the study. Among the examples other stakeholders described were the following efforts to improve patient record matching methods: In 2018, the Sequoia Project published A Framework for Cross-Organizational Patient Identity Management to provide guidance to help providers and other types of health care entities improve patient record matching across organizations. The report, for example, suggests ways organizations can improve their matching algorithms, and it identifies practices that organizations can use to improve how they use patient demographic data and other information when matching records. Representatives from the Sequoia Project told us they plan to speak with organizations that have voluntarily adopted this guidance to learn how doing so affects record matching. These representatives also said they are looking into how ONC's Patient Demographic Data Quality Framework relates to their own framework, as it may be beneficial if there were a way to link these two efforts. HHS's Agency for Healthcare Research and Quality funded a study that began in 2017 to evaluate patient record matching approaches, with the goal of identifying different approaches to improving the accuracy of patient record matching algorithms. As part of this ongoing study, researchers are measuring how different changes to matching methods—including changes that have and have not been recommended or evaluated previously—improve matching accuracy. The study is expected to run through 2022. According to researchers, their initial work tested the use of different combinations of demographic data elements, among other things. They identified a modest improvement in the accuracy of matching algorithms, and determined that further research was needed. In 2016, the College of Healthcare Information Management Executives (CHIME) sponsored a National Patient ID Challenge that offered a monetary award for the development of a tool that matched patients' medical records with 100 percent accuracy. Although the challenge was not specific to matching patient records across providers, several CHIME members who were involved with the challenge told us that they hoped to identify a patient record matching approach that could be widely adopted and easily integrated into existing EHR and HIE platforms without significant cost. They noted the challenge also was an opportunity to encourage organizations to develop effective matching methods, and to identify a matching method that did not rely solely on demographic patient information. CHIME assessed submissions from a range of organizations, but suspended the challenge in November 2017, reporting that the effort did not achieve the results it had sought. CHIME members said that the challenge nonetheless helped draw attention to patient record matching issues. In addition, several stakeholders have worked to improve the matching of medical records specifically for newborns and multiple-birth siblings such as twins, for whom matching can be particularly challenging: Representatives we spoke with from one children's hospital told us they have implemented indicators in their EHR to highlight when a child has a twin or other multiple-birth sibling, so that staff know that another child has similar demographic information.
Representatives said that this helps prevent the medical records of one child from being incorrectly matched with the medical records of a sibling. In 2017, this hospital began working with its health IT vendor to explore the broader use of a multiple-birth indicator to improve the probability of accurate matching for the multiple-birth population across different vendors’ EHRs. The representatives said that while there is a standard indicator that can be used for multiple births, many organizations are not aware of it. In addition, one researcher we spoke with is studying how using information such as physicians’ names and parents’ demographic data could help address record matching challenges for newborns. As noted earlier, one children’s hospital worked with other local hospitals to implement a standard for how staff record a temporary name for newborns. Stakeholders Identified Additional Efforts That ONC or Others Could Undertake to Improve Patient Record Matching Stakeholders we spoke with said more could be done to improve the ability to accurately match patients’ medical records. The stakeholders identified several efforts that could improve matching, and had varying views on the roles ONC and others should play in these efforts. Among the examples of efforts stakeholders identified that could improve matching were implementing common standards for demographic data; developing a data set to test the accuracy of matching methods; sharing best practices and other resources; implementing a national unique patient identifier; and developing a public-private collaboration effort to improve patient record matching. Multiple stakeholders noted that no single effort would be sufficient to improve matching, given the factors that contribute to matching challenges. These potential additional efforts are described below. Implementing Common Standards for Recording Patients’ Demographic Data in Health IT Systems Several stakeholders told us that implementing common standards for recording patients’ demographic data in health IT systems could improve the ability of providers to match patients’ medical records. Stakeholders said that if providers implemented such standards, it could increase the extent to which they collect the same types of demographic data or use the same format for names and addresses as other providers, for example. However, stakeholders had differing views on how to reach agreement on and implement common standards among providers, as well as how feasible it would be to do so. Some said it would be helpful if ONC established requirements regarding demographic data—such as the types of data collected and how they are formatted—potentially through the EHR certification process. In contrast, other stakeholders saw an opportunity for industry organizations to voluntarily agree to implement standards for demographic data. Some stakeholders advocated for EHR vendors to take steps to standardize the data their products allow providers to collect. A representative from one hospital said that having demographic data standards built into EHRs could minimize the amount of time needed to train staff on how to format the data they collect—and then to monitor whether they format the data correctly. A number of stakeholders said that ONC could play a role in getting industry groups to agree on and implement common data standards. 
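To illustrate why common formatting conventions matter for automated matching, the following minimal sketch (written in Python purely for illustration) normalizes a few demographic fields before comparing two records. The field conventions, record values, and function names are our own assumptions rather than an adopted industry standard or any particular vendor's matching algorithm.

    def normalize(record):
        """Build a comparison key from demographic fields using one assumed convention."""
        last = record["last_name"].strip().upper().replace("-", " ")   # treat hyphenated and spaced surnames alike
        first = record["first_name"].strip().upper()
        dob = record["dob"]                                            # assumed to be recorded as YYYY-MM-DD
        return (last, first, dob)

    def same_patient(record_a, record_b):
        """Declare a match only when all normalized fields agree exactly."""
        return normalize(record_a) == normalize(record_b)

    # Two records for the same person, formatted differently by two hypothetical providers.
    registration = {"last_name": "Smith-Jones", "first_name": "ana", "dob": "1980-07-04"}
    referral = {"last_name": "SMITH JONES", "first_name": "Ana", "dob": "1980-07-04"}

    print(same_patient(registration, referral))   # prints True once both records are normalized

Without a shared convention (for example, if one system recorded only one of the two surnames, or recorded the birth date in a different format), even this simple exact comparison would fail, which is the kind of mismatch stakeholders described.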
ONC officials noted that as part of their role in coordinating health IT efforts, they have worked with industry groups in a number of ways and expect to continue their coordination efforts. Some stakeholders we spoke with told us that efforts to implement common demographic standards could face challenges, such as the following: Several said it could be difficult to reach consensus across various industry organizations on what standards to adopt and implement. Multiple stakeholders noted that patient preferences could affect the effectiveness of efforts to standardize data. Patients might not always be willing to provide some types of data even if providers wanted to collect it. For example, one provider noted that patients may want to use their middle name instead of their legal name. Some stakeholders said it could be time-intensive for providers to train their staff on how to collect data in accordance with standards, or that staff might not always follow the standards. For example, a representative from one hospital that implemented demographic standards told us that they continuously train staff and perform audits to ensure that staff follow those standards. Some said that EHR systems differ in how they allow staff to record demographic data, which can affect providers’ ability to implement standards. Some stakeholders said it can be costly for providers to update or upgrade their EHRs. Stakeholders cited other potential limitations of data standardization efforts. Several, for example, said that standardizing data would not prevent inaccurate or outdated data. In addition, some stakeholders did not think that data standardization would yield significant improvements. Developing a Data Set to Test the Accuracy of Methods Used to Match Patients’ Medical Records Several stakeholders told us that developing a standard data set that organizations could use to evaluate matching methods would be helpful. Stakeholders noted that such a data set would allow health IT vendors, providers, or others to assess matching methods independently (instead of relying on vendors’ reported accuracy rates, for example) and in a standardized way (by using the same data source). While stakeholders did not always specify who should develop such a data set, an official from one stakeholder involved with patient record matching and data sharing efforts said that the most useful thing ONC could do to address patient record matching would be to develop a master data set to allow testing in a uniform way. This official added that without a way to accurately and uniformly test patient record matching methods, efforts to improve patient record matching are hindered. A number of stakeholders did not specifically mention the utility of a data set, but nonetheless highlighted the importance of testing how well matching methods work. For its part, ONC officials said that the lack of a data set for evaluating matching methods is a challenge to efforts to improve matching, and that developing such a data set would be difficult. They noted that the agency’s 2017 Patient Matching Algorithm Challenge had highlighted the difficulties of creating a test data set that closely mimics real world patient data and that could be used to assess the accuracy of matching algorithms. ONC officials cited a number of challenges to developing one test data set for assessing a range of patient matching algorithms. 
For example, they said the data set would need to be very large; would require an extensive and expensive effort to develop; could be difficult to implement from a practical perspective; and that, because data varies widely across patient populations and organizations, might have limited application for assessing algorithms that are designed to match specific data sets. HHS also stated that the development of a data set would need to include a “key” of known duplicate patient records—that is, an indicator of which records in the data set should be matched to the same individual. Sharing Best Practices and Other Resources Used in Matching Patients’ Medical Records According to a number of stakeholders we spoke with, more could be done to encourage the sharing of best practices and other patient record matching resources. For example, representatives from some HIEs said it would be beneficial to bring organizations together to share lessons learned and collaborate on best practices for using patient data to match records. Representatives from one industry association noted that disseminating information on patient matching errors could help organizations better understand the extent of matching errors and what causes them; for example, if information were shared about whether certain data elements are more likely to cause matching errors or problems, then organizations could work to prevent the errors or problems related to those data elements. A few stakeholders said that efforts to identify and share effective matching algorithms could expand resources to a broader range of providers. While stakeholders did not always specify who they thought should identify and share matching resources, several stakeholders saw the potential for ONC to play a role in these types of efforts. For example, representatives from one industry association said that ONC could provide information about the types of identifiers that could be used to facilitate matching, such as cell phone numbers or driver’s license numbers. These representatives also said that ONC could provide information on how to address matching patient records for children and other individuals who might not have those types of identifiers. ONC officials noted that they have shared information and resources about patient matching in a number of ways, such as through the agency’s Patient Demographic Data Quality Framework. They added that other organizations, such as the Sequoia Project and Pew Charitable Trusts, have worked to communicate best practices in this area. Implementing a National Unique Patient Identifier A number of stakeholders noted that implementing a new national, unique patient identifier specifically for use in health care settings could improve the ability to match patients’ medical records. For example, having a new unique number assigned to an individual would reduce the reliance on demographic data for record matching, according to several stakeholders. However, stakeholders had differing views on the potential benefits and feasibility of implementing a new unique patient identifier for health care: Some stakeholders said that it is unlikely that any new identifier could be implemented nationwide; they cited reasons such as the prohibition on federal funds being used to develop a national unique health care identifier, as well as potential privacy concerns. 
Multiple stakeholders cited potential limitations to using a national patient identifier, noting for example that—as with SSNs—patients may not be willing to share their identifier, and identifiers could still be subject to manual data entry errors, data breaches, or fraud. Some stakeholders said that a unique identifier would be the most effective way to improve matching. However, others said they did not believe a new identifier was needed, or did not think a new identifier would significantly improve matching, given the potential limitations. HHS stated that health care systems currently rely on a number of identifiers to match patient records and that a new government-generated identifier would improve matching only if other technical and non-technical challenges were solved before it was implemented. The creation, transmission, and capture of a single national patient identifier across many systems could take decades and would encounter implementation difficulties, according to HHS. In addition, a few stakeholders said that patients might be willing to voluntarily obtain a unique identifier to use across health care settings if one were available. A representative from one provider association, for example, said that people with chronic conditions who obtain care from multiple providers might opt to obtain a unique identifier, if available, to help match their records. In its 2018 report on patient-empowered approaches to matching, RAND described various considerations for implementing a voluntary unique identifier issued by a non-federal entity. The report cited, for example, one organization’s work to develop a tool to allow health care providers to offer patients a unique identifier. RAND stated that although this solution would greatly improve matching if adopted, there is uncertainty that providers or patients would adopt it. Representatives from the organization that developed this tool told us that they had tested it in one location, but that it had not yet been adopted by providers. Developing a Public-Private Collaboration Effort to Improve Patient Record Matching Multiple stakeholders we spoke with saw a need for a collaborative public-private effort to help identify and implement efforts to improve patient record matching. For example, several stakeholders saw a specific need for a national strategy or approach for addressing patient record matching issues. Representatives from the Pew Charitable Trusts, for example, stated that a national strategy—led by the private sector, with the federal government providing support—could help reach consensus on ways to improve matching. In addition, one researcher said that ONC should help facilitate a strategy for addressing patient record matching at the provider, vendor, and national levels—and that it would be beneficial for ONC to foster collaboration among private sector organizations to address matching issues. More generally, representatives from several provider associations stated that ONC could play an important role by convening stakeholders to identify ways to improve patient record matching. As noted earlier, some stakeholders said that ONC could help industry groups agree on common data standards for EHRs. While some stakeholders we spoke with said that ONC should collaborate by supporting private-sector efforts to improve matching instead of directing those efforts, others said that ONC could potentially play more of a leadership role. 
Representatives from one HIE, for example, said that ONC could lead an overall effort to improve patient record matching and that private-sector organizations could lead specific actions within that larger effort. For their part, ONC officials said that public and private stakeholders should play a role in efforts to improve patient record matching. According to ONC officials, while the agency does not have sufficient resources to support broad implementation of efforts to improve patient record matching, ONC has collaborated with other stakeholders on various patient record matching issues. ONC’s August 2018 Interoperability Forum included a “patient matching track” where industry stakeholders, such as providers, health IT vendors, and researchers, discussed matching challenges and potential solutions. According to ONC officials, this track covered topics such as patient-empowered solutions to matching, including smartphone applications; issues when matching patient medical records across organizations; the development of consensus on patient matching definitions and metrics; and issues when matching records for pediatric patients. The outcomes of this track, according to ONC officials, were increased awareness of a range of patient matching issues; information sharing among speakers and participants; and an opportunity to network and potentially collaborate with individuals on patient matching issues. ONC officials told us that a takeaway for them was that while various approaches to patient matching—including technical approaches such as biometrics and referential matching; efforts regarding unique identifiers; and non-technical approaches such as data quality improvement efforts—may enhance the capacity for matching, additional research is needed. ONC participated in the Sequoia Project’s development of that organization’s Framework for Cross-Organizational Patient Identity Management. During the 2018 Interoperability Forum, ONC officials and Sequoia Project representatives presented together about developing consensus on patient record matching definitions and metrics. They discussed definitions outlined in the Framework and encouraged participants to work toward consensus and transparency when measuring and reporting matching metrics, such as by forming local and national workgroups, ONC officials said. Looking forward, ONC and some stakeholders said that the agency’s current effort to establish a national framework for exchanging health information electronically is an opportunity for the agency to address patient record matching challenges. As required by the 21st Century Cures Act, ONC is taking steps to develop or support a framework for ensuring the full exchange of health information among health information networks. ONC has referred to this effort as establishing a “network of networks,” and it includes the development of a common agreement among health information networks nationally, which providers and others can use to facilitate the exchange of electronic health information, including patients’ health records. As part of this effort, in January 2018, ONC issued a draft Trusted Exchange Framework that included principles for the trusted exchange of information, as well as minimum required terms and conditions for the Common Agreement. ONC plans to provide funding for an industry entity to incorporate these terms and conditions into a single Common Agreement that participating Qualified Health Information Networks (QHIN) and their participants voluntarily agree to adopt. 
While it is too soon to tell how this ONC effort will be implemented, several stakeholders said that it could potentially improve patient record matching if, for example, it results in new guidance or standards about demographic data elements. One HIE organization, for example, said that it would be beneficial if this effort leverages non-governmental work on matching and synthesizes this work into guidance for the industry. According to ONC officials, the framework is expected to affect patient record matching by requiring participating QHINs to use ONC’s Patient Demographic Data Quality Framework to evaluate their data practices. The agency plans to release a second draft Trusted Exchange Framework and then release a draft Common Agreement and an updated Trusted Exchange Framework for public comment. Agency Comments We provided a draft of this report to HHS for review and comment. HHS provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or farbj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Appendix I: GAO Contact and Staff Acknowledgments GAO Contact Jessica Farb, (202) 512-7114 or farbj@gao.gov. Staff Acknowledgments In addition to the contact named above, individuals making key contributions to this report include Thomas Conahan (Assistant Director), Robin Burke (Analyst-in-Charge), A. Elizabeth Dobrenz, Krister Friday, Monica Perez-Nelson, Vikki Porter, and Andrea Richardson.
Why GAO Did This Study Health care providers are increasingly sharing patients' health records electronically. When a patient's records are shared with another provider, it is important to accurately match them to the correct patient. GAO and others have reported that accurately matching patient health records is a barrier to health information exchange and that inaccurately matched records can adversely affect patient safety or privacy. At the federal level, ONC is charged with coordinating nationwide efforts to implement and use health IT. The 21st Century Cures Act included a provision for GAO to study patient record matching. In this report, GAO describes (1) stakeholders' patient record matching approaches and related challenges; and (2) efforts to improve patient record matching identified by stakeholders. To do its work, GAO reviewed reports by ONC and others about patient record matching. GAO also interviewed various stakeholders that play a role in exchanging health records, including representatives from physician practices, hospitals, health systems, health information exchange organizations, and health IT vendors. GAO also interviewed other stakeholders, such as ONC officials, provider and industry associations, and researchers. GAO selected stakeholders based on background research and input from other stakeholders, and interviewed 37 stakeholders in total. The information from stakeholders is not generalizable. HHS provided technical comments on a draft of this report, which GAO incorporated as appropriate. What GAO Found Stakeholders GAO interviewed, including representatives from physician practices and hospitals, described their approaches for matching patients' records—that is, comparing patient information in different health records to determine if the records refer to the same patient. Stakeholders explained that when exchanging health information with other providers, they match patients' medical records using demographic information, such as the patient's name, date of birth, or sex. This record matching can be done manually or automatically. For example, several provider representatives said that they rely on software that automatically matches records based on the records' demographic information when receiving medical records electronically. Stakeholders said that software can also identify potential matches, which staff then manually review to determine whether the records correspond to the same patient. Stakeholders also said that inaccurate, incomplete, or inconsistently formatted demographic information in patients' records can pose challenges to accurate matching. They noted, for example, that records don't always contain correct information (e.g., a patient may provide a nickname rather than a legal name) and that health information technology (IT) systems and providers use different formats for key information such as names that contain hyphens. Stakeholders GAO interviewed identified recent or ongoing efforts to improve the data and methods used in patient record matching, such as the following: Several stakeholders told GAO they worked to improve the consistency with which they format demographic data in their electronic health records (EHR). In 2017, 23 providers in Texas implemented standards for how staff record patients' names, addresses, and other data. Representatives from three hospitals said this increased their ability to match patients' medical records automatically. 
For example, one hospital's representatives said they had seen a significant decrease in the need to manually review records that do not match automatically. Stakeholders also described efforts to assess and improve the effectiveness of methods used to match patient records. For example, in 2017 the Office of the National Coordinator for Health Information Technology (ONC) hosted a competition for participants to create an algorithm that most accurately matched patient records. ONC selected six winning submissions and plans to report on their analysis of the competition's data. Stakeholders said more could be done to improve patient record matching, and identified several efforts that could improve matching. For example, some said that implementing common standards for recording demographic data; sharing best practices and other resources; and developing a public-private collaboration effort could each improve matching. Stakeholders' views varied on the roles ONC and others should play in these efforts and the extent to which the efforts would improve matching. For example, some said that ONC could require demographic data standards as part of its responsibility for certifying EHR systems, while other stakeholders said that ONC could facilitate the voluntary adoption of such standards. Multiple stakeholders emphasized that no single effort would solve the challenge of patient record matching.
Background Overview of the Scorecard Process According to SBA, the purposes of the scorecard program are to monitor government-wide performance in meeting small business contracting goals and to provide accurate and transparent information through the public reporting of small business procurement data for individual agencies and government-wide. SBA uses its scorecard methodology to calculate a numeric score for each agency annually. SBA then converts those numeric scores to letter grades on an A+ through F scale. Each year, SBA negotiates small business prime contracting goals with each federal agency with procurement authority such that, in the aggregate, the federal government meets its overall 23-percent goal for the percentage of prime contract dollars awarded to small businesses. In setting annual agency goals, SBA considers prior-year achievement and other factors. In addition to an overall prime contracting goal, Congress also established statutory contracting goals for various socioeconomic subcategories of small businesses. These small business subcategories are small disadvantaged businesses, women-owned small businesses, service-disabled veteran-owned small businesses, and businesses located in Historically Underutilized Business Zones (HUBZone). SBA does not negotiate agency-specific goals for prime contracting and subcontracting achievement within each small business socioeconomic subcategory. Instead, each agency’s goal is the same as the government-wide goals. Prime contracting and subcontracting achievement goals for each subcategory are shown in table 1 below. Procurement Data Systems SBA uses two government-wide data systems maintained by the General Services Administration (GSA) to measure agencies’ small business contracting activity. SBA uses the Federal Procurement Data System-Next Generation (FPDS-NG) to calculate agencies’ prime contracting awards to small businesses. Federal agencies are required to report to FPDS-NG all contracts whose estimated value is $3,500 or more, and FPDS-NG also records whether the contract was awarded to a small business. GSA requires that agencies annually certify the accuracy of data submitted. To measure subcontracting, SBA uses the Electronic Subcontracting Reporting System (eSRS), which captures data on spending on first-tier subcontracts, including spending directed to small businesses. Prime contractors that hold one or more government contracts totaling more than $700,000 are required to report their small business subcontracting activity in eSRS. Role of the OSDBUs In 1978, Congress amended the Small Business Act to require that all federal agencies with procurement powers establish an Office of Small and Disadvantaged Business Utilization (OSDBU). These offices are intended to advocate for small businesses in procurement and contracting processes, and thus work with agencies to achieve contracting goals. OSDBUs have multiple functions and duties that are codified in section 15(k) of the Small Business Act, as amended. In addition to their agency responsibilities, OSDBU directors serve with the SBA administrator or a designee on the Small Business Procurement Advisory Council, which was established in 1994. The council’s duties include identifying best practices for maximizing small business utilization in federal contracting and conducting peer reviews of each OSDBU to determine compliance with section 15(k). SBA has included the results of this peer review as part of its scorecard calculations for several years. 
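As a simplified illustration of the prime contracting achievement measure described in this Background section, the sketch below (in Python, using hypothetical obligation figures) computes the share of prime contract dollars awarded to small businesses and compares it with the 23-percent government-wide goal. It does not reproduce SBA's actual goaling methodology, which applies additional rules, such as exclusions from the baseline of eligible spending, that this sketch does not capture.

    GOVERNMENT_WIDE_GOAL_PCT = 23.0   # statutory government-wide small business prime contracting goal

    def achievement_pct(small_business_dollars, eligible_dollars):
        """Percent of eligible prime contract dollars awarded to small businesses."""
        return 100.0 * small_business_dollars / eligible_dollars

    # Hypothetical agency obligations, for illustration only.
    small_business_obligations = 2_300_000_000
    eligible_obligations = 9_000_000_000

    pct = achievement_pct(small_business_obligations, eligible_obligations)
    status = "met" if pct >= GOVERNMENT_WIDE_GOAL_PCT else "not met"
    print(f"Achievement: {pct:.1f} percent against a {GOVERNMENT_WIDE_GOAL_PCT} percent goal ({status})")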
SBA Made Several Revisions to the Scorecard for Fiscal Year 2017 but Has Not Completed a Plan to Evaluate Those Changes Scorecard Revisions Focused Largely on Mandated Changes SBA revised the scorecard methodology prior to fiscal year 2017 to make it consistent with changes required by the 2016 NDAA. Specifically, SBA reduced the proportion of the total scorecard results related to prime contracting performance from 80 percent to 50 percent and added an element to calculate changes in the number of small business prime contractors compared to the prior year. SBA officials said they considered, but did not add, a scorecard element that calculated changes in the number of small business subcontractors, which the 2016 NDAA required to be included if data were available. Officials said that unlike prime contracting data, which are validated by agencies, subcontracting data are recorded by the prime contractor and are based on contracting plans and not obligated federal funds. As a result, SBA officials said they determined that data were not available to implement this change. SBA also made other changes to the scorecard methodology, as the agency was permitted to do under the 2016 NDAA. SBA adjusted the weights of other scorecard elements, increasing subcontracting performance from 10 percent to 20 percent of the total scorecard result and increasing the peer review evaluation element from 10 percent to 20 percent. SBA also established that the new statutorily required element to assess changes in the number of prime contractors would be weighted at 10 percent. (See fig. 1 for a summary of revisions to the scorecard methodology.) Officials said they increased the subcontracting weight because it was an increasingly important area of small business procurement activity. In addition, SBA officials and other Small Business Procurement Advisory Council members revised the peer review evaluation methodology in an effort to facilitate a more in-depth review of agencies’ compliance with section 15(k) requirements. SBA included the results from this new peer review process in its revised scorecard methodology. Specifically, the council changed the peer review process in an effort to have peer reviewers make compliance determinations for categories that directly corresponded to the individual subparts of section 15(k). The prior peer review process asked reviewers to assign scores in seven areas, which the process termed “success factors.” For the fiscal year 2017 scorecard, SBA asked peer reviewers to assess and provide scores for 18 of the 21 individual subparts. Categories for the three remaining 15(k) subparts were incorporated starting with the fiscal year 2018 scorecard methodology. SBA officials said members of the Small Business Procurement Advisory Council were active participants in determining the revisions to the scorecard methodology. For example, SBA officials said the council members gave input on proposed revisions and recommended changes prior to the adoption of the new scorecard methodology. OSDBU directors also discussed potential methodological revisions in meetings of the Federal OSDBU Directors Interagency Council. SBA officials said the OSDBU directors’ input was incorporated into SBA’s revised scorecard guidance and, as a result, the criteria within the scorecard were more robust. 
Officials we interviewed from SBA and other agencies said the adopted scorecard revisions were the result of a consensus among Small Business Procurement Advisory Council members, although no formal votes were taken. Revisions to the scorecard methodology were outlined in a memorandum circulated to agencies in August 2016, about 8 weeks before the start of fiscal year 2017. SBA officials said that many agencies were tracking their progress toward goals using the revised methodology before results were issued. Agencies also had an opportunity to review preliminary scorecard results for fiscal year 2017 before the official scorecard results were published in May 2018. Fiscal Year 2017 Scorecard Outcomes Were Similar to Those of Prior Years Scorecard results under the revised methodology were similar to those of prior years. For example, in fiscal year 2017, the distribution of agencies’ letter grade results was similar to those of fiscal years 2014 through 2016, with between 19 and 21 of the 24 scored agencies achieving at least an A grade each year (see table 2). Prime contracting achievement. Agencies’ performance in small business prime contracting was similar in fiscal year 2017 and fiscal year 2016 (see table 3). In both years, 18 of 24 agencies met their overall prime contracting goals. In fiscal year 2017, 15 of 24 agencies met at least three of the four small business subcategory goals—one fewer than in fiscal year 2016. Subcontracting achievement. In fiscal year 2017, 15 of 24 agencies met their subcontracting goals compared to 16 of 24 in the prior year. However, among the small business subcategories, more agencies met at least three subcategory goals in 2017 (14 agencies) than in fiscal year 2016 (10 agencies) (see table 4). Peer review evaluations element. The fiscal year 2017 government-wide score for the peer review of section 15(k) compliance (a score of 19.25 out of a maximum 20.00) was nearly identical to the government-wide score for fiscal year 2016, once we adjusted for changes in the scoring scale between the 2 years. The government-wide score in fiscal year 2016 was 9.60 out of 10, which equates to 19.20 on a 20-point scale. Number of small business prime contractors. The overall number of small business prime contractors declined from 120,009 in fiscal year 2016 to 117,480 in fiscal year 2017, a decrease of approximately 2 percent. However, the 24 agencies, in aggregate, had more small business prime contractors in three of the four small business subcategories in fiscal year 2017 than in the prior year (see table 5). Comparison with prior scorecard weighting formula. We found that agencies’ numerical scores for fiscal year 2017 were generally lower under the revised scorecard methodology than they would have been under the fiscal year 2016 methodology’s weighting of scorecard elements. Twenty-two of 24 agencies had a lower score than they would have had under the prior methodology’s weighting. The revised methodology adjusted the weight of multiple scorecard elements, and there are a variety of reasons why an agency might have received a lower score than under the fiscal year 2016 methodology’s weighting. However, reducing the weight for prime contracting achievement under the revised methodology could explain at least part of the lower score for 21 of the 22 agencies. 
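As a simplified illustration of this weighting effect, the sketch below applies the two weighting formulas described in this report (80/10/10 percent for fiscal year 2016 and 50/20/20/10 percent for fiscal year 2017) to one set of hypothetical element scores. The element scores are invented for illustration; SBA's rules for scoring each element, which allow totals above 100 when goals are exceeded, are not reproduced here.

    # Weights come from the scorecard methodologies described in this report; element scores are hypothetical.
    fy2016_weights = {"prime": 0.80, "subcontracting": 0.10, "peer_review": 0.10}
    fy2017_weights = {"prime": 0.50, "subcontracting": 0.20, "peer_review": 0.20, "contractor_change": 0.10}

    # A hypothetical agency that scores relatively high on prime contracting and lower elsewhere.
    element_scores = {"prime": 120, "subcontracting": 95, "peer_review": 96, "contractor_change": 90}

    def composite(weights, scores):
        """Weighted sum of element scores, rounded to one decimal place."""
        return round(sum(weight * scores[element] for element, weight in weights.items()), 1)

    print(composite(fy2016_weights, element_scores))   # 115.1 under the prior weighting
    print(composite(fy2017_weights, element_scores))   # 107.2 under the revised weighting

Because the prime contracting element carries less weight under the revised formula, the same underlying performance yields a lower composite score in this example, consistent with the pattern observed for most agencies.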
The overall median score for fiscal year 2017 was about 7 points lower than it would have been under the weighting formula used in fiscal year 2016. (The median score for fiscal year 2017 scorecards was 111 and would have been 118 under the prior methodology’s weighting formula.) SBA Said It Was Preparing but Had Not Completed a Plan to Evaluate the Effects of Scorecard Revisions In June 2018, SBA officials told us they were not preparing a plan for evaluating the effects of scorecard revisions because they thought such a plan would be premature. At that time, SBA officials said they had identified some aspects of the revised methodology for further review, including two issues related to the peer review evaluations—the peer review scoring scale and whether agencies believed SBA’s requests for supporting information were reasonable. In July 2018, however, SBA officials said that, in response to our preliminary findings, they had begun to develop a plan for evaluating the revised scorecard methodology’s effects, if any, on meeting the government-wide procurement goals. The officials did not provide us a draft plan or details about the plan. They said they expected to complete the evaluation plan by October 2018 and to complete the evaluation itself by the end of December 2018. Federal internal control standards state that management should use quality information to achieve the entity’s objectives, such as those in an agency’s strategic plan. These standards also call for management to design control activities to achieve goals and respond to risks—for example, activities to monitor performance measures and indicators. SBA’s strategic plan includes an objective to ensure federal contract and innovation set-aside goals are met or exceeded. The agency uses scorecard results to measure progress toward meeting or exceeding the statutory goal of 23 percent for overall small business prime contracting. Scorecard results are also used to measure progress toward other goals for the small business socioeconomic subcategories. We have previously identified key attributes of effective program evaluation design, which include the following: clear criteria for making comparisons that would lead to strong, defensible evaluation conclusions; an established evaluation scope that would ensure that the evaluation is tied to its research questions, effectively defines the subject matter to be assessed, and can be completed in a timely fashion to meet reporting deadlines; clear and specific research evaluation questions that use terms that can be readily defined and measured; and carefully thought-out data and analysis choices, which can enhance the quality, credibility, and usefulness of the evaluation. A comprehensive evaluation of revisions to the scorecard that includes the key attributes outlined above could aid SBA officials in determining whether the revised scorecard provides better information and whether the scorecard revisions are designed and implemented appropriately. Such an evaluation also could assist SBA in understanding whether the scorecard revisions may contribute to maximizing contract dollars awarded to small businesses, which is one of the goals in SBA’s strategic plan. In addition, the 2016 NDAA requires that SBA report to Congress by March 31, 2019, about changes stemming from the revised methodology and recommend whether the scorecard program should continue or be further modified. 
Such an evaluation could also be used by SBA to inform its report to Congress and future decisions about the scorecard methodology and program. SBA Uses Available Procurement Data to Calculate Scorecard Outcomes, but the Process for Producing Scorecards Has Weaknesses Subcontracting Data Have Known Limitations That May Affect the Reliability of Scorecard Calculations The two data systems SBA uses to measure agencies’ small business contracting activity—FPDS-NG and eSRS—are the best available sources of procurement data for calculating scorecard results, according to SBA. However, eSRS has limitations that agency officials cited and that we have previously identified that could hinder the reliability of scorecard results on subcontracting. Federal law prohibits SBA from requiring agencies to use alternative data collection methods for the purposes of the scorecard calculations. GSA intends to replace both systems as part of an initiative to consolidate the functions of several existing data systems, according to GSA documents. As we reported in 2014, this new system is intended to better link prime contracting and subcontracting data. Agency officials we interviewed said eSRS has limitations that make it challenging to verify the accuracy of reported subcontracting activity, and we also have identified eSRS limitations in our prior work. Prime contractors are responsible for reporting their subcontracting activity to the federal government, and the self-reported nature of these data is a limitation that could hamper the accuracy of eSRS data, agency officials said. Although prime contractors generally are required to submit a plan describing planned subcontracting activity, officials explained that eSRS did not provide a method to allow agency officials to verify that actual subcontracting activity matched the levels described in prime contractors’ plans. In addition, not all prime contractors are required to file subcontracting plans. Exceptions to the requirement include, for example, when the prime contract is for goods or services worth $700,000 or less or if the prime contractor is exempt. Small business prime contractors are one example of an exempt group that is not required to prepare subcontracting plans. SBA officials added that measuring subcontracting activity also is challenging because there are no federal funds obligated for subcontracts. Therefore, the federal government does not have a verified record of who performed subcontracting work and the amount paid. In addition, our previous work has found that eSRS was not designed to provide a list of subcontractors associated with a particular prime contract and that linking small business subcontractors to prime contracts when there is a subcontracting plan that pertains to multiple contracts is especially difficult. Our previous work also has identified some limitations with FPDS-NG related to specific agencies and small business programs, although we have not more broadly assessed the reliability of the FPDS-NG data fields that SBA uses to compile scorecard results. For example, we found mismatches between certain accounting records from the Department of Veterans Affairs and data captured in FPDS-NG, and we identified challenges in using FPDS-NG data to monitor the eligibility of Alaska Native Corporations for certain small business contracts available to small disadvantaged businesses. 
However, officials from SBA and two departments we interviewed for this work said prime contracting data in FPDS-NG generally do not have the same weaknesses they identified with subcontracting data in eSRS. Errors in Published Scorecard Results Weaken Reliability and Perceived Integrity of Scorecard Program Scorecard results originally published by SBA on May 22, 2018, contained errors, including one agency scorecard published with an incorrect letter grade. SBA officials said they discovered the publication errors within approximately 2 days of publication and published corrected versions. However, these corrections occurred after SBA issued a public announcement highlighting the new results, and interested parties may have downloaded erroneous results prior to the corrected versions being posted on SBA’s website. We identified errors from SBA’s originally published scorecards independent of SBA’s determination that the agency had published scorecards containing errors. The errors we and SBA identified were concentrated in the scorecard for the Department of Education and the government-wide scorecard: The scorecard for the Department of Education showed an incorrect letter grade of A+, rather than the correct grade of A. The published scorecard also showed an incorrect overall numeric score. The Department of Education’s score for the peer review component of the scorecard was incorrect. The government-wide scorecard showed incorrect scores for changes in the number of women-owned small business contractors and the number of service-disabled veteran-owned small business contractors. SBA did not initially document on the corrected scorecards how they had been changed from the original scorecards. However, SBA later added documentation that the scorecards for the Department of Education and government-wide results had been corrected. SBA took this step after we inquired about the absence of documentation about revisions that had been made to the fiscal year 2017 scorecards. SBA officials said they performed accurate calculations for determining agencies’ performance and that inaccuracies in the published scorecards were the result of transcription errors associated with formatting the results for publication. Officials said SBA used new software to publish the fiscal year 2017 scorecards so that they could be accessible to visually impaired readers. Making the scorecards more accessible required some additional steps and at times required manual data entry due to limitations in SBA’s software. These additional steps resulted in errors, officials said. One set of errors—the inaccurate government-wide scores for changes in the number of women-owned small business contractors and the number of service-disabled veteran-owned small business contractors—canceled each other out and did not lead to erroneous overall scorecard results. SBA officials said they review the scorecard data and calculations before they are prepared for publication. However, the agency does not have a process to review formatted scorecards prior to publication to confirm that the version for publication matches actual calculations. Agency officials said they believed that such a process was not necessary. Additionally, agency officials said SBA has instituted a process to update previously issued scorecards to make them accessible for the visually impaired. SBA officials said they intend to review the accuracy of these updated scorecards for characteristics such as accurate letter grades as agency resources permit. 
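One possible form of the pre-publication review discussed in this report, offered only as an illustrative sketch and not as SBA's process, is an automated comparison of the values prepared for publication against the source calculations. The field names and values below are hypothetical.

    # Hypothetical source calculations and the values prepared for publication.
    source_calculations = {"letter_grade": "A", "overall_score": 104.6, "peer_review_score": 18.0}
    formatted_for_publication = {"letter_grade": "A+", "overall_score": 106.6, "peer_review_score": 20.0}

    # Flag any field whose prepared value does not equal the calculated value.
    discrepancies = {
        field: (source_calculations[field], formatted_for_publication.get(field))
        for field in source_calculations
        if source_calculations[field] != formatted_for_publication.get(field)
    }

    if discrepancies:
        for field, (calculated, prepared) in discrepancies.items():
            print(f"Mismatch in {field}: calculated {calculated}, prepared for publication {prepared}")
    else:
        print("Prepared scorecard matches source calculations.")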
Both the Office of Management and Budget and SBA have issued policies related to transparency and integrity of government data. The Office of Management and Budget has issued government-wide guidance on transparency in sharing government data and instructed federal agencies to develop their own policies. SBA’s policy on information quality says the policy is intended, in part, to ensure the integrity of information SBA disseminates. SBA’s policy also says the agency should have full, accurate, transparent documentation and should identify and disclose to users any error sources affecting data quality. In addition, federal internal control standards cite the need for management to design controls— including controls over information processing—to achieve objectives. Errors in the published scorecards may impair the other agencies’ or Congress’s access to quality information to make informed decisions and evaluate an agency’s performance in meeting small business goals. The scorecard errors that we and SBA identified after publication—and the lack of any indicator that scorecards had been corrected—also may undermine confidence in the integrity and transparency of the scorecard data. Agency Officials and Other Stakeholders Expected the Revised Scorecard to Have Little Impact on Small Business Opportunities Agency officials and representatives of small business groups we spoke with generally expected the revised scorecard methodology for fiscal year 2017 to have little impact on small business procurement opportunities. OSDBU officials in the four agencies we interviewed said their offices, in general, are not altering existing efforts at advocating for small business opportunities as a result of scorecard revisions. Some agency officials also said they would need additional years of scorecard data before making any changes to their efforts or reassessing how their priorities align with the revised scorecard’s formula. However, officials from one agency said they updated their agency’s internal monitoring of subcontracting activity as a result of the revised scorecard methodology’s increased emphasis on subcontracting measures. Officials said they updated the monitoring process so the agency would place more emphasis on small business subcontracting activity. Officials said the change to this agency’s internal monitoring process took effect for fiscal year 2018. Officials from three of the four federal departments and representatives from the three small business groups we interviewed said they had not seen any changes in opportunities for small business prime contracting as a result of the scorecard’s methodological changes. Instead, representatives from three small business groups and officials from two departments said any changes in prime contracting opportunities that might have occurred would be influenced by other government-wide procurement initiatives. Specifically, representatives from the three small business groups said the federal government’s emphasis on “category management” was resulting in fewer prime contracts available to all government contractors, including small business contractors. Under the category management initiative, the federal government groups commonly purchased goods and services into categories to streamline procurement processes with the goal of eliminating redundancies and reducing costs. However, representatives of small business groups said these policies result in fewer contract awards and opportunities for small businesses. 
Representatives from the three small business groups said that the new scorecard element that calculates the annual changes in the number of small business contractors could help highlight the effects of these prime contracting trends on procurement opportunities. According to agency officials and small business representatives, subcontracting opportunities are also unlikely to be impacted by the revised scorecard methodology, which increased the weight of subcontracting performance. Officials from two of the four departments we interviewed told us that their agencies have stable purchasing patterns and that subcontracting activity is not likely to change as a result of scorecard revisions. Representatives from two of the three small business groups said the influence of the scorecard revisions in incentivizing agencies to focus on subcontracting opportunities is limited by the reliability of available subcontracting data, discussed previously. For example, one agency told us that the shift from prime to subcontracting performance reduces the agency’s ability to influence scorecard outcomes because the agency has no means of validating the subcontracting data that are recorded. Similarly, representatives from two of the three small business groups said that because the data on subcontracting are entered by the prime contractors at the time of proposed contracting rather than confirmed contracting, the data do not include verification of subcontracting activity and therefore might not be an accurate measure of subcontracting activity. Representatives from agencies and small business groups said the scorecard program has generally played a role in drawing attention to agencies’ performance in identifying small business procurement opportunities. For example, SBA officials said the scorecard results provide public information about how well the government performed overall in providing small business procurement opportunities and help to ensure that all agencies are contributing toward those goals. Officials at one agency told us that the scorecard was an important factor in driving internal goals and opportunities for small businesses. Another agency said that while it had been reaching its overall prime contracting goal, its performance in certain small business subcategories was falling short of goals. As a result, the agency has directed additional outreach efforts to those types of small businesses. In addition, representatives of all three small business groups said because results are public, the scorecard has created additional pressure on agencies to meet procurement goals. Conclusions SBA uses its scorecard program to monitor federal agencies’ compliance with goals set by Congress to promote small business participation in federal contracting, and SBA has identified having agencies meet or exceed those participation benchmarks as one of its agency-wide goals in its strategic plan. The effects of recent changes to the scorecard and their potential benefits for improving federal contracting opportunities for small businesses are uncertain. SBA recently began to develop a plan for evaluating whether or how changes to the scorecard might facilitate SBA’s ability to meet government-wide procurement goals. 
Completing such an evaluation and making sure the evaluation plan is aligned with key attributes for effective evaluations could help SBA management: determine whether the revised scorecard provides quality information—consistent with federal internal control standards—and whether it helps meet the agency’s strategic goals; fully address whether the revisions are effective in measuring and creating small business procurement opportunities; and make a well-supported recommendation about whether to continue or modify the scorecard program. Congress required that SBA recommend by March 31, 2019, whether to continue or modify the scorecard program. In addition, the scorecard appears to have played a role in drawing attention to agencies’ performance in identifying small business procurement opportunities. However, there were errors in the initial fiscal year 2017 scorecards published on SBA’s website, and SBA did not initially take steps to notify the public after it made corrections. SBA officials said that SBA does not have a process to ensure that published scorecard results are accurate. Errors in the published scorecards and a lack of timely disclosure about corrections may impair other agencies’ or Congress’s access to quality information to make informed decisions. Recommendations We are making the following two recommendations to SBA: The SBA Administrator or her designee should complete the design and implementation of a comprehensive evaluation of the Small Business Procurement Scorecard aligned with key attributes of effective program evaluations to assess the effectiveness of the revised scorecard in measuring agency performance and promoting small business procurement opportunities. (Recommendation 1) The SBA Administrator or her designee should institute a process to review Small Business Procurement Scorecards for accuracy prior to publication and a mechanism for publicly identifying when issued scorecards have been revised. (Recommendation 2) Agency Comments and Our Evaluation We provided a draft of this report to SBA for review and comment. In written comments, reproduced in appendix II, SBA generally agreed with both of our recommendations. Regarding our recommendation that SBA design and implement an evaluation of the revised scorecard methodology, SBA said it planned to evaluate the changes to the scorecard methodology mandated by the 2016 NDAA. As discussed in our report, in revising the scorecard, SBA also made other changes not specifically mandated by the 2016 NDAA, such as increasing the emphasis on small business subcontracting activity and incorporating a revised peer review process to facilitate a more in-depth review of agencies’ compliance with section 15(k) requirements. As stated in our report, we recommend that SBA plan and implement an evaluation of all aspects of the revised scorecard methodology. SBA also indicated that it will not complete the evaluation until after it has validated data for the fiscal year 2018 procurement scorecard. We note that SBA can prepare an evaluation plan and begin to consider potential evaluation findings using available scorecard data from fiscal year 2017. We also note that our recommendation states that SBA’s evaluation plan should be aligned with the key attributes of effective evaluation design. 
Regarding our recommendation that SBA institute a process to review scorecards for accuracy prior to publication and a mechanism for publicly identifying when issued scorecards have been revised, SBA said it had taken several steps to revise the processes for publishing accurate scorecard results, including adding steps to compare the prepared scorecard documents to source documents prior to publication and to annotate any score corrections that are made to published scorecards. While we have not yet had the opportunity to assess SBA’s actions, the steps SBA describes in response to our recommendation could improve other agencies’ or Congress’s access to quality information. We will send copies to the Administrator of SBA and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology This report describes (1) revisions to the Small Business Procurement Scorecard (scorecard) methodology for fiscal year 2017 and results of the fiscal year 2017 scorecard, as well as the extent to which the Small Business Administration (SBA) plans to evaluate the effects of revisions; (2) the extent to which SBA’s revised scorecard methodology uses relevant and reliable information and SBA publishes accurate scorecards; and (3) views of selected federal agencies and industry stakeholders on the extent to which SBA’s revised scorecard methodology may encourage agencies to expand small business procurement opportunities. To examine the changes SBA made to the Small Business Procurement Scorecard and the rationale for these changes, we reviewed relevant documents, including the National Defense Authorization Act for Fiscal Year 2016, SBA’s descriptions of the prior and revised scorecard methodology, and revised peer review guidance used for the scorecard element that assesses compliance with section 15(k) of the Small Business Act. We also interviewed officials from SBA and four other agencies about the revisions to the scorecard calculation methodology, the peer review guidance, the process for providing input on scorecard revisions, and how revisions were implemented. The four agencies (the Departments of Agriculture, Defense, Energy, and Homeland Security) represented a judgmental, nongeneralizable sample of federal agencies with procurement powers, selected based on small business procurement volume, recent improvement in scorecard results, and level of participation in discussions with SBA and other agencies about potential changes to the scorecard. We also interviewed SBA officials about their plans to evaluate the effects of scorecard revisions on small business procurement opportunities and about their plans, if any, to evaluate the revised scorecard. In addition, we reviewed federal internal control standards and GAO’s key attributes for designing effective evaluations. We analyzed the distribution of agencies’ letter grade results (A+, A, B, C, D, and F) from the fiscal year 2017 scorecard and compared this distribution to fiscal years 2014 through 2016, which used a different scorecard methodology. 
We also reviewed the distribution of results of fiscal year 2017 individual scorecard elements—specifically, results of prime contracting achievement, subcontracting achievement, and peer reviews—and compared this distribution to results for fiscal year 2016. We compared agencies’ prime contracting and subcontracting performance against their small business procurement goals for fiscal years 2016 and 2017. To compare peer review results across years, we made adjustments to account for changes in the value of peer review results (raised from 10 points to 20 points from fiscal years 2016 to 2017). To adjust for this difference, we doubled the value of fiscal year 2016 scores to put both years’ scores on a 20-point scale. Finally, we compared actual fiscal year 2017 scorecard results to the results if SBA had used the 2016 scorecard weighting. To do this, we increased the weighting of fiscal year 2017 prime contracting results from 50 percent to 80 percent of each agency’s total scorecard grade, decreased the weight of subcontracting results from 20 percent to 10 percent, and decreased the weight of peer review results from 20 percent to 10 percent. We also excluded results from the new scorecard element calculating changes in the number of small business contractors, which was not part of the 2016 methodology. To examine the extent to which SBA’s revised scorecard methodology considers relevant and reliable information, we interviewed officials from SBA and the Departments of Agriculture, Defense, Energy, and Homeland Security. We reviewed documents describing the prior and revised scorecard methodology. We discussed limitations, if any, in the electronic data systems that capture government-wide data on prime contracting and subcontracting (which SBA uses to calculate those respective scorecard elements). We also reviewed our prior work that assessed these data systems. To assess the data reliability of the published scorecards, we reviewed them for obvious errors and interviewed SBA officials about the cause of errors we identified. We found the scorecards to be reliable for analyzing scorecard results for fiscal year 2017. We also compared SBA’s revised scorecard methodology against the agency’s policies on information quality and against GAO’s standards for internal control in the federal government. To collect views on the extent to which SBA’s revised scorecard methodology may encourage agencies to expand small business procurement opportunities, we interviewed officials from SBA and the four selected departments cited above, as well as representatives from three organizations representing the interests of small businesses. These three organizations were selected to represent a mix of small business types: one (The American Small Business Chamber of Commerce) represented all types of small businesses; one (Women Impacting Public Policy) represented a small business socioeconomic subcategory with a 5 percent goal for prime contracting and subcontracting (as a percentage of total prime contracting and subcontracting); and one (The Task Force for Veterans’ Entrepreneurship, also known as Vet-Force) represented a small business subcategory with a 3 percent goal for prime contracting and subcontracting. We conducted this performance audit from January 2018 to September 2018 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Small Business Administration Appendix III: GAO Contact and Staff Acknowledgments In addition to the contact named above, Andy Pauline (Assistant Director), Steve Robblee (Analyst in Charge), William Chatlos, Holly Hobbs, Marc Molino, Jessica Sandler, and Jennifer Schwartz made key contributions to this report.
Why GAO Did This Study Each year SBA produces a scorecard measuring federal contract spending allocated to small businesses. The 2016 NDAA included a provision for SBA to revise the scorecard's methodology and for GAO to evaluate the effects of those revisions for fiscal year 2017. This report discusses, among other things, (1) SBA's changes to the scorecard methodology and plans, if any, to evaluate the effects of these changes, (2) the extent to which SBA has processes to disseminate reliable information, and (3) views of selected stakeholders on the scorecard's effects on small business procurement opportunities. GAO analyzed SBA's prior and revised scorecard methodology and results and interviewed officials from SBA, four other federal agencies selected based on small business procurement volume and other attributes, and three groups representing the interests of small businesses. What GAO Found For fiscal year 2017, the Small Business Administration (SBA) revised the methodology for its Small Business Procurement Scorecard, which is used to assess federal agencies' progress toward small business procurement goals. SBA made revisions to address requirements specified in the National Defense Authorization Act for Fiscal Year 2016 (2016 NDAA). SBA (1) reduced the share of the total scorecard grade devoted to prime contracting achievement, which is the dollar amount of contracts awarded directly to small businesses, and (2) added an element calculating changes in the number of small businesses receiving prime contracts. SBA made two additional revisions—with input from other agencies' representatives—to increase the share of subcontracting achievement results and peer review of required activities designed to facilitate small business procurement (see figure). In July 2018, officials said they had begun developing a plan to evaluate the effects of the revised scorecard methodology but did not provide a draft plan. Conducting a well-designed and comprehensive evaluation could aid SBA in determining whether the scorecard is an effective tool for helping to achieve the agency's strategic goals. (Scorecard elements are expressed as a percentage of total scorecard grade.) The published fiscal year 2017 scorecards originally contained errors, including an incorrect grade and numeric score for one agency, and SBA does not have a process to ensure that scorecard results are published accurately. Although SBA later corrected the errors, the agency did not initially document that scorecards had been changed, which is inconsistent with SBA's policy on information quality. SBA officials said that errors occurred in the process of formatting scorecards for publication. Errors in the published scorecards—and the initial lack of disclosure about corrections—weaken data reliability and may undermine confidence in scorecard data. Agency officials and representatives of small business groups that GAO interviewed generally expected the scorecard revisions to have little impact on small business procurement opportunities. However, one agency's officials said they would focus more on tracking subcontracting activity as a result of changes to the scorecard. What GAO Recommends GAO is recommending that SBA (1) design and implement a comprehensive evaluation to assess scorecard revisions and (2) institute a process for reviewing scorecards for accuracy prior to publication and a mechanism for disclosing corrected information. SBA generally agreed with GAO's recommendations.
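To make the scorecard reweighting comparison described in appendix I concrete, the following is a minimal illustrative sketch, not SBA's scoring system or GAO's analysis code. The element scores are hypothetical, and assigning the remaining 10 percent of the fiscal year 2017 grade to the new element on changes in the number of small business contractors is our assumption.

```python
# Illustrative sketch of the scorecard weighting comparison described in
# appendix I. Element scores are hypothetical. The fiscal year 2017 weights
# (prime contracting 50 percent, subcontracting 20 percent, peer review
# 20 percent) follow the report; assigning the remaining 10 percent to the
# new element on changes in the number of small business contractors is an
# assumption. The fiscal year 2016-style weights (80/10/10, new element
# excluded) follow the comparison described in appendix I.

scores = {
    "prime_contracting": 95.0,   # hypothetical, on a 0-100 scale
    "subcontracting": 88.0,
    "peer_review": 90.0,         # FY2016 peer review points (0-10) would first
                                 # be doubled to the 20-point basis used in FY2017
    "contractor_change": 70.0,   # new FY2017 element; not in the FY2016 method
}

FY2017_WEIGHTS = {
    "prime_contracting": 0.50,
    "subcontracting": 0.20,
    "peer_review": 0.20,
    "contractor_change": 0.10,
}

FY2016_STYLE_WEIGHTS = {
    "prime_contracting": 0.80,
    "subcontracting": 0.10,
    "peer_review": 0.10,
}

def weighted_total(element_scores, weights):
    """Combine element scores into a single weighted scorecard score."""
    return sum(element_scores[name] * share for name, share in weights.items())

print(f"FY2017 weighting:       {weighted_total(scores, FY2017_WEIGHTS):.1f}")
print(f"FY2016-style weighting: {weighted_total(scores, FY2016_STYLE_WEIGHTS):.1f}")
```

Keeping the weights in a separate table makes it straightforward to apply alternative weightings, such as the fiscal year 2016 scheme, to the same element scores for comparison.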
Background MSPV-NG Program For over a decade, each of VA’s 170 medical centers used VHA’s legacy MSPV program to order medical supplies, such as bandages and scalpels. Many of those items were purchased using the Federal Supply Schedules, which provided medical centers with a great deal of flexibility. However, as we reported in 2016, this legacy program prevented VHA from standardizing items used across its medical centers and affected its ability to leverage its buying power to achieve greater cost avoidance. Standardization is a process of narrowing the range of items purchased to meet a given need, such as buying 10 varieties of bandages instead of 100, in order to improve buying power, simplify supply chain management, and provide clinical consistency. In part because of the legacy MSPV program’s limited standardization, VHA decided to transition to a new iteration, called MSPV-NG. The transition to MSPV-NG has been a major effort, involving the MSPV-NG program office, stakeholders from the VHA’s Procurement and Logistics Office and VA’s Strategic Acquisition Center (SAC)—a VA-wide contracting organization—and logistics and clinical personnel at every medical center. The program also includes hundreds of new contracts with individual supply vendors and a new set of prime vendor contracts to distribute the supplies. VA’s goals for the MSPV-NG program include (1) standardizing requirements for supply items for greater clinical consistency; (2) demonstrating cost avoidance by leveraging VA’s substantial buying power when making competitive awards; (3) achieving greater efficiency in ordering and supply chain management, including a metric of ordering 40 percent of medical centers’ supplies from the MSPV-NG formulary; and (4) involving clinicians in requirements development to ensure uniform clinical review of medical supplies. VHA launched the MSPV-NG program in December 2016, but allowed a 4-month transition period. After April 2017, medical centers could no longer use the legacy program. MSPV-NG now restricts ordering to a narrow formulary. VHA policy requires medical centers to use MSPV-NG—as opposed to other means such as open market purchase card transactions—when purchasing items that are available in the formulary. Supply Chain Practices Identified by Selected Leading Hospital Networks Leading hospital networks we spoke with have similar goals to VA in managing their supply chains, including clinical standardization and reduced costs. These hospital networks reported they analyze their spending to identify items purchased most frequently, and which ones would be the best candidates to standardize first to yield cost savings. The hospitals’ supply chain managers reported establishing consensus with clinicians through early and frequent collaboration, understanding that clinician involvement is critical to the success of any effort to standardize their medical supply chain. By following these practices, these hospital networks have reported they have achieved significant cost savings in some cases, and the potential for improved patient care, while maintaining buy-in from their clinicians. VHA’s Implementation of MSPV-NG Program Has Not Yet Achieved Its Goals VHA’s implementation of the MSPV-NG program—from its initial work to identify a list of supply requirements in early 2015, through its roll-out of the formulary to medical centers in December 2016—was not executed in line with leading practices.
Specifically, VHA lacked a documented program strategy, leadership stability, and workforce capacity for the transition, elements that, if in place, could have facilitated buy-in for the change throughout the organization. Further, the initial requirements development process and tight time frames contributed to ineffective contracting processes. As a result, VHA developed an initial formulary that did not meet the needs of the medical centers and has yet to achieve utilization and cost avoidance goals. VA made some changes in the second phase of requirements development to address deficiencies identified in the initial roll out. Key among these was to increase the level of clinical involvement, that is, to obtain input from the doctors and nurses at VA’s individual medical facilities. Despite changes aimed at improving implementation, the agency continues to face challenges that prevent the program from fully achieving its goals. VA’s Lack of an Overarching Strategy and Leadership Instability Were Obstacles to Effective Implementation of MSPV-NG VA did not document a clear overall strategy for the MSPV-NG program at the start and has not done so to date. About 6 months after our initial requests for a strategy or plan, a VHA official provided us with an October 2015 plan focusing on the mechanics of establishing the MSPV-NG formulary. However, this plan was used only within the VHA Procurement and Logistics Office and had not been approved by VHA or VA leadership. Leading practices for organizational transformation state that agencies must have well-documented plans and strategies for major initiatives (such as MSPV-NG) and communicate them clearly and consistently to all involved—which included VHA headquarters, the SAC, and all 170 medical centers. Without such a strategy, VA could not reasonably ensure that all stakeholders understood VHA’s approach for MSPV-NG and worked together in a coordinated manner to achieve program goals. In our November 2017 report, we recommended that the Director of the MSPV-NG program office, with input from SAC, develop, document, and communicate to stakeholders an overarching strategy for the program, including how the program office will prioritize categories of supplies for future phases of requirement development and contracting. VA agreed with this recommendation and reported it would have a strategy in place by December 2017. Leadership instability and workforce challenges also made it difficult for VA to execute its transition to MSPV-NG. Our work has shown that leadership buy-in is necessary to ensure that major programs like MSPV-NG have the resources and support they need to execute their missions. Due to a combination of budget and hiring constraints, and lack of prioritization within VA, the MSPV-NG program office has never been fully staffed and has experienced instability in its leadership. As of January 2017, 24 of the office’s 40 positions were filled, and program office officials stated that this lack of staff affected their ability to implement certain aspects of the program within the planned time frames. In addition, since the inception of MSPV-NG, the program office has had four directors, two of whom were acting and two of whom were fulfilling the director position while performing other collateral duties. For instance, one of the acting MSPV-NG program office directors was on detail from a regional health network to fulfill the position, but had to abruptly leave and return to her prior position due to a federal hiring freeze.
In our November 2017 report, we recommended that VHA prioritize the hiring of an MSPV-NG program director on a permanent basis. VA agreed with this recommendation and indicated a vacancy announcement would be posted by the end of 2017. The MSPV-NG Initial Requirements Development Process Had Limited Clinician Involvement and Did Not Prioritize Categories of Supplies The MSPV-NG program office initially developed requirements for items to be included in the formulary based almost exclusively on prior supply purchases, with limited clinician involvement. The program office concluded in its October 2015 formulary plan that relying on data from previous clinician purchases would be a good representation of medical centers’ needs and that clinician input would not be required for identifying which items to include in the initial formulary. Further, rather than standardizing purchases of specific categories of supplies—such as bandages or scalpels—program officials told us they identified medical and surgical items on which VA had spent $16,000 or more annually and ordered at least 12 times per year, and made those items the basis for the formulary. (A simplified illustration of this screening step appears at the end of this statement.) Officials said this analysis initially yielded a list of about 18,000 items, which the program office further refined to about 6,000 items by removing duplicate items or those that were not considered consumable commodities, such as medical equipment. This approach to requirements development stood in sharp contrast to those of the leading hospital networks we met with, which rely heavily on clinician input to help drive the standardization process and focus on individual categories of supplies that provide the best opportunities for cost savings. Requirements Development and Tight Time Frames Contributed to Ineffective Contracting Practices for Initial Formulary Based on the requirements developed by the program office, SAC began to issue competitive solicitations for the 6,000 items on the initial formulary in June 2015. Medical supply companies had responded to about 30 percent of the solicitations as of January 2016. In response, SAC officials said they conducted outreach, and some of these companies responded that VHA’s requirements did not appear to be based on clinical input and instead consisted of manufacturer-specific requirements that favored particular products rather than broader descriptions. Furthermore, SAC did not solicit large groups of related items, but rather issued separate solicitations for small groups of supply items—consisting of three or fewer items. This is contrary to industry practices of soliciting large groups of related supplies together. Therefore, according to SAC officials, some medical supply companies told them that submitting responses to SAC’s solicitations required more time and resources than they were willing to commit. By its April 2016 deadline for having 6,000 items on the formulary, SAC had been working on the effort for over a year and had established competitive agreements for about 200 items, representing about 3 percent of the planned items. Without contracts for the items on the formulary in place, VA delayed the launch of the MSPV-NG program until December 2016 and SAC began establishing non-competitive agreements in the last few months before the launch of MSPV-NG. As shown in figure 1, these non-competitive agreements accounted for approximately 79 percent of the items on the January 2017 version of the formulary.
While this approach enabled the MSPV-NG program office to establish the formulary more quickly, it did so at the expense of one of the primary goals of the MSPV-NG program—leveraging VA’s buying power to obtain cost avoidance through competition. Initial Formulary Did Not Meet Medical Center Needs, Resulting in Low Utilization of MSPV-NG and a Missed Opportunity to Leverage VA’s Large Buying Power Once VA’s MSPV-NG initial formulary was established in December 2016, each medical center was charged with implementing it. According to logistics officials we spoke with at selected medical centers, they had varying levels of success due, in part, to incomplete guidance from the program office. Without clear guidance, many medical centers reported they were unable to find direct matches or substitutes on the MSPV-NG formulary for a substantial number of items they routinely used, which negatively impacted utilization rates for the initial formulary. In our November 2017 report, we recommended that the Director of the MSPV-NG program office provide complete guidance to medical centers for matching equivalent supply items. VA agreed with this recommendation and indicated it would provide this guidance to medical centers by December 2017. According to SAC, as of June 2017, only about a third of the items on the initial version of the formulary were being ordered in any significant quantity by medical centers, indicating that many items on the formulary were not those that medical centers needed. Senior VHA acquisition officials attributed this mismatch to shortcomings in their initial requirements development process, as well as to problems with VA’s purchase data. VA had set a target that medical centers would order 40 percent of their supplies from the MSPV-NG formulary, but utilization was below this target, with a nationwide average across medical centers of about 24 percent as of May 2017. Specifically, Chief Supply Chain Officers—who are responsible for managing the ordering and stocking of medical supplies at six selected medical centers—told us that many items they needed were not included in the MSPV-NG formulary. Accordingly, we found that these six medical centers generally fell below VA’s stated utilization target. As shown in figure 2, among the six selected medical centers we reviewed, one met the target, while the remaining five were below 25 percent utilization. Instead of fully using MSPV-NG, the selected medical centers are purchasing many items through other means, such as purchase cards or new contracts awarded by their local contracting office, in part because they said the formulary does not meet their needs. These approaches run counter to the goals of the MSPV-NG program and contribute to VA not making the best use of taxpayer dollars. Greater utilization of MSPV-NG is essential to VA achieving the cost avoidance goal of $150 million for its supply chain transformation effort. Under the legacy MSPV program, the National Acquisition Center tracked cost avoidance achieved by comparing prices for competitively awarded MSPV supply contracts with prices available elsewhere. However, VHA officials stated that they are not currently tracking cost avoidance related specifically to MSPV-NG. In our November 2017 report, we recommended that the VHA Chief Procurement and Logistics Officer, in coordination with SAC, calculate cost avoidance achieved by MSPV-NG on an ongoing basis.
VA agreed with this recommendation and reported it would develop a new metric to measure cost avoidance by June 2018. VA Continues to Encounter Requirements Development and Contracting Challenges as It Works to Address MSPV-NG Shortcomings In Phase 2 of MSPV-NG, the program office has taken some steps to incorporate greater clinical involvement in subsequent requirements development, but both its requirements development and SAC’s contracting efforts have been hampered by staffing and schedule constraints. In the fall of 2016, the program office began to establish panels of clinicians to serve on MSPV-NG integrated product teams (IPT) assigned to the task of developing updated requirements for the second phase of the formulary. Program officials said they had difficulty recruiting clinicians to participate. We found that slightly more than half (20 of the 38) of the IPTs had begun their work to review items and develop updated requirements by the time the MSPV-NG program launched in December 2016. Staff on the IPTs had to complete their responsibilities by the end of March 2017 while simultaneously managing their regular workload as physicians, surgeons, or nurses. By early March 2017, the IPTs still had about 4,200 items to review. Faced with meeting this unrealistic time frame, the MSPV-NG program office had 9 IPT members travel to one location—with an additional 10 members participating virtually—to meet for 5 days to review the remaining items. Members told us that this time pressure limited the extent to which they were able to pursue the goal of standardizing supplies, and that their review ended up being more of a data validation exercise than a standardization review. VHA ultimately met this compressed timeline, but in a rushed manner that limited the impact of clinician involvement. In our November 2017 report, we recommended that the VHA Chief Procurement and Logistics Officer use input from national clinical program offices to prioritize its requirements development and standardization efforts beyond Phase 2 to focus on supply categories that offer the best opportunity for standardization and cost avoidance. VA agreed with this recommendation and stated it is in the process of finalizing guidance that will detail the importance of involving the national clinical program offices in MSPV-NG requirements development and standardization efforts. The SAC plans to replace the existing Phase 1 non-competitive agreements with competitive awards based on the Phase 2 requirements generated by the IPTs, but it may not be able to keep up with expiring agreements due to an unrealistic schedule. Because they were made on a non-competitive basis, the Phase 1 agreements were established for a period of 1 year. In order to keep the full formulary available, the SAC director said the staff must award 200 to 250 contracts before the Phase 1 agreements expire later this year. SAC officials acknowledged that it is unlikely that they will be able to award the contracts by the time the existing agreements expire. According to SAC officials, they are in the process of hiring more staff to deal with the increased workload. Further, the SAC division director told us that they canceled all outstanding Phase 2 solicitations in September 2017 due to low response rates, protests from service-disabled veteran-owned small businesses, and changes in overall MSPV-NG strategy. 
In our November 2017 report, we recommended that the MSPV-NG program office and SAC should establish a plan for how to mitigate the potential risk of gaps in contract coverage while SAC is still working to make competitive Phase 2 awards, which could include prioritizing supply categories that are most likely to yield cost avoidance. VA agreed with this recommendation and indicated it has developed a plan to mitigate the risk of gaps in contract coverage with short- and mid-term procurement strategies to ensure continued provision of medical and surgical supplies to VHA facilities. The department also stated that it plans to replace the current MSPV-NG contract and formulary process with a new approach where the prime vendor would develop the formulary. However, VA will likely face challenges in this new approach until it fully addresses the existing shortcomings in the MSPV-NG program. Chairman Roe, Ranking Member Walz, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contacts and Staff Acknowledgments If you or your staff have any questions about this statement, please contact Shelby S. Oakley at 202-512-4841 or OakleyS@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to the report on which this testimony is based are Lisa Gardner, Assistant Director; Emily Bond; Matthew T. Crosby; Lorraine Ettaro; Michael Grogan; Jeff Hartnett; Katherine Lenane; Teague Lyons; Roxanna Sun; and Colleen Taylor. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study VA spends hundreds of millions of dollars annually on medical supplies to meet the health care needs of about 7 million veterans. To provide a more efficient, cost-effective way for its medical centers to order supplies, the VA established the MSPV-NG program. The program's goals include involving clinicians in requirements development, leveraging buying power when making competitive awards, and consolidating supplies used across medical centers. VA began developing requirements in early 2015 and launched the program in December 2016. This testimony summarizes key information contained in GAO's November 2017 report, GAO-18-34 . Specifically, it addresses the extent to which VA's implementation of MSPV-NG has been effective in meeting program goals. GAO analyzed VA's requirements development and contracting processes, and identified key supply chain practices cited by four leading hospital networks. GAO also met with contracting and clinical officials at six medical centers, selected based on high dollar contract obligations in fiscal years 2014-2016 and geographic representation. What GAO Found The Department of Veterans Affairs (VA) established the Medical Surgical Prime Vendor-Next Generation (MSPV-NG) program to provide an efficient, cost-effective way for its facilities to order supplies, but its initial implementation did not have an overarching strategy, stable leadership, and workforce capacity that could have facilitated medical center buy-in for the change. VA also developed requirements for a broad range of MSPV-NG items with limited clinical input. Further, starting in June 2015, VA planned to award competitive contracts, but instead, 79 percent of the items available for purchase under MSPV-NG were added through non-competitive agreements. (See figure). As a result, the program did not meet the needs of medical centers, and usage remained below VA's 40 percent target. (See figure.) VA has taken steps to address some deficiencies and is developing a new approach to the program. However, VA will likely continue to face challenges in meeting its goals until it fully addresses these existing shortcomings. What GAO Recommends GAO made 10 recommendations in its November 2017 report, including that VA develop an overarching strategy, expand clinician input in requirements development, and establish a plan for awarding future competitive contracts. VA agreed with GAO's recommendations.
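The screening criteria described in this statement for building the initial MSPV-NG formulary (medical and surgical items with at least $16,000 in annual spending that were ordered at least 12 times per year) amount to a simple filter over historical purchase data. The sketch below is our illustration only, not VA's actual analysis; the record layout, field names, and sample items are assumptions.

```python
# Illustrative sketch of the initial formulary screening described in this
# statement: medical and surgical items with at least $16,000 in annual
# spending and at least 12 orders per year. The record layout, field names,
# and sample items are assumptions for demonstration only.

MIN_ANNUAL_SPEND = 16_000.00
MIN_ORDERS_PER_YEAR = 12

# Hypothetical purchase records: (item_id, annual_spend_dollars, orders_per_year)
purchase_history = [
    ("bandage-4in", 42_000.00, 310),
    ("scalpel-no10", 18_500.00, 25),
    ("suture-kit-a", 15_200.00, 40),   # fails the spend threshold
    ("exam-glove-l", 9_800.00, 600),   # fails the spend threshold
    ("iv-set-std", 66_000.00, 8),      # fails the order-frequency threshold
]

def formulary_candidates(records):
    """Return item IDs that meet both the spend and order-frequency thresholds."""
    return [
        item_id
        for item_id, annual_spend, orders_per_year in records
        if annual_spend >= MIN_ANNUAL_SPEND and orders_per_year >= MIN_ORDERS_PER_YEAR
    ]

print(formulary_candidates(purchase_history))  # ['bandage-4in', 'scalpel-no10']
```

In practice, the resulting list would still require the additional steps the program office described, such as removing duplicate items and items that are not consumable commodities.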
Background Chemical attacks have emerged as a prominent homeland security risk because of recent attacks abroad using chemical agents and the interest of ISIS in conducting and inspiring chemical attacks against the West. DHS’s OHA officials have stated that nationwide preparedness for a chemical attack is critical to prevent, protect against, mitigate, respond to, and recover from such an attack because it could occur abruptly, with many victims falling ill quickly, and with a window of opportunity of a few hours to respond effectively. Also, recent incidents in Malaysia and the United Kingdom demonstrate that chemical agents can be used to target individuals and can contaminate other individuals near the attack area. Chemicals that have been used in attacks include chlorine, sarin, and ricin, all of which can have deadly or debilitating consequences for individuals exposed to them; see figure 1. Laws and Presidential Directives Guiding DHS’s Chemical Defense Efforts Various laws guide DHS’s efforts to defend the nation from chemical threats and attacks. For example, under the Homeland Security Act of 2002, as amended, the Secretary of Homeland Security, through the Under Secretary for Science and Technology, has various responsibilities, to include conducting national research and developing, testing, evaluating, and procuring technology and systems for preventing the importation of chemical and other weapons and material; and detecting, preventing, protecting against, and responding to terrorist attacks. Under former Section 550 of the DHS Appropriations Act, 2007, DHS established the CFATS program to, among other things, identify chemical facilities and assess the security risk posed by each, categorize the facilities into risk-based tiers, and inspect the high-risk facilities to ensure compliance with regulatory requirements. DHS’s responsibilities with regard to chemical defense are also guided by various presidential directives promulgated following the September 11, 2001, terror attacks against the United States; see table 1. Our Work on Duplication, Overlap, and Fragmentation of Federal Programs In 2010, Public Law 111-139 included a provision for us to identify and report annually on programs, agencies, offices, and initiatives—either within departments or government-wide—with duplicative goals and activities. In our annual reports to Congress from 2011 through 2018 in fulfillment of this provision, we described areas in which we found evidence of duplication, overlap, and fragmentation among federal programs, including those managed by DHS. To supplement these reports, we developed a guide to identify options to reduce or better manage the negative effects of duplication, overlap, and fragmentation, and evaluate the potential trade-offs and unintended consequences of these options. In this report, we use the following definitions: Duplication occurs when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries. Overlap occurs when multiple programs have similar goals, engage in similar activities or strategies to achieve those goals, or target similar beneficiaries. Overlap may result from statutory or other limitations beyond the agency’s control. Fragmentation occurs when more than one agency (or more than one organization within an agency) is involved in the same broad area of national interest and opportunities exist to improve service delivery. 
DHS Has Several Chemical Defense Programs and Activities Intended to Prevent and Protect Against Chemical Attacks DHS manages several programs and activities designed to prevent and protect against domestic chemical attacks. Prior to December 2017, for example, three DHS components—OHA, S&T, and NPPD—had specific programs and activities focused on chemical defense. In December 2017, DHS created the CWMD Office, which, as discussed later in this report, consolidated the majority of OHA and some other DHS programs and activities intended to counter weapons of mass destruction such as chemical weapons. Other DHS components—such as CBP, the Coast Guard, and TSA—have chemical defense programs and activities as part of their broader missions. These components address potential chemical attacks as part of an all-hazards approach to address a wide range of threats and hazards. Appendix I discusses in greater detail DHS’s programs and activities that focus on chemical defense, and appendix II discusses DHS components that have chemical defense responsibilities as part of an all-hazards approach. Table 2 identifies the chemical defense responsibilities of each DHS component, and whether that component has a specific chemical defense program or an all-hazards approach to chemical defense. Figure 2 shows that fiscal year 2017 funding levels for three of the programs that focus on chemical defense totaled $77.3 million. Specifically, about $1.3 million in appropriated funds was available for OHA for its Chemical Defense Program activities and S&T had access to about $6.4 million in appropriated funds for its Chemical Security Analysis Center activities. The CFATS program had access to about $69.6 million in appropriated funds—or 90 percent of the $77.3 million for the three programs—to regulate high-risk facilities that produce, store, or use certain chemicals. OHA officials stated that their efforts regarding weapons of mass destruction over the last few years had focused mostly on biological threats rather than chemical threats. For example, $77.2 million in fiscal year 2017 appropriated funds supported OHA’s BioWatch Program to provide detection and early warning of the intentional release of selected aerosolized biological agents in more than 30 jurisdictions nationwide. By contrast, as stated above, OHA and S&T had access to about $7.7 million in fiscal year 2017 appropriated funds for chemical defense efforts. We could not determine the level of funding for components that treated chemical defense as part of their missions under an all-hazards approach because those components do not have chemical defense funding that can be isolated from funding for their other responsibilities. For example, among other things, CBP identifies and interdicts hazardous chemicals at and between ports of entry as part of its overall mission to protect the United States from threats entering the country. A Chemical Strategy and Implementation Plan Would Enhance DHS Efforts to Integrate and Coordinate Its Chemical Defense Programs and Activities DHS’s chemical defense programs and activities have been fragmented and not well coordinated, but DHS recently created the CWMD Office to, among other things, promote better integration and coordination among these programs and activities. While it is too early to tell the extent to which this new office will enhance this integration and coordination, developing a chemical defense strategy and related implementation plan would further assist DHS’s efforts. 
DHS’s Efforts to Address Chemical Attacks Have Been Fragmented and Not Well Coordinated DHS’s chemical defense programs and activities have been fragmented and not well coordinated across the department. As listed in table 2 above, we identified nine separate DHS organizational units that have roles and responsibilities that involve conducting some chemical defense programs and activities, either as a direct mission activity or as part of their broader missions under an all-hazards approach. We also found examples of components conducting similar but separate chemical defense activities without DHS-wide direction and coordination. OHA and S&T—two components with specific chemical defense programs—both conducted similar but separate projects to assist local jurisdictions with preparedness. Specifically, from fiscal years 2009 to 2017, OHA’s Chemical Defense Program conducted chemical demonstration projects in five jurisdictions—Baltimore, Maryland; Boise, Idaho; Houston, Texas; New Orleans, Louisiana; and Nassau County, New York—to assist the jurisdictions in enhancing their preparedness for a large-scale chemical terrorist attack. According to OHA officials, they worked with local officials in one jurisdiction to install and test chemical detectors without having department-wide direction on these detectors’ requirements. Also, according to S&T officials, the Chemical and Biological Defense Division worked with three jurisdictions in New York and New Jersey to help them purchase and install chemical detectors for their transit systems beginning in 2016, again without having department-wide direction on chemical detector requirements. The Secret Service, CBP, and the Coast Guard—three components with chemical defense activities that are part of their all-hazards approach—also conducted separate acquisitions of chemical detection or identification equipment, according to officials from those components. For example, according to Secret Service officials, the agency has purchased chemical detectors that agents use for personal protection of protectees and for assessing the safety of designated fixed sites and temporary venues. Also, according to CBP officials, CBP has purchased chemical detectors for identifying chemical agents at ports of entry nationwide. Finally, according to Coast Guard officials, the agency has purchased chemical detectors for use in maritime locations subject to Coast Guard jurisdiction. Officials from OHA, S&T, and the CWMD Office acknowledged that chemical defense activities had been fragmented and not well coordinated. They stated that this fragmentation occurred because DHS had no department-wide leadership and direction for chemical defense activities. We recognize that equipment, such as chemical detectors, may be designed to meet the specific needs of components when they carry out their missions under different operating conditions, such as in an enclosed space by CBP or on open waterways by the Coast Guard. Nevertheless, when fragmented programs and activities that are within the same department and are responsible for the same or similar functions are executed without a mechanism to coordinate them, the department may miss opportunities to leverage resources and share information that leads to greater effectiveness. DHS Has Begun to Consolidate Some Chemical Defense Programs and Activities As discussed earlier, DHS has taken action to consolidate some chemical defense programs and activities.
Specifically, in December 2017, DHS consolidated some of its chemical, biological, radiological, and nuclear defense programs and activities under the CWMD Office. The CWMD Office consolidated the Domestic Nuclear Detection Office; the majority of OHA; selected elements of the Science and Technology Directorate, such as elements involved in chemical, biological, and integrated terrorism risk assessments and material threat assessments; and certain personnel from the DHS Office of Strategy, Policy, and Plans and the Office of Operations Coordination with expertise on chemical, biological, radiological, and nuclear issues. According to officials from the CWMD Office, the fiscal year 2018 funding for the office is $457 million. Of this funding, OHA contributed about $121.6 million and the Domestic Nuclear Detection Office contributed about $335.4 million. Figure 3 shows the initial organizational structure of the CWMD Office as of June 2018. As of July 2018, according to the Assistant Secretary of CWMD, his office, supported by DHS leadership, is working to develop and implement its initial structure, plans, processes, and procedures. To guide the initial consolidation, officials representing the CWMD Office said they plan to use the key practices for successful transformations and reorganizations identified in our past work. For example, they noted that they intend to establish integrated strategic goals, consistent with one of these key practices—establish a coherent mission and integrated strategic goals to guide the transformation. These officials stated that the goals include those intended to enhance the nation’s ability to prevent attacks using weapons of mass destruction, including toxic chemical agents; support operational components in closing capability gaps; and invest in and develop innovative technologies to meet technical requirements and improve operations. They noted that the latter might include networked chemical detectors that could be used by various components to help them carry out their mission responsibilities in the future. However, the officials stated that all of the new office’s efforts were in the initial planning stages and none had been finalized. They further stated that the initial setup of the CWMD Office, covering the efforts to consolidate OHA and the Domestic Nuclear Detection Office, may not be completed until the end of fiscal year 2018. It is still too early to determine the extent to which the creation of the CWMD Office will help address the fragmentation and lack of coordination on chemical defense efforts that we have identified. Our prior work on key steps for assisting mergers and transformations shows that transformation can take years to complete. One factor that could complicate this transformation is that the consolidation of chemical defense programs and activities is limited to certain components within DHS, such as OHA, and not others, such as some parts of S&T and NPPD. Officials from the CWMD Office stated that they intend to address this issue by coordinating the office’s chemical security efforts with other DHS components that are not covered by the consolidation, such as those S&T functions that are responsible for developing chemical detector requirements.
These officials also stated that they intend to address fragmentation by coordinating with and supporting DHS components that have chemical defense responsibilities as part of their missions under an all-hazards approach, such as the Federal Protective Service, CBP, TSA, the Coast Guard, and the Secret Service. Furthermore, the officials stated that they plan to coordinate DHS’s chemical defense efforts with other government agencies having chemical programs and activities at the federal and local levels. DHS’s Prior Efforts and Recent Reorganization Offer an Opportunity for More Strategic Coordination In October 2011, the Secretary of Homeland Security designated FEMA to coordinate the development of a strategy and implementation plan to enhance federal, state, local, tribal and territorial government agencies’ ability to respond to and recover from a catastrophic chemical attack. In November 2012, DHS issued a chemical response and recovery strategy that examined core capabilities and identified areas where improvements were needed. The strategy identified a need for, among other things, (1) a common set of catastrophic chemical attack planning assumptions, (2) a formally established DHS oversight body responsible for chemical incident response and recovery, (3) a more rapid way to identify the wide range of chemical agents and contaminants that comprise chemical threats, and (4) reserve capacity for mass casualty medical care. The strategy also identified the principal actions needed to fill these gaps. For example, with regard to identifying the range of chemical agents and contaminants that comprise chemical threats, the strategy focused on the capacity to screen, search for, and detect chemical hazards (and noted that this area was cross-cutting with prevention and protection). The strategy stated that, among other things, the Centers for Disease Control and Prevention, the Department of Agriculture and Food and Drug Administration, the Department of Defense, the Environmental Protection Agency, and DHS components, including the Coast Guard, provide screening, search, and detection capabilities. However, the strategy noted that “DHS does not have the requirement to test, verify, and validate commercial-off-the-shelf (COTS) chemical detection equipment purchased and fielded by its various constituent agencies and components, nor by the first responder community.” According to a November 2012 memorandum transmitting the response and recovery strategy to DHS employees, the distribution of the strategy was only to be used for internal discussion purposes and was not to be distributed outside of DHS because it had not been vetted by other federal agencies and state, local, tribal, and territorial partners. The memorandum and the strategy further stated that DHS was developing a companion strategy focused on improving the national capacity to prevent, protect against, and mitigate catastrophic chemical threats and attacks and noted that once this document was complete, DHS would engage with its partners to solicit comments and feedback. The strategy also stated that DHS intended to develop a separate implementation plan that would define potential solutions for any gaps identified, program any needed budget initiatives, and discuss programs to enhance DHS’s core capabilities and close any gaps. 
DHS officials representing OHA and S&T told us that DHS had intended to move forward with the companion strategy and the accompanying implementation plan, but the strategy and plan were never completed because of changes in leadership and other competing priorities within DHS. At the time of our discussion and prior to the establishment of the CWMD Office, OHA officials also noted that DHS did not have a singular entity or office responsible for chemical preparedness. An official representing S&T also said that the consolidation of some chemical, biological, radiological, and nuclear efforts may help bring order to chemical defense efforts because DHS did not have an entity in charge of these efforts or a strategy for guiding them. Now that DHS has established the CWMD Office as the focal point for chemical, biological, radiological, and nuclear programs and activities, DHS has an opportunity to develop a chemical defense strategy and related implementation plan to better integrate and coordinate the department’s programs and activities to prevent, protect against, mitigate, respond to, and recover from a chemical attack. The Government Performance and Results Act of 1993 (GPRA), as updated by the GPRA Modernization Act of 2010 (GPRAMA), includes principles for agencies to focus on the performance and results of programs by putting elements of a strategy and plan in place, such as (1) establishing measurable goals and related measures, (2) developing strategies and plans for achieving results, and (3) identifying the resources that will be required to achieve the goals. Although GPRAMA applies to the department or agency level, in our prior work we have reported that these provisions can serve as leading practices for strategic planning at lower levels within federal agencies, such as planning for individual divisions, programs, or initiatives. Our past work has also shown that a strategy is a starting point and basic underpinning to better manage federal programs and activities such as DHS’s chemical defense efforts. A strategy can serve as a basis for guiding operations and can help policy makers, including congressional decision makers and agency officials, make decisions about programs and activities. It can also be useful in providing accountability and guiding resource and policy decisions, particularly in relation to issues that are national in scope and cross agency jurisdictions, such as chemical defense. When multiple agencies are working to address aspects of the same problem, there is a risk that duplication, overlap, and fragmentation among programs can waste scarce funds, confuse and frustrate program customers, and limit overall program effectiveness. A strategy and implementation plan for DHS’s chemical defense programs and activities would help mitigate these risks. Specifically, a strategy and implementation plan would help DHS further define its chemical defense capability, including opportunities to leverage resources and capabilities and provide a roadmap for addressing any identified gaps. By defining DHS’s chemical defense capability, a strategy and implementation plan may also better position the CWMD Office and other components to work collaboratively and strategically with other organizations, including other federal agencies and state, local, tribal, and territorial jurisdictions.
Officials from the CWMD Office agreed that the establishment of the new office was intended to provide leadership to and help guide, support, integrate, and coordinate DHS’s chemical defense efforts and that a strategy and implementation plan could help DHS better integrate and coordinate its fragmented chemical defense programs and activities. Conclusions Recent chemical attacks abroad and the threat of ISIS to use chemical weapons against the West have sparked concerns about the potential for chemical attacks occurring in the United States. DHS components have developed and implemented a number of separate chemical defense programs and activities that, according to DHS officials, have been fragmented and not well coordinated within the department. In December 2017, DHS consolidated some of its programs and activities related to weapons of mass destruction, including those related to chemical defense, by establishing the new CWMD Office. It is too early to tell whether and to what extent this office will help address fragmentation and the lack of coordination across all DHS’s weapons of mass destruction efforts, including chemical efforts. However, as part of its consolidation, the CWMD Office would benefit from developing a strategy and implementation plan to guide, support, integrate, and coordinate DHS’s programs and activities to prevent, protect against, mitigate, respond to, and recover from a chemical attack. A strategy and implementation plan would also help the CWMD Office guide DHS’s efforts to address fragmentation and coordination issues and would be consistent with the office’s aim to establish a coherent mission and integrated strategic goals. Recommendation for Executive Action The Assistant Secretary for Countering Weapons of Mass Destruction should develop a strategy and implementation plan to help the Department of Homeland Security, among other things, guide, support, integrate, and coordinate its chemical defense programs and activities; leverage resources and capabilities; and provide a roadmap for addressing any identified gaps. (Recommendation 1) Agency Comments and GAO Evaluation We provided a draft of this report to DHS for review and comment. DHS provided comments, which are reproduced in full in appendix III, and technical comments, which we incorporated as appropriate. DHS concurred with our recommendation and noted that the Assistant Secretary for CWMD will coordinate with the DHS Under Secretary for Strategy, Policy, and Plans and other stakeholders to develop a strategy and implementation plan that will better integrate and direct DHS chemical defense programs and activities. DHS estimated that it will complete this effort by September 2019. These actions, if fully implemented, should address the intent of this recommendation. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Homeland Security, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (404) 679-1875 or CurrieC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Department of Homeland Security Chemical Defense Programs At the time our review began, the Department of Homeland Security (DHS) had three headquarters components with programs and activities focused on chemical defense. These were the Office of Health Affairs’ (OHA) Chemical Defense Program; the Science and Technology Directorate’s (S&T) Chemical and Biological Defense Division and Chemical Security Analysis Center (CSAC); and the National Protection and Programs Directorate’s (NPPD) Chemical Facility Anti-Terrorism Standards (CFATS) program and Sector Outreach and Programs Division. Each component had dedicated funding to manage the particular chemical defense program or activity (with the exception of the Sector Outreach and Programs Division because this division funds DHS activities related to all critical infrastructure sectors, including the chemical sector). On December 7, 2017, DHS established the Countering Weapons of Mass Destruction (CWMD) Office, which incorporated most of OHA and selected elements of S&T, together with other DHS programs and activities related to countering chemical, biological, radiological, and nuclear threats. According to DHS, the CWMD Office was created to, among other things, elevate and streamline DHS’s efforts to prevent terrorists and other national security threat actors from using harmful agents, such as chemical agents, to harm Americans and U.S. interests. Office of Health Affairs, Chemical Defense Program OHA, which was subsumed by the CWMD Office in December 2017, was responsible for enhancing federal, state, and local risk awareness and planning and response mechanisms in the event of a chemical incident through the Chemical Defense Program. This program provided medical and technical expertise to OHA leadership and chemical defense stakeholders including DHS leadership, DHS components, the intelligence community, federal interagency partners, and professional and academic preparedness organizations. The program’s efforts focused on optimizing local preparedness and response to chemical incidents that exceed the local communities’ capacity and capability to act during the first critical hours by providing guidance and tools for first responders and supporting chemical exercises for preparedness. DHS’s Chief Medical Officer was responsible for managing OHA. The Chemical Defense Program expended about $8.3 million between fiscal years 2009 and 2017 in chemical demonstration projects and follow-on funding to assist five jurisdictions in their chemical preparedness: Baltimore, Maryland; Boise, Idaho; Houston, Texas; New Orleans, Louisiana; and Nassau County, New York. For example, in Baltimore, OHA assisted the Maryland Transit Administration with the selection and installation of chemical detection equipment to integrate new technology into community emergency response and planning. In the other four locales, OHA assisted these partners in conducting multiple scenarios specific to each city based on high-risk factors identified by the Chemical Terrorism Risk Assessment (CTRA), which is a risk assessment produced by CSAC every 2 years. Such scenarios included indoor and outdoor scenarios in which persons were “exposed” to either an inhalant or a substance on their skin. Figure 4 summarizes the scenarios conducted in each city and some of the lessons learned. According to OHA summary documentation, a key finding from this work was that timely decisions and actions save lives and manage resources in response to a chemical incident. 
Since the completion of the five-city project, OHA has been working to, among other things, continue to develop a lessons learned document based on the project, as well as a related concept of operations, that state and local jurisdictions could use to respond to chemical incidents. As of December 7, 2017, OHA was consolidated into the CWMD Office and its functions transferred to the new office, according to officials from the CWMD Office. The Chief Medical Officer is no longer responsible for managing OHA but serves as an advisor to the Assistant Secretary for Countering Weapons of Mass Destruction and as the principal advisor to the Secretary and the Administrator of FEMA on medical and public health issues related to natural disasters, acts of terrorism, and other man-made disasters, among other things. Science and Technology Directorate, Chemical Defense Activities S&T’s Homeland Security Advanced Research Projects Agency includes the Chemical and Biological Defense Division, which supports state and local jurisdictions by, for example, providing them help in modeling potential chemical attacks. The Chemical and Biological Defense Division worked with the City of New York to develop chemical detection modeling by simulating a chemical attack. As a result of the simulation, New York City officials wanted to implement mechanisms to prevent the potential consequences of a chemical attack in a large city. S&T’s Office of National Laboratories includes the CSAC, which identifies and characterizes the chemical threat against the nation through analysis and scientific assessment. CSAC is responsible for producing, among other things, the CTRA, a comprehensive evaluation of the risks associated with domestic toxic chemical releases produced every 2 years. CSAC officials chair the Interagency Chemical Risk Assessment Working Group that meets to develop the CTRA, identify chemical hazards, and produce a list of priority chemicals. This working group is comprised of DHS components, federal partners, and private industry officials that share industry information to ensure accurate and timely threat and risk information is included in the CTRA. To complement the CTRA, CSAC developed a standalone CTRA desktop tool that DHS components can use to conduct risk-based modeling of a potential chemical attack and provide results to DHS components, such as the U.S. Secret Service, for advance planning of large-scale events. In addition, CSAC conducts tailored risk assessments addressing emerging threats such as fentanyl, a synthetic opioid that has caused numerous deaths across the United States. CSAC sends these assessments, along with other intelligence and threat information, to relevant DHS components, federal agencies, state and local partners, and private entities so this information can be used in planning and decision making. Officials from eight DHS components we spoke with said they use CSAC information in their work and that CSAC products are useful. CSAC conducted two exercises, known as Jack Rabbit I and II, to experimentally characterize the effects of a large-scale chemical release and to understand the reason for the differences seen between real-world events and modeling predictions. These exercises were intended to strengthen industry standards in chemical transportation, as well as response and recovery plans. Outputs and data from these exercises have been used to write first responder guidelines for these types of events and are being taught in nationwide fire and hazmat courses. 
The fiscal year 2018 President’s Budget request did not ask for an appropriation to fund CSAC. However, the Consolidated Appropriations Act, 2018, did provide funding for CSAC. Furthermore, in May 2018, the Secretary delegated responsibility for conducting the non-research and development functions related to the Chemical Terrorism Risk Assessment to the CWMD Office. National Protection and Programs Directorate, Chemical Facility Anti-Terrorism Standards (CFATS) Program and Other Chemical Facility Security Activities The CFATS program uses a multitiered risk assessment process to determine a facility’s risk profile by requiring facilities in possession of specific quantities of designated chemicals of interest to complete an online questionnaire. CFATS program officials said they also use CSAC data as part of the process for making decisions about which facilities should be covered by CFATS, and their level of risk. If CFATS officials make a determination that a facility is high-risk, the facility must submit a vulnerability assessment and a site security plan or an alternative security program for DHS approval that includes security measures to meet risk-based performance standards. We previously reported on various aspects of the CFATS program and identified challenges that DHS was experiencing in implementing and managing the program. We made a number of recommendations to strengthen the program to include, among other things, that DHS verify that certain data reported by facilities is accurate, enhance its risk assessment approach to incorporate all elements of risk, conduct a peer review of the program to validate and verify DHS’s risk assessment approach, and document processes and procedures for managing compliance with site security plans. DHS agreed with all of these recommendations and has either fully implemented them or taken action to address them. The Sector Outreach and Programs Division works to enhance the security and resilience of chemical facilities that may or may not be considered high-risk under the CFATS program and plays a nonregulatory role as the sector-specific agency for the chemical sector. The Sector Outreach and Programs Division works with the chemical sector through the Chemical Sector Coordinating Council, the Chemical Government Coordinating Council, and others in a public-private partnership to share information on facility security and resilience. In addition, the division and the coordinating councils help enhance the security and resilience of chemical facilities that may or may not be considered high-risk under the CFATS program. The division and councils are to collaborate with federal agencies, chemical facilities, and state, local, tribal, and territorial entities to, among other things, assess risks and share information on chemical threats and chemical facility security and resilience. Further, the Protective Security Coordination Division in the Office of Infrastructure Protection works with facility owners and operators to conduct voluntary assessments at facilities. Appendix II: Department of Homeland Security Components’ Chemical Defense Responsibilities as Part of an All-Hazards Approach Department of Homeland Security (DHS) components conduct various prevention and protection activities related to chemical defense. These activities are managed by individual components as part of their overall mission under an all-hazards approach. U.S.
Coast Guard - The Coast Guard uses fixed and portable chemical detectors to identify and interdict hazardous chemicals as part of its maritime prevention and protection activities. It also responds to hazardous material and chemical releases in U.S. waterways. The Coast Guard also staffs the 24-hour National Response Center, which is the national point of contact for reporting all oil and hazardous materials releases into the water, including chemicals that are discharged into the environment. The National Response Center also takes maritime reports of suspicious activity and security breaches at facilities regulated by the Maritime Transportation Security Act of 2002. Under this act, the Coast Guard regulates security at certain chemical facilities and other facilities possessing hazardous materials. U.S. Customs and Border Protection (CBP) - CBP interdicts hazardous chemicals at U.S. borders and ports of entry as part of its overall mission to protect the United States from threats entering the country. Among other things, CBP has deployed chemical detectors to ports of entry nationwide that were intended for narcotics detection, but can also be used by CBP officers to presumptively identify a limited number of chemicals. Also, CBP’s National Targeting Center helps to screen and identify high-risk packages that may contain hazardous materials at ports of entry. In addition, CBP’s Laboratories and Scientific Services Directorate manages seven nationally accredited field laboratories, where staff detect, analyze, and identify hazardous substances, including those that could be weapons of mass destruction. When CBP officers send suspected chemical weapons, narcotics, and other hazardous materials to the labs, the labs use various confirmatory analysis technologies, such as infrared spectroscopy and mass spectrometry, to positively identify them. Also, the Directorate has a 24-hour Teleforensic Center for on-call scientific support for CBP officers who have questions on suspected chemical agents. Federal Emergency Management Agency (FEMA) - FEMA provides preparedness grants to state and local governments for any type of all-hazards preparedness activity, including chemical preparedness. According to FEMA data, in fiscal year 2016, states used about $3.5 million, local municipalities used about $48.5 million, and tribal and territorial municipalities used about $80,000 in preparedness grant funding for chemical defense including prevention and protection activities, as well as mitigation, response, and recovery efforts related to a chemical attack. Office of Intelligence and Analysis (I&A) - I&A gathers intelligence information on all homeland security threats including chemical threats. Such threat information is compiled and disseminated to relevant DHS components and federal agencies. For example, I&A works with CSAC to provide intelligence information for the CTRA and writes the threat portion of that assessment. I&A also receives information from CSAC on high-risk gaps in intelligence to help better inform chemical defense intelligence reporting. Also, the Under Secretary of I&A serves as the Vice-Chair of the Counterterrorism Advisory Board.
This board is responsible for coordinating, facilitating, and sharing information regarding DHS’s activities related to mitigating current, emerging, perceived, or possible terrorist threats, including chemical threats; and providing timely and accurate advice and recommendations to the Secretary and Deputy Secretary of Homeland Security on counterterrorism issues. NPPD’s Federal Protective Service (FPS) - FPS secures federally-owned and leased space in various facilities across the country. Federal facilities are assigned a facility security level determination ranging from a Level 1 (low risk) to a Level 5 (high risk). As part of its responsibility, FPS is to conduct Facility Security Assessments of the buildings and properties it protects that cover all types of hazards including a chemical release, in accordance with Interagency Security Committee standards and guidelines. FPS is to conduct these assessments at least once every 5 years for Level 1 and 2 facilities, and at least once every 3 years for Level 3, 4, and 5 facilities (see the illustrative sketch at the end of this appendix). FPS conducts the assessments using a Modified Infrastructure Survey Tool. Transportation Security Administration (TSA) - TSA efforts to address the threat of chemical terrorism have been focused on the commercial transportation of bulk quantities of hazardous materials and testing related to the release of commercially transported chemicals that could be used as weapons of mass destruction. TSA’s activities with respect to hazardous materials transportation aim to reduce the vulnerability of shipments of certain hazardous materials through the voluntary implementation of operational practices by motor carriers and railroads, and ensure a secure transfer of custody of hazardous materials to and from rail cars at chemical facilities. Also, in May 2003, TSA began requiring that all commercial motor vehicle operators licensed to transport hazardous materials, including toxic chemicals, successfully complete a comprehensive background check conducted by TSA. According to TSA documents, approximately 1.5 million of the nation’s estimated 6 million commercial drivers have successfully completed the vetting process. Additionally, TSA has recently partnered with five mass transit and passenger rail venues, together with other DHS components such as DHS’s Science and Technology Directorate and the U.S. Secret Service, to test chemical detection technologies for such venues. In addition, TSA is responsible for the Transportation Sector Security Risk Assessment, which examines the potential threat, vulnerabilities, and consequences of a terrorist attack involving the nation’s transportation systems. This assessment’s risk calculations for several hundred specific risk scenarios, including chemical weapons attacks, are based on the elements of threat, vulnerability, and consequence using a combination of subject matter expert judgments and modeling results. U.S. Secret Service - The Secret Service is responsible for protecting its protectees and designated fixed sites and temporary venues from all threats and hazards, including chemical threats. For example, the Secret Service conducts security assessments of sites, which may involve chemical detection, and coordinates with other agencies for preparedness or response to threats and hazard incidents. In addition, the Secret Service has a Hazardous Agent Mitigation Medical Emergency Response team, dedicated to responding to numerous hazards, including chemical threats and incidents.
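To make the FPS assessment schedule above concrete, the following minimal sketch maps a facility security level to the maximum interval between Facility Security Assessments described in this appendix. The function name and error handling are illustrative assumptions, not part of any FPS or Interagency Security Committee system.

```python
# Illustrative sketch only; not an FPS system or tool.

def assessment_interval_years(facility_security_level: int) -> int:
    """Maximum number of years between Facility Security Assessments for a
    given facility security level (1 = low risk through 5 = high risk)."""
    if facility_security_level in (1, 2):
        return 5   # Level 1 and 2 facilities: at least once every 5 years
    if facility_security_level in (3, 4, 5):
        return 3   # Level 3, 4, and 5 facilities: at least once every 3 years
    raise ValueError("facility security levels range from 1 (low) to 5 (high)")

# Example: a Level 4 (higher-risk) facility must be assessed at least every 3 years.
assert assessment_interval_years(4) == 3
```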
Appendix III: Comments from the Department of Homeland Security Appendix IV: GAO Contacts and Staff Acknowledgements GAO Contacts Staff Acknowledgements In addition to the contact named above, John Mortin (Assistant Director), Juan Tapia-Videla (Analyst-in-Charge), Michelle Fejfar, Ashley Grant, Imoni Hampton, Eric Hauswirth, Tom Lombardi, Sasan J. “Jon” Najmi, Claire Peachey, and Kay Vyas made key contributions to this report.
Why GAO Did This Study Recent chemical attacks abroad and the threat that the Islamic State of Iraq and Syria (ISIS) will use chemical weapons against the West have raised concerns about the potential for chemical attacks occurring in the United States. DHS's chemical defense responsibilities include, among others, managing and coordinating federal efforts to prevent and protect against domestic chemical attacks. GAO was asked to examine DHS's chemical defense programs and activities. This report examines (1) DHS programs and activities to prevent and protect against domestic chemical attacks and (2) the extent to which DHS has integrated and coordinated all of its chemical defense programs and activities. GAO interviewed officials from relevant DHS offices and components and reviewed documentation, including DHS strategy and planning documents and federal laws and directives related to chemical defense. What GAO Found The Department of Homeland Security (DHS) manages several programs and activities designed to prevent and protect against domestic attacks using chemical agents (see figure). Some DHS components have programs that focus on chemical defense, such as the Science and Technology Directorate's (S&T) chemical hazard characterization. Others have chemical defense responsibilities as part of their broader missions, such as U.S. Customs and Border Protection (CBP), which interdicts chemical agents at the border. DHS recently consolidated some chemical defense programs and activities into a new Countering Weapons of Mass Destruction (CWMD) Office. However, GAO found, and DHS officials acknowledged, that DHS has not fully integrated and coordinated its chemical defense programs and activities. Several components—including CBP, U.S. Coast Guard, the Office of Health Affairs, and S&T—have separately conducted similar activities, such as acquiring chemical detectors or assisting local jurisdictions with preparedness, without DHS-wide direction and coordination. As components carry out chemical defense activities to meet mission needs, there is a risk that DHS may miss an opportunity to leverage resources and share information that could lead to greater effectiveness in addressing chemical threats. It is too early to tell the extent to which the new CWMD Office will enhance the integration of DHS's chemical defense programs and activities. Given the breadth of DHS's chemical defense responsibilities, a strategy and implementation plan would help the CWMD Office (1) mitigate the risk of fragmentation among DHS programs and activities, and (2) establish goals and identify resources to achieve these goals, consistent with the Government Performance and Results Modernization Act of 2010. This would also be consistent with a 2012 DHS effort, since abandoned, to develop a strategy and implementation plan for all chemical defense activities, from prevention to recovery. DHS officials stated the 2012 effort was not completed because of leadership changes and competing priorities. What GAO Recommends GAO recommends that the Assistant Secretary for the CWMD Office develop a strategy and implementation plan to help DHS guide, support, integrate, and coordinate chemical defense programs and activities. DHS concurred with the recommendation and identified actions to address it.
gao_GAO-18-181
gao_GAO-18-181_0
Background The Selected Reserve comprises over 811,000 full- and part-time members from the military services’ respective National Guard and reserve components, whom DOD can call to active duty to augment military forces in time of war or national emergency. DOD requires these reservists to maintain readiness by participating regularly in training to maintain the military skills needed to perform their mission. About 91 percent of the members of the Selected Reserve, or 735,876 reservists, are part-time, performing military service in addition to their civilian employment and careers. Reservists typically train for about 1 weekend a month and 2 weeks a year. Reservists may also be required to participate in longer duration training to develop and maintain specialized skills related to their military occupation, such as cyber specialists, or to perform other activities such as backfilling positions in other reserve or active units. The following are descriptions of reservists’ required training and other duties: Annual Training: All six reserve components require an annual training period, typically 2 weeks, to acquire and maintain required military skills. Inactive Duty Training: This training is commonly referred to as the “1 weekend a month” commitment, and reservists fulfill this commitment in connection with prescribed training or maintenance activities of the units to which they are assigned. Active Duty for Training: So that reservists acquire and maintain required military skills, individuals serving as reservists participate in training programs such as initial basic training and advanced individual training, and may attend full time specialized schools. The duration of Active Duty for Training varies considerably, from days to several months. Active Duty Other than Training: All six reserve components may require that reservists perform other support activities, such as backfilling a position in a reserve or active unit. For a variety of reasons, reservists may not live in the same location where they train. For example, reservists may relocate for their civilian occupation, and officials told us that as reservists are promoted, command opportunities are more geographically dispersed. As a result, travel may be necessary to facilitate their service. DOD’s six reserve components reported paying or reimbursing over $925 million in travel costs for reservists in fiscal year 2015, representing about 4.3 percent of the total obligations identified in the Reserve Personnel accounts. With an actual part-time endstrength of 742,683 reservists in fiscal year 2015, DOD spent an average cost of about $1,246 per reservist. Officials told us that DOD does not specifically collect and track data on reservists’ unreimbursed travel expenses, which are therefore unknown. Officials told us that reservists process their travel claims through DOD-wide or military-service-based electronic data systems, such as the Defense Travel Service or the Air Force’s Reserve Travel System, or sometimes using hard-copy forms, depending on the type of duty performed, the reserve component, and other factors. DOD’s Joint Travel Regulations govern the extent to which reservists are eligible to be reimbursed for travel expenses to participate in required training or in other duties. The regulations authorize the reimbursement of different types of expenses depending on the nature and duration of the assignment. 
Eligible reimbursements include: Per diem, which includes reimbursement for food and temporary lodging; Transportation expenses, ranging from reimbursement for mileage traveled in reservists’ private vehicles to reimbursement for commercial flights; Permanent Change of Station reimbursements related to reservists changing their home of record to the location of the assignment, such as reimbursement for the movement of household goods; and Basic Allowance for Housing, which is based on the costs of adequate rental properties for civilians with comparable income levels in the same location as the permanent duty station, which in the case of reservists is generally the location of their home; is received when reservists are in an active duty status, which includes Active Duty for Training and Active Duty Other than Training; and is determined based on the duration of reservists’ active duty assignments. Reservists also receive cash compensation for the various types of training and other duties they perform; non-cash compensation, such as access to TRICARE Reserve Select and education benefits; and deferred compensation, such as participation in the military retirement system. In addition, reservists may be able to take advantage of a federal tax deduction for out-of-pocket travel expenses associated with their service. Reservists May Incur Out-of-Pocket Travel Expenses under Certain Conditions Reservists may incur expenses under certain conditions in connection with their service that are not reimbursable under DOD’s travel regulations. Officials responsible for travel regulations and reserve policy issues told us that this can occur because: (1) the cost to attend Inactive Duty Training is a reservist’s responsibility, except in limited circumstances; and (2) DOD designates longer duration training or assignments as a Permanent Change of Station—a change in a reservist’s home of record—and not as temporary travel. Travel Expenses to Attend Inactive Duty Training Are the Responsibility of Reservists Except in Limited Circumstances Under most circumstances, travel expenses to and from the 1 weekend a month training commitment are reservists’ responsibility with no reimbursement provided, and as a result reservists may incur unreimbursed travel expenses to attend this training. Specifically, the Joint Travel Regulations states that a reserve component member performing Inactive Duty Training ordinarily receives no travel or transportation allowances, particularly when the training duty is performed at the reservist’s assigned unit location. This principle is reflected in travel policy such as the Navy Reserve’s requirement that reservists who live more than 100 miles from their Inactive Duty Training site sign a waiver acknowledging that they will not be reimbursed for travel expenses. Navy travel policy, citing a previous version of the Joint Travel Regulations, states that as part of the requirement to perform Inactive Duty Training, “inherent to this obligation is the travel between the member’s home and the location at which the member normally performs drills” with no reimbursement provided. To mitigate expenses incurred by reservists traveling long distances, the National Defense Authorization Act for Fiscal Year 2008 established a reimbursement program for Inactive Duty Training whereby each component may, at the discretion of the service Secretary and under certain circumstances, provide reimbursement of up to $300 in expenses for each roundtrip to the training location.
The Joint Travel Regulations further specifies that reservists must travel 150 miles or more one way from their primary residence to their normal drilling site to be eligible. DOD spent nearly $33.5 million on Inactive Duty Training travel costs in fiscal year 2015. While each service Secretary decides whether an individual component can participate in the program, the Joint Travel Regulations requires such programs to make servicemembers eligible for reimbursement when they meet one of the following criteria: They are qualified in a skill designated as critically short by the Secretary; assigned to a unit of the Selected Reserve with a critical staffing shortage, or in a pay grade in the reservists’ component with a critical staffing shortage; or assigned to a unit or position that is disestablished or relocated as a result of Base Realignment and Closure or other force structure reallocation. See table 1 for scenarios illustrating reimbursement eligibility for Inactive Duty Training expenses in the Army Reserve. Three of the six reserve components have established policies to allow for reimbursement of travel expenses related to Inactive Duty Training, according to component-specific criteria. The Marine Corps Reserve and the Air Force Reserve authorize Inactive Duty Training reimbursement for several occupations, and in the case of the Marine Corps Reserve, entire rank levels. The Army Reserve authorizes reimbursement but, according to its policy, targets reimbursements to soldiers and units with the highest payoff in achieving readiness. Specifically, Army Reserve commanders establish Inactive Duty Training reimbursement policy that designates and prioritizes positions, units, and occupational specialties eligible to participate. Both Air National Guard and Army National Guard officials told us that their respective components do not authorize Inactive Duty Training reimbursement. Similarly, Navy officials told us that the Navy does not participate in the reimbursement program, primarily because under its training construct Navy reservists conduct most Inactive Duty Training at a Navy Operational Support Center close to their homes, thereby limiting the training that may occur at a further distance from their homes to a minority of sessions. Travel distances for reservists to their drilling site may have increased over time. For example, the 2012 Report of the Eleventh Quadrennial Review of Military Compensation noted that reservists traditionally lived near a reserve site or drilling location, but reported that at the time of its review more than 100,000 reservists lived more than 100 miles from their drilling locations. Further, according to a 2008 report by the Commission on the National Guard and Reserves, after Base Realignment and Closure actions some reservists may have fewer locations available to them to perform such training. As a result, reservists may be traveling greater distances to attend such training. Officials also told us that the travel distances required to attend Inactive Duty Training can be further increased as reservists progress in their careers in certain occupational specialties or ranks. For example, officials from the Marine Corps Reserve told us that as reservists are promoted to higher ranks, there are fewer positions, which can result in long-distance travel by reservists, while Army Reserve officials told us that some reservists may turn down command positions to avoid long-distance travel.
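The Inactive Duty Training reimbursement rules described above—component participation at the service Secretary's discretion, the 150-mile one-way minimum, the eligibility criteria, and the $300 cap for each roundtrip—can be summarized as a simple decision rule. The following is a minimal sketch under those assumptions; the function and parameter names are illustrative, the single flag standing in for the eligibility criteria is a simplification, and this is not DOD policy text or a DOD system.

```python
# Minimal sketch, not DOD's actual business rules or systems: it restates the
# Inactive Duty Training (IDT) reimbursement criteria described above.
# Field names and the function itself are illustrative assumptions.

IDT_REIMBURSEMENT_CAP = 300.00   # maximum per roundtrip, in dollars
MIN_ONE_WAY_MILES = 150          # minimum one-way distance to be eligible

def idt_reimbursement(component_participates: bool,
                      one_way_miles: float,
                      meets_program_criteria: bool,
                      claimed_expenses: float) -> float:
    """Return the reimbursable amount for one roundtrip to an IDT site.

    meets_program_criteria stands in for the criteria listed above, e.g.,
    a critically short skill, assignment to a unit or pay grade with a
    critical staffing shortage, or a unit disestablished or relocated
    under Base Realignment and Closure.
    """
    if not component_participates:          # participation is at the service
        return 0.0                          # Secretary's discretion
    if one_way_miles < MIN_ONE_WAY_MILES:   # must travel 150 miles or more
        return 0.0
    if not meets_program_criteria:
        return 0.0
    return min(claimed_expenses, IDT_REIMBURSEMENT_CAP)

# Example: an eligible reservist claiming $450 in roundtrip expenses would be
# reimbursed at most $300 under the cap described above.
print(idt_reimbursement(True, 200, True, 450.00))  # 300.0
```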
DOD’s Joint Travel Regulations Treats Long-Duration Training and Other Assignments of Long Duration as a Permanent Change of Station DOD’s Joint Travel Regulations treats Active Duty for Training and other assignments of long duration as a Permanent Change of Station, or a change in a reservist’s home of record, generally his or her civilian home, and not as Temporary Duty. The treatment of long-duration training or other assignments as a Permanent Change of Station applies equally to reservists and active component members, as DOD travel regulations require all military personnel at a given training or assignment to be in the same status. However, officials told us that due to the interim nature of such assignments reservists are unlikely to move their families, and reservists may incur unreimbursed expenses due to the cost of maintaining two homes. For example, according to a reserve policy official, based on an internal analysis, about two-thirds of Air Reserve members on long-duration training do not move from their civilian homes. Further, the 2012 Report of the Eleventh Quadrennial Review of Military Compensation concluded that reservists would likely return to their civilian homes and employers at the conclusion of their assignments. In addition, officials stated that long-duration training is becoming more common. For example, Army language or medical training can routinely last longer than 140 days and require a Permanent Change of Station. The treatment of long-duration training and other assignments as a Permanent Change of Station and not as Temporary Duty affects the type of expenses that will be reimbursed and the Basic Allowance for Housing rate received by reservists. A Permanent Change of Station is triggered when Active Duty for Training assignments last 140 days or longer and Active Duty Other than Training assignments last 181 days or longer. The changes in eligibility for reimbursement discussed below can affect the amounts of reservists’ unreimbursed expenses: Per diem: Reservists on training or other assignments that are treated as a Permanent Change of Station are not eligible for reimbursement of per diem expenses, including for temporary lodging and meals. Reservists are unlikely to relocate their civilian homes for such long-duration, though interim, training and assignments. They may therefore incur expenses typically associated with a Temporary Duty assignment, such as temporary lodging expenses, but for which they cannot be reimbursed. Basic Allowance for Housing: Reservists on training or other assignments that are treated as a Permanent Change of Station receive an adjusted Basic Allowance for Housing based on the location of their new duty station. This adjusted housing allowance applies regardless of whether a reservist actually moves his or her civilian home and family to the new duty station. Depending on the new duty location, a reservist may receive Basic Allowance for Housing at a higher or lower rate than the allowance amount based on the location of his or her civilian home. If a reservist were in Temporary Duty status—training for 139 days or fewer, or an assignment for 180 days or fewer—he or she would continue to receive Basic Allowance for Housing based on the cost of maintaining his or her civilian home. If reservists decide not to relocate themselves and their families to the location of the long-duration training or assignment, they may face unreimbursed costs for maintaining two homes.
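A minimal sketch of the duration thresholds just described appears below. The function and field names are illustrative assumptions, not DOD's travel systems or the Joint Travel Regulations themselves, and the sketch ignores the waivers discussed next.

```python
# Minimal sketch, assuming simplified inputs: it restates the duration
# thresholds described above that determine whether an assignment is treated
# as Temporary Duty or a Permanent Change of Station (PCS), and which
# location drives the Basic Allowance for Housing (BAH) rate.

def classify_assignment(assignment_type: str, days: int) -> dict:
    """assignment_type is 'training' (Active Duty for Training) or
    'other' (Active Duty Other than Training)."""
    if assignment_type == "training":
        is_pcs = days >= 140     # training of 140 days or longer triggers PCS
    elif assignment_type == "other":
        is_pcs = days >= 181     # other active duty of 181 days or longer
    else:
        raise ValueError("assignment_type must be 'training' or 'other'")

    return {
        "status": "Permanent Change of Station" if is_pcs else "Temporary Duty",
        "per_diem_eligible": not is_pcs,              # no per diem under PCS
        "bah_based_on": "new duty station" if is_pcs else "civilian home",
    }

# Example: a 150-day training course is treated as a PCS, so per diem stops
# and BAH is based on the new duty station rather than the civilian home.
print(classify_assignment("training", 150))
```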
Once a Permanent Change of Station has been triggered, a reservist is no longer in a Temporary Duty status and may no longer receive per diem for temporary lodging. Reservists must either (1) move to government lodging and forgo any Basic Allowance for Housing, or (2) receive Basic Allowance for Housing based on the location of the assignment, which may be higher or lower than the allowance based on the location of their home of record, generally their civilian home. In the first situation, reservists must maintain their civilian home without payment of a Basic Allowance for Housing, and thus may face unreimbursed costs associated with the home’s maintenance. In the second situation, reservists must maintain both their civilian home and a new home with a Basic Allowance for Housing adjusted for the location of the home at the new duty station. Unreimbursed costs may result if the Basic Allowance for Housing adjusted for the location of the new duty station is significantly lower than the housing costs in the area of the reservist’s civilian home. As shown in the 2017 illustrative example in figure 1, reservists receive different levels of payment for the temporary lodging allowance and the Basic Allowance for Housing based on the duration of their Active Duty for Training assignments. A service Secretary may grant a waiver for individuals attending a training course to maintain Temporary Duty status beyond the 140-day time limit, which normally would require a Permanent Change of Station. However, such waivers apply to all course attendees, whether they are members of the active or reserve components. DOD maintains data on the number of these waivers but does not separately track the number of waivers granted for reserve component training. Individual reservists can also apply for a waiver for the rate of their Basic Allowance for Housing payment to be based on the location of their dependents, effectively allowing payment at the geographic rate of a reservist’s civilian home. However, this option is not available to reservists without dependents. DOD Has Not Fully Assessed the Potential Effect of Unreimbursed Out-of-Pocket Travel Expenses on the Retention of Reservists Within the last decade, DOD and the services have conducted a few limited assessments of the potential effect that unreimbursed out-of-pocket travel expenses incurred by reservists to perform required training and other reserve activities may have on the retention of reservists. Although various entities have raised concerns regarding reservists’ out-of-pocket travel expenses, the available information is either anecdotal or applicable to only one reserve component or one aspect of travel policy. DOD reports have noted that such unreimbursed travel expenses, among other factors, may be a challenge for reservists and may therefore affect retention. For example, in 2008, the Commission on the National Guard and Reserves reported that travel requirements and associated costs had a negative effect on DOD’s ability to recruit and retain qualified personnel, particularly for leadership positions.
In addition, in minutes of its meetings, the Air Reserve Forces Policy Committee has called for changes to the Permanent Change of Station requirement for long-duration training, noting in 2015 that it “frequently creates financial hardship for RC Airmen who typically maintain a residence near their assigned unit or civilian employer.” Three DOD studies have explored potential links between reservists’ unreimbursed travel expenses and retention: A 2012 survey commissioned by the Army Reserve of a small sample of reservist officers potentially eligible for battalion command positions reported that unreimbursed travel costs were among several factors that could influence their decision to apply for these positions. A 2014 study commissioned by the Marine Corps found that, based on a statistical model of a sample of Marines eligible to participate in its Inactive Duty Training travel reimbursement program between May 2012 and September 2013, the program had increased the Marine Corps’ ability to fill critical positions. The study also included an assessment of the cost of increasing the level of reimbursement for Inactive Duty Training and its possible effect on staffing. A 2016 survey commissioned by the Army Reserve, drawn from a non-generalizable sample of a few thousand reservists, reported that a significant majority of respondents in 2015 viewed the Inactive Duty Training travel reimbursement program as an incentive for soldier retention. In addition, during our review, officials from most of the reserve components told us that despite the establishment of the reimbursement program for travel costs associated with Inactive Duty Training, such expenses continue to be a challenge for some reservists. In particular, officials noted that this especially affects personnel who do not qualify for reimbursement. One official noted that, in extreme cases, reservists may find that the cost to attend Inactive Duty Training may exceed drill pay, effectively requiring them to pay out-of-pocket to perform military service. While these reports and studies have alerted DOD to a potential problem, DOD has not yet comprehensively assessed the effect of unreimbursed travel expenses on the retention of reservists or the related overall cost to the federal government. For example, DOD has not yet systematically collected data and assessed the potential effect of current travel reimbursement policy on retention across all services, as measured by outcomes such as fill rates for critical positions and other metrics, or collected more basic information such as the number of reservists who do not move their home during long-duration training, the distances traveled for Inactive Duty Training, and the amount of unreimbursed expenses incurred by reservists. The 2014 Marine Corps study on Inactive Duty Training reimbursement did explore the program’s potential effect on fill rates for critical positions. However, its findings are not necessarily applicable to the other reserve components. In addition, DOD has not conducted an assessment of Permanent Change of Station rules for long-duration training or other assignments. While travel policy officials noted that there is no requirement for such an assessment, some agreed that more robust information would allow for a better understanding of the situation as well as any potential changes that are necessary in DOD’s travel policy.
One travel official stated that until a direct connection between unreimbursed travel expenses and retention or related areas is observed within their component, change is unnecessary. As of July 2017, DOD and the reserve components were considering changes to reserve travel policy to mitigate the effect of out-of-pocket expenses on reservists. Specifically, these changes include (1) requesting that Congress increase reimbursement for Inactive Duty Training travel expenses from $300 to $500 and (2) increasing the length of time of Temporary Duty travel for training courses or other assignments before such travel is considered a Permanent Change of Station. The Marine Corps Reserve has developed a draft proposal for congressional consideration for an increase in Inactive Duty Training reimbursement, which an official stated was necessary to address the challenge of filling critical occupations. The Military Advisory Panel, which advises on defense travel issues, has considered an increase in the length of time of Temporary Duty travel for training courses or other assignments before a Permanent Change of Station would be required, but no specific proposals have been developed. Federal internal control standards state that management requires quality information to make informed decisions and evaluate an entity’s performance in achieving key objectives and addressing risk. They further require that management identify, analyze, and respond to risks related to achieving the defined objectives. However, without collecting more comprehensive information on the potential effect of the current travel policy on the retention of reservists, DOD would be considering alternative proposals with only the limited data and analysis available to date. Further, the lack of comprehensive data and analysis on the influence of current travel policies will limit DOD’s ability to reach an analytically based decision that weighs the costs and benefits of any potential changes. In deciding to continue or change current travel policies relating to travel reimbursement without the benefit of quality information, DOD risks not managing the potential influence of these policies on reservists’ retention or agency expenditures. Conclusions Reservists often maintain civilian careers and homes that in some cases can require them to travel long distances to perform their part-time military service. In some instances, such as when performing Inactive Duty Training and long-duration Active Duty for Training or other active duty assignments, such service can result in expenses that cannot be reimbursed to the reservist under DOD’s travel policy. Despite long-standing concerns that out-of-pocket travel expenses reservists incur to perform their service may be increasing, DOD does not have sufficient data and analysis on how reservists’ incurring these expenses could negatively affect DOD’s ability to achieve its mission, the overall costs and benefits of DOD’s travel policy, and how various proposed changes to the travel policy could potentially mitigate any of its possible negative effects. As a result, DOD is not well positioned to move forward with possible changes to travel policy absent further analysis.
Recommendation for Executive Action We recommend that the Under Secretary of Defense for Personnel and Readiness collect quality information and conduct an analysis of the potential effects of unreimbursed travel expenses incurred by reservists to perform military service on DOD’s ability to retain reservists in the force, and respond to these risks by considering the costs and benefits of any possible actions to address the identified issues. Agency Comments and Our Evaluation We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix II, DOD concurred with our recommendation. We are sending copies of this report to appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Secretaries of the military departments, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (213) 830-1011 or vonaha@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Reserve Components’ Reported Travel Costs for Fiscal Year 2015 To determine DOD’s travel costs for the reserve components, we obtained and reviewed fiscal year 2015 execution cost data, the most recent complete data available, that each reserve component reported on travel costs for training and other activities. These costs are reported in a travel cost exhibit included in each component’s annual Reserve Personnel budget justification document. We did not include any travel costs not included in the reserve components’ Reserve Personnel accounts, such as any travel costs in the components’ respective Operations and Maintenance accounts. We did not include travel costs for the Active Guard and Reserve because individuals serving these components are responsible for the full-time administration of the reserve components and differ significantly from part-time drilling reservists in their responsibilities and associated travel. We also did not include costs for the Individual Ready Reserve because these reservists have different training patterns than other reservists. In table 2, we summarize the costs reported by DOD’s six reserve components for Annual Training, Inactive Duty Training, and all other travel costs for fiscal year 2015 by component. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Margaret Best (Assistant Director), Patricia Donahue, Mae Jones, Linda Keefer, Felicia Lopez, Carol Petersen, and Adam Smith made major contributions to this report.
Why GAO Did This Study About 91 percent of DOD's 811,000 reservists are part-time, performing military service in addition to civilian employment and careers. These reservists may have to travel to perform required military training or other duties. The National Defense Authorization Act for Fiscal Year 2017 contains a provision for GAO to review the cost of travel for members of the reserve components. This report (1) describes the conditions under which reservists may incur unreimbursed out-of-pocket travel expenses in connection with their service, and (2) addresses the extent to which DOD has assessed the effect of reservists' unreimbursed out-of-pocket travel expenses on retention. GAO reviewed DOD's Joint Travel Regulations and interviewed officials to determine conditions under which reservists might incur unreimbursed travel expenses. It also compared DOD's efforts to analyze the effect of such expenses with federal internal control standards, which state that management requires quality information to make informed decisions and evaluate an entity's performance in achieving key objectives. What GAO Found Reservists may incur unreimbursed out-of-pocket expenses under certain conditions in connection with their service. Although the Department of Defense's (DOD) six reserve components reported paying or reimbursing $925 million in travel costs for reservists in fiscal year 2015, the most recent year for which data were available, reservists may still incur various expenses that are not reimbursable under DOD's travel regulations. Officials responsible for travel regulations told us that unreimbursed travel expenses for reservists generally arise because it is DOD's policy to: (1) not provide reimbursement, except in limited circumstances, for the cost of travel to attend Inactive Duty Training (i.e., the “1 weekend a month” training commitment for reservists) and (2) consider longer duration training or assignments as a Permanent Change of Station—a change in reservists' home of record—and not as temporary travel. The National Defense Authorization Act for 2008 established a reimbursement program for Inactive Duty Training travel costs, but reservists must meet certain eligibility criteria, such as serving in a critical occupation, and not all service Secretaries have chosen to participate. Under the program, reimbursement is limited to $300 for each roundtrip to the training location. Further, DOD's policy to consider longer duration training or assignments as a Permanent Change of Station may also result in unreimbursed expenses. Specifically, according to DOD officials, reservists may have to maintain two households if, because of their part-time status, they decide not to move themselves and their families to the location of Active Duty Training for 140 days or longer, or of other active duty assignments for 181 days or longer. DOD and the services have conducted a few limited assessments of the potential effect of reservists' unreimbursed travel expenses on the retention of reservists. However, several DOD reports and studies and officials whom GAO interviewed have expressed concern that such unreimbursed expenses may, among other factors, be a challenge for reservists and may therefore negatively affect retention. 
For example, a 2012 survey commissioned by the Army Reserve of a small sample of reservist officers potentially eligible for battalion command positions reported that unreimbursed travel costs were among several factors that could influence their decision to apply for these positions. DOD and the reserve components are considering changes to reserve travel policy to mitigate the effect of unreimbursed expenses on reservists, by, for example, increasing the $300 limit for Inactive Duty Training reimbursement. However, without the benefit of quality information, DOD risks not managing the potential influence of these policies on reservists' retention or agency expenditures. What GAO Recommends GAO is recommending that DOD collect quality information and conduct an analysis of the potential effects of reservists' unreimbursed travel expenses on retention, and respond to these risks by considering the costs and benefits of any possible actions to address the identified issues. DOD concurred with this recommendation.
gao_GAO-19-179
gao_GAO-19-179_0
Background Over the last 3 decades employers have shifted away from sponsoring defined benefit (DB) plans and toward DC plans. This shift also transfers certain types of risk—such as investment risk—from employers to employee participants. DB plans generally offer a fixed level of monthly annuitized retirement income based upon a formula specified in the plan, which usually takes into account factors such as a participant’s salary, years of service, and age at retirement, regardless of how the plan’s investments perform. In contrast, benefit levels in DC plans—such as 401(k) plans—depend on the contributions made to the plan and the performance of the investments in individual accounts, which may fluctuate in value. As we have previously reported, some experts have suggested that the portability of DC plans make them better-suited for a mobile workforce, and that such portability may lead to early withdrawals of retirement savings. DOL reported there were 656,241 DC and 46,300 DB plans in the United States in 2016. Tax incentives are in place to encourage employers to sponsor retirement plans and employees to participate in plans. Under the Employee Retirement Income Security Act of 1974 (ERISA), employers may sponsor DC retirement plans, including 401(k) plans—the predominant type of DC plan, in which benefits are based on contributions to and the performance of the investments in participants’ individual accounts. To save in 401(k) plans, participants contribute a portion of their income into an investment account, and in traditional 401(k) plans taxes are deferred on these contributions and associated earnings, which can be withdrawn without penalty after age 59½ (if permitted by plan terms). As plan sponsors, employers may decide the amount of employer contributions (if any) and how long participants must work before having a non-forfeitable (i.e., vested) interest in their plan benefit, within limits established by federal law. Plan sponsors often contract with service providers to administer their plans and provide services such as record keeping (e.g., tracking and reporting individual account contributions); investment management (i.e., selecting and managing the securities included in a mutual fund); and custodial or trustee services for plan assets (e.g., holding the plan assets in a bank). Individuals also receive tax incentives to save for retirement outside of an employer-sponsored plan. For example, traditional IRAs provide certain individuals with a way to save pre-tax money for retirement, with withdrawals made in retirement taxed as income. In addition, Roth IRAs allow certain individuals to save after-tax money for retirement with withdrawals in retirement generally tax-free. IRAs were established under ERISA, in part, to (1) provide a way for individuals not covered by a pension plan to save for retirement; and (2) give retiring workers or individuals changing jobs a way to preserve assets from 401(k) plans by transferring their plan balances into IRAs. The Investment Company Institute (ICI) reported that 34.8 percent of households in the United States owned an IRA in 2017, a percentage that has generally remained stable since 2000. In 2017, IRA assets accounted for almost 33 percent (estimated at $9.2 trillion) of total U.S. retirement assets, followed by DC plans, which accounted for 27 percent ($7.7 trillion). Further, according to ICI, over 94 percent of funds flowing into traditional IRAs from 2000 to 2015 came from rollovers—primarily from 401(k) plans. 
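To illustrate the distinction between defined benefit and defined contribution plans described at the start of this background section, the sketch below contrasts a benefit set by a plan formula with an account balance that depends on contributions and investment performance. The formula shape, the 1.5 percent multiplier, the contribution amount, and the 5 percent return are illustrative assumptions only; actual plan terms and market returns vary.

```python
# Minimal sketch contrasting the two plan types described above. The defined
# benefit formula shape (a multiplier times years of service times final
# average salary) and all numeric inputs are illustrative assumptions.

def defined_benefit_annual_benefit(final_average_salary: float,
                                   years_of_service: int,
                                   multiplier: float = 0.015) -> float:
    """Annual benefit set by a plan formula, regardless of market returns."""
    return multiplier * years_of_service * final_average_salary

def defined_contribution_balance(annual_contribution: float,
                                 years: int,
                                 annual_return: float) -> float:
    """Account balance that depends on contributions and on investment
    performance, the risk of which the participant bears."""
    balance = 0.0
    for _ in range(years):
        balance = (balance + annual_contribution) * (1 + annual_return)
    return balance

# Example: 30 years of service with an $80,000 final average salary yields a
# $36,000 annual benefit under the assumed formula, while the DC balance
# depends on the assumed 5 percent annual return actually being earned.
print(defined_benefit_annual_benefit(80_000, 30))            # 36000.0
print(round(defined_contribution_balance(6_000, 30, 0.05)))  # about 418565
```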
Oversight of IRAs and 401(k) Plans IRS, within the Department of the Treasury, is responsible for enforcing IRA tax laws, while IRS and DOL share responsibility for overseeing prohibited transactions relating to IRAs. IRS also works with DOL’s Employee Benefits Security Administration (EBSA) to enforce laws governing 401(k) plans. IRS is primarily responsible for interpreting and enforcing provisions of the Internal Revenue Code (IRC) that apply to tax-preferred retirement savings. EBSA enforces ERISA’s reporting and disclosure and fiduciary responsibility provisions, which, among other things, include requirements related to the type and extent of information that a plan sponsor must provide to plan participants. Employers sponsoring employee benefit plans subject to ERISA, such as 401(k) plans, generally must file detailed information about their plan each year. The Form 5500 serves as the primary source of information collected by the federal government regarding the operation, funding, expenses, and investments of employee benefit plans. The Form 5500 includes information about the financial condition and operation of these plans, among other things. EBSA uses the Form 5500 to monitor and enforce the responsibilities of plan administrators, other fiduciaries, and service providers under Title I of ERISA. IRS uses the form to enforce standards that relate to, among other things, how employees become eligible to participate in benefit plans, and how they become eligible to earn rights to benefits. Permitted Early Withdrawals of Retirement Savings In certain instances, sponsors of 401(k) plans may allow participants to access their tax-preferred retirement savings prior to retirement. Plan sponsors have flexibility under federal law and regulations to choose whether to allow plan participants access to their retirement savings prior to retirement and what forms of access to allow. Typically, plans allow participants to access their savings in one or more of the following forms: Loans: Plans may allow participants to take loans and limit the number of loans allowed. If the plan provides for loans, the maximum amount that the plan can permit as a loan generally cannot exceed the lesser of (1) the greater of 50 percent of the vested account balance, or $10,000, or (2) $50,000 less the excess of the highest outstanding balance of loans during the 1-year period ending on the day before the day on which a new loan is made over the outstanding balance of loans on the day the new loan is made. Plan loans are generally not treated as early withdrawals unless they are not repaid within the terms specified under the plan. Hardship withdrawals: Plans may allow participants facing a hardship to take a withdrawal on account of an immediate and heavy financial need, and if the withdrawal is necessary to satisfy the financial need. Though plan sponsors can decide whether to offer hardship withdrawals and approve applications for hardship withdrawals, IRS regulations provide “safe harbor” criteria regarding circumstances when a withdrawal is deemed to be on account of an immediate heavy financial need.
IRS regulations allow certain expenses to qualify under the safe harbor including: (1) certain medical expenses; (2) costs directly relating to the purchase of a principal residence; (3) tuition and related educational fees and expenses for the participant and their spouse, children, dependents, or beneficiary; (4) payments necessary to prevent eviction from, or foreclosure on, a principal residence; (5) certain burial or funeral expenses; and (6) certain expenses for the repair of damage to the employee’s principal residence. Plans that provide for hardship withdrawals generally specify what information participants must provide to the plan sponsor to demonstrate a hardship meets the definition of an immediate and heavy financial need. Early withdrawals of retirement savings may have short-term and long-term impacts on participants’ ability to accumulate retirement savings. In the short term, IRA owners and participants in 401(k) plans who received a withdrawal before reaching age 59½ generally pay an additional 10 percent tax for early distributions in addition to income taxes on the taxable portion of the distribution amount. The IRC exempts certain distributions from the additional tax, but the exceptions vary among 401(k) plans and IRAs. Early withdrawals of any type can result in the permanent removal of assets from retirement accounts, thereby reducing the amounts participants can accumulate before retirement, including the loss of compounded interest or other earnings on the amounts over the participant’s career. Disposition of Account Balances at Job Separation According to DOL’s Bureau of Labor Statistics (BLS), U.S. workers are likely to have multiple jobs in their careers as average employee tenure has decreased. In 2017, BLS reported that from 1978 to 2014, workers held an average of 12 jobs between the ages of 18 and 50. BLS also reported in 2016 that the median job tenure for a worker was just over 4 years. Employees who separate from a job bear responsibility for deciding what to do with their accumulated assets in their former employer’s plan. Recent research estimated that 10 million people with a retirement plan change jobs each year, many of whom faced a decision on how to treat their account balance at job separation. Plan administrators must provide a tax notice detailing participants’ options for handling the balance of their accounts. When plan participants separate from their employers, they generally have one of three options: 1. They may leave the balance in the plan, 2. They may ask their employer to roll the money directly into a new qualified employer plan or IRA (known as a direct rollover), or 3. They may request a distribution. Once the participant receives the distribution, he or she can (1) within 60 days, roll the distribution into a new qualified employer plan or IRA (in which case the money would remain tax-preferred); or (2) keep the distributed amount, and pay any income taxes or additional taxes associated with the distribution (known as a cashout). Sponsors of 401(k) plans may cash out or transfer separating participant accounts if an account balance falls below a certain threshold.
The Economic Growth and Tax Relief Reconciliation Act of 2001 (EGTRRA) amended the IRC to provide certain protections for separating participants with account balances between $1,000 and $5,000 by requiring, in the absence of participant direction, plan sponsors to either keep the account in the plan or to transfer the account balance to an IRA to preserve its tax-preferred status. Plan sponsors may not distribute accounts with balances of more than $5,000 without participant direction, but have discretion to distribute account balances of $1,000 or less. Additional Tax Consequences for Early Withdrawals The IRC imposes an additional 10 percent tax (in addition to ordinary income tax) on certain early withdrawals from qualified retirement plans, which include IRAs and 401(k) plans, in an effort to discourage the use of plan funds for purposes other than retirement and to help ensure that the favorable tax treatment of plan funds is used to provide retirement income. Employers are required to withhold 20 percent of the amount cashed out to cover anticipated income taxes unless the participant pursues a direct rollover into another qualified plan or IRA. Employee Financial Literacy and Financial Wellness Research has found that many employees are concerned about their level of savings and ability to manage their retirement accounts, and some employers provide educational services to improve employees' financial wellness and financial literacy and encourage them to save for retirement. A 2017 survey on employee financial wellness in the workplace found that more than one-half of workers experienced financial stress and that insufficient emergency savings was a top concern for employees. Research has also found that limited financial literacy is widespread among Americans over age 50, and those who lack financial knowledge are less likely to successfully plan for retirement. In 2018, the Federal Reserve reported that three-fifths of non-retirees with participant-directed retirement accounts had little to no comfort managing their own investments. As we have previously reported, some employers have developed comprehensive programs aimed at overall improvement in employees' financial health. These programs, often called financial wellness programs, may help employees with budgeting, emergency savings, and credit management, in addition to the traditional information and assistance provided for retirement and health benefits. At Least $69 Billion in 2013 Left Retirement Accounts Early, Mostly from Individual Retirement Accounts In 2013, individuals ages 25 to 55 withdrew at least $68.7 billion early from their retirement accounts. Of this amount, IRA owners in this age group withdrew the largest share (about 57 percent) and 401(k) plan participants in this age group withdrew the rest (about 43 percent). However, the total amount withdrawn from 401(k) plans cannot be determined due to data limitations. Nearly $40 Billion Withdrawn Early from IRAs in 2013 IRA withdrawals were the largest source of early withdrawals of retirement savings, accounting for an estimated $39.5 billion of the total $68.7 billion in early withdrawals made by individuals ages 25 to 55 in 2013. According to IRS estimates, 12 percent of IRA owners in this age group withdrew money early from their IRAs in 2013. The amount they withdrew early comprised a small percentage of their total IRA assets.
Specifically, in 2013, the amount of early withdrawals was equivalent to 3 percent of the cohort's total IRA assets and, according to IRS estimates, the total amount withdrawn by this cohort exceeded their total contributions to IRAs in that year. At Least $29 Billion Withdrawn Early from 401(k) Plans in 2013 At least $29.2 billion left 401(k) plans in 2013 in the form of hardship withdrawals, cashouts at job separation, and unrepaid plan loans, according to our analysis of 2013 SIPP data and data from DOL's Form 5500. Specifically, we found that: Hardship withdrawals were the largest source of early withdrawals from 401(k) plans, with an estimated 4 percent (+/- 0.25) of plan participants ages 25 to 55 withdrawing an aggregate $18.5 billion in 2013. The amount of hardship withdrawals was equivalent to 0.5 percent (+/- 0.06) of the cohort's total plan assets and 8 percent (+/- 0.9) of the cohort's plan contributions made in 2013. Cashouts of account balances of $1,000 or more at job separation were the second largest source of early withdrawals from 401(k) plans. In 2013, an estimated 1.1 percent (+/- 0.11) of plan participants ages 25 to 55 withdrew an aggregate $9.8 billion from their plans that they did not roll into another qualified plan or IRA. Additionally, 86 percent (+/- 2.9) of these participants taking a cashout of $1,000 or more did not roll over the amount in 2013. The amounts cashed out and not rolled over were equivalent to 0.3 percent (+/- 0.05) of the cohort's total plan assets and 4 percent (+/- 0.75) of the cohort's total contributions made in 2013. Loan defaults accounted for at least $800 million withdrawn from 401(k) plans in 2013; however, the amount of distributions of unpaid plan loans is likely larger because DOL data cannot be used to quantify plan loan offsets that are deducted from participants' account balances after they leave a plan. As a result, the amount of loan offsets among terminating participants ages 25 to 55 cannot be determined with certainty. Specifically, DOL's Form 5500 instructions require plan sponsors to report unpaid loan balances in two separate places on the Form 5500, depending on whether the loan holder is an active or a terminated participant. For active participants, plan sponsors report loan defaults as a single line item on the Form 5500 (i.e., the $800 million in 2013 listed above). For terminated participants, plan sponsors report unrepaid plan loan balances as benefits paid directly to participants, a category that also includes rollovers to employer plans and IRAs. According to a DOL official, as a result of this commingling of benefits on this line item, isolating the amount of loan offsets for terminated participants using the Form 5500 data is not possible. Without better data on the amount of unrepaid plan loans, the amount of loan offsets and the characteristics of plan participants who did not repay their plan loans at job separation cannot be determined. Additional Tax Consequences of Early Withdrawals Also Contributed to Reductions in Overall Savings IRA owners and plan participants taking early withdrawals paid $6.2 billion as a result of the additional 10 percent tax for early distributions in 2013, according to IRS estimates. Although the taxes are generally treated separately from the amounts withdrawn, IRA owners and plan participants are expected to pay any applicable taxes resulting from the additional 10 percent tax when filing their income taxes for the tax year in which the withdrawal occurred.
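To make the combined near-term tax cost and longer-term savings impact of an early withdrawal concrete, the following is a minimal sketch, not drawn from our analysis. The $10,000 withdrawal, 22 percent marginal income tax rate, 5 percent annual return, and 25-year horizon are illustrative assumptions only; actual results depend on an individual's tax situation and on any applicable exceptions to the additional 10 percent tax.

```python
# Illustrative arithmetic for a hypothetical early withdrawal (assumed values only).

withdrawal = 10_000          # taxable amount withdrawn before age 59 1/2
marginal_income_tax = 0.22   # assumed ordinary income tax rate
additional_tax_rate = 0.10   # additional tax for early distributions

income_tax_owed = withdrawal * marginal_income_tax         # $2,200
early_distribution_tax = withdrawal * additional_tax_rate  # $1,000
net_kept = withdrawal - income_tax_owed - early_distribution_tax
print(f"Amount kept after taxes: ${net_kept:,.0f}")        # Amount kept after taxes: $6,800

# Long-term effect: the value the withdrawn amount could have reached if left invested.
annual_return = 0.05
years_to_retirement = 25
forgone_balance = withdrawal * (1 + annual_return) ** years_to_retirement
print(f"Forgone balance at retirement: ${forgone_balance:,.0f}")  # about $33,864
```

Even in this simplified case, roughly a third of the amount withdrawn goes to taxes in the near term, and the balance forgone at retirement is more than three times the amount withdrawn.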
Certain Characteristics Were Associated With Higher Incidence of Early Withdrawals Individuals with certain demographic and economic characteristics that we analyzed had higher incidence of early withdrawals of retirement savings, according to our analysis of SIPP data. The characteristics described below reflect statistically significant differences between comparison groups (a full listing of all demographic groups can be found in appendix III). Age. The incidence of IRA withdrawals was higher among individuals ages 45 to 54 (8 percent) than individuals ages 25 to 34 and 35 to 44. Education. Individuals with a high school education or less had higher incidence of cashouts (97 percent) and hardship withdrawals (7 percent) than individuals with some college or some graduate school education. Family size. Individuals in families of seven or more (8 percent) or in families of five to six (7 percent) had higher incidence of hardship withdrawals than individuals in smaller family groups we analyzed. Individuals living alone had higher incidence of IRA withdrawals than individuals living in the larger family groups. Marital status. Widowed, divorced, or separated individuals had higher incidence of IRA withdrawals (11 percent) and hardship withdrawals (7 percent) than married or never married individuals. Race. The incidence of hardship withdrawals among African American (10 percent) and Hispanic individuals (6 percent) was higher than among individuals who were White, Asian, or Other. Residence. The incidence of IRA withdrawals and hardship withdrawals was higher among individuals living in nonmetropolitan areas (7 percent and 6 percent, respectively) than among individuals living in metropolitan areas. Similarly, individuals with certain economic characteristics that we analyzed had higher incidence of early withdrawals of retirement savings, according to our analysis of SIPP data. The characteristics described below reflect statistically significant differences between comparison groups (a full listing of all demographic groups can be found in appendix III). Employer size. Individuals working for employers with fewer than 25 employees had higher incidence of IRA withdrawals (9 percent) than individuals working for employers with a larger number of employees. Employment. Individuals working fewer than 35 hours per week had higher incidence of IRA withdrawals (7 percent) than individuals working 35 hours or more per week. Household debt. Individuals with household debt of $5,000 up to $20,000 had higher incidence of IRA withdrawals (14 percent) than individuals with other debt amounts. Household income. Individuals with household income of less than $25,000 or $25,000 up to $50,000 had higher incidence of IRA withdrawals (12 percent and 9 percent, respectively) and hardship withdrawals (9 percent and 7 percent, respectively) than individuals with higher income amounts. Personal cash reserves. Individuals with personal cash reserves of less than $1,000 had higher incidence of IRA withdrawals (10 percent) and hardship withdrawals (6 percent) than individuals with larger reserves. Retirement assets. Individuals with combined IRA and 401(k) plan assets valued at less than $5,000 had higher incidence of hardship withdrawals (7 percent) than individuals with higher-valued assets. Tenure in retirement plan. Individuals with fewer than 3 years in their retirement plan had higher incidence of hardship withdrawals (6 percent) than individuals with longer tenures.
Plan Rule Flexibilities and Use of Retirement Assets for Pressing Financial Needs Said to Result in Early Withdrawals Stakeholders Said Plan Rules Governing Early Withdrawals May Lead to Reduced Savings for Some Participants Stakeholders we interviewed said that plan rules related to the disposition of account balances at job separation can lead participants to remove more than they need, up to and including their entire balance. We previously reported that U.S. workers are likely to change jobs multiple times in a career. Plan sponsors may cash out balances of $1,000 or less at job separation, although they are not required to do so. As a result, plan participants with such balances, including younger employees and others with short job tenures, risk having their account balances distributed in full each time they change jobs. As shown in table 1, a separating employee must take multiple steps to ensure that an account balance remains tax-preferred. Participants who take a distribution from a plan with the intent of rolling it into another qualified plan or IRA must acquire additional funds to complete the rollover and avoid adverse tax consequences. Plan sponsors are required to withhold 20 percent of the account balance to pay anticipated taxes on the distribution. As a result, the sponsor then sends 80 percent of the account balance to the participant, who must acquire outside funds to compensate for the 20 percent withheld or forgo the preferential tax treatment of that portion of their account balance. For example, a participant seeking to roll over a retirement account with a $10,000 balance would receive an $8,000 distribution after tax withholding, requiring them to locate an additional $2,000 to complete the rollover within the 60-day period to avoid a taxable distribution of the withheld amount (see the sketch following this discussion). If participants can replace the 20 percent withheld and complete the rollover within the 60-day period, they do not owe taxes on the distribution. Stakeholders said that the complexity of rolling a 401(k) account balance from one employer to another may encourage participants to take the relatively simpler route of rolling their balance into an IRA or cashing out altogether. They noted that separating participants had many questions when evaluating their options and had difficulty understanding the notice provided. For example, participants may not fully understand how the decisions made at job separation can have a significant impact on their current tax situation and eventual retirement security. One plan sponsor, describing concerns about giving investment advice, said she watched participants make what she judged to be poor choices with their account balances and felt helpless to intervene. Stakeholders also noted that the lack of a standardized rollover process sometimes bred mistrust among employers and complicated separating participants' ability to complete a rollover between plans. For example, one stakeholder told us that some plans were hesitant to accept funds from other employer plans, fearing that the funds might come from plans that have failed to comply with plan qualification requirements and could create problems for the receiving plan later on. Another stakeholder suggested that the requirement for plan sponsors to provide a notice to separating participants likely caused more participants to take the distribution.
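The account-balance thresholds and withholding arithmetic described above can be summarized in a short sketch. The function below is a hypothetical illustration, not any plan's actual procedure; it ignores plan-specific provisions, Roth amounts, and loan offsets, and uses the dollar thresholds and 20 percent withholding rate discussed in this report.

```python
# Simplified sketch of what generally happens to a 401(k) balance at job separation.

def separation_outcome(balance, election=None):
    """Return a description of the general outcome for a given balance and election."""
    if election == "rollover":
        return "direct rollover to a new qualified plan or IRA; no withholding"
    if election == "cash":
        withheld = 0.20 * balance                 # 20 percent withheld for anticipated taxes
        paid_to_participant = balance - withheld
        return (f"${paid_to_participant:,.0f} paid after ${withheld:,.0f} withheld; the "
                f"participant must add back the withheld amount within 60 days to "
                f"complete a full rollover")
    # No direction from the participant:
    if balance <= 1_000:
        return "plan may cash out the balance"
    if balance <= 5_000:
        return "plan must keep the balance or transfer it to an IRA"
    return "balance generally stays in the plan absent participant direction"

print(separation_outcome(10_000, election="cash"))
print(separation_outcome(800))      # plan may cash out the balance
print(separation_outcome(3_000))    # plan must keep the balance or transfer it to an IRA
```

Run on the $10,000 example above, the sketch reproduces the $8,000 payment and the $2,000 gap the participant must cover from outside funds to complete a full rollover within 60 days.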
Stakeholders described loans as a useful source of funds in times of need and a way to avoid more expensive options, such as high-interest credit cards. They also noted that certain plan loan policies could lead to early withdrawals of retirement savings. (See fig. 1.) Loan repayment at job separation: Stakeholders said loan repayment policies can increase the incidence of defaults on outstanding loans. When participants do not repay their loan after separating from a job, the outstanding balance is treated as a distribution, which may subject it to income tax liability and, possibly, an additional 10 percent tax for early distributions. According to stakeholders, the process of changing jobs can inadvertently lead to a distribution of a participant's outstanding loan balance, when the participant could have otherwise repaid the loan. Extended loan repayment periods: Some plan sponsors allow participants to take loans to purchase a home. Stakeholders told us that the amounts of these home loans tended to be larger than general purpose loans and had longer repayment periods, which extended from 15 to 30 years. A stakeholder further noted that these loans could make it more likely that participants would have larger balances to repay if they lost or changed jobs. Multiple loans: While some plan sponsors noted that their plans limited the number of loans participants could take from their retirement plan, others did not. Some plan sponsors limited participants to between one and three simultaneous loans, and one plan administrator indicated that 92 percent of their plan-sponsor clients allowed no more than two simultaneous loans. Other plan sponsors placed no limit on the number of participant loans or limited loans to one or two per calendar year, in which case a participant could take out a new loan at the start of a calendar year regardless of whether or not outstanding loans had been repaid. Stakeholders described some participants as "serial" borrowers, who take out multiple loans and have less disposable income as a result of ongoing loan payments. One plan administrator stated that repeat borrowing from 401(k) plans was common, and some participants took out new loans to pay off old loans. Other loan restrictions: Allowing no loans or one total outstanding loan can cause participants facing economic shocks to take a hardship withdrawal, resulting in the permanent removal of their savings and subjecting them to income tax liability and, possibly, an additional 10 percent tax for early distributions and a suspension of contributions. Minimum loan amounts: Minimum loan amounts may result in participants borrowing more than they need to cover planned expenses. For example, a participant may have a $500 expense for which they seek a loan, but may have to borrow $1,000 due to plan loan minimums (see the sketch below). Stakeholders Said Participants Take Early Withdrawals for Pressing Financial Needs Stakeholders said that plan participants take plan loans and hardship withdrawals for pressing financial needs. Many plan sponsors we interviewed said they used the IRS safe harbor exclusively as the criteria when reviewing a participant's application for a hardship withdrawal. Stakeholders said the top two reasons participants took hardship withdrawals were to prevent imminent eviction or foreclosure and to cover out-of-pocket medical costs not covered by health insurance. Participants generally took loans to reduce debt, for emergencies, or to purchase a primary residence.
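When participants do take plan loans, the amount they can borrow is bounded from below by any plan minimum, as discussed above, and from above by the general statutory cap described in the Background. The following is a minimal sketch of that interaction; the function names, the $1,000 plan minimum, and the account values are hypothetical, and actual plan terms may impose stricter limits.

```python
# Illustrative sketch of the general statutory loan cap and a plan-set loan minimum.

def statutory_loan_cap(vested_balance, highest_loan_balance_past_year, current_loan_balance):
    # Lesser of (1) the greater of 50 percent of the vested balance or $10,000, or
    # (2) $50,000 less the excess of the highest loan balance during the prior year
    # over the current outstanding loan balance.
    first_limit = max(0.5 * vested_balance, 10_000)
    second_limit = 50_000 - (highest_loan_balance_past_year - current_loan_balance)
    return max(0, min(first_limit, second_limit))

def loan_amount_for_expense(expense, plan_minimum, vested_balance,
                            highest_loan_balance_past_year=0, current_loan_balance=0):
    """Amount a participant would borrow to cover an expense: no smaller than the plan
    minimum and no larger than the statutory cap (None if the cap is too low)."""
    cap = statutory_loan_cap(vested_balance, highest_loan_balance_past_year,
                             current_loan_balance)
    amount = max(expense, plan_minimum)
    return amount if amount <= cap else None

# A $500 expense against a $1,000 plan minimum forces a $1,000 loan.
print(loan_amount_for_expense(500, plan_minimum=1_000, vested_balance=40_000))   # 1000
# A participant with a $60,000 vested balance and no prior loans can borrow at most $30,000.
print(statutory_loan_cap(60_000, 0, 0))                                          # 30000.0
```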
Stakeholders also said that participants who experienced economic shocks stemming from job loss made early withdrawals. They said retirement plans often served as a form of insurance for those between jobs or facing a sudden economic shock, and that participants accessed their retirement accounts because, for many, they were the only source of savings. They cited personal debt, health care costs, and education as significant factors that affected employees across all income levels. Stakeholders said some participants also used their retirement savings to pay for anticipated expenses. Two plan administrators said education expenses were one of the reasons participants took hardship withdrawals. They said that participants accessed their retirement savings to address the cost of higher education, including paying off their own student loan debt or financing the college costs for family members. For example, plan administrators told us that some participants saved with the expectation of taking a hardship withdrawal to pay for college tuition. Other participants utilized hardship withdrawals to purchase a primary residence. Reasons for IRA Withdrawals Are Not Reported to IRS IRA owners generally may take withdrawals at any time and IRS does not analyze the limited information it receives on the reasons for IRA withdrawals. IRA owners can withdraw any amount up to their entire account balance at any time. In addition, IRAs have certain exceptions from the additional 10 percent tax for early distributions. For example, IRA withdrawals taken for qualified higher education expenses, certain health insurance premiums, and qualified "first-time" home purchases (up to $10,000) are excepted from the additional 10 percent tax. IRA owners who take an IRA distribution receive a Form 1099-R or similar statement from their provider. On the Form 1099-R, IRA providers generally identify whether the withdrawal, among other things, can be categorized as a normal distribution, an early distribution, or a direct distribution to a qualified plan or IRA. For an early distribution, the IRA provider may identify whether a known exception to the additional 10 percent tax applies. For their part, IRA owners are required to report early withdrawals on their income tax returns, as well as the reason for any exception from the additional 10 percent tax for a limited number of items. In written responses to questions, an IRS official indicated that IRS collected data on the exception reason codes, but did not use them. Stakeholders Suggested Strategies to Balance Access to Early Withdrawals with the Need to Build Long-term Retirement Savings Some Plan Sponsors Have Implemented Policies to Preserve the Benefits of Early Withdrawals While Reducing Their Long-term Effects Preserving 401(k) Account Balances at Job Separation Some plan sponsors we interviewed had policies in place that may reduce the long-term impact of early withdrawals of retirement savings taken at job separation. Policies suggested by plan sponsors included: Providing a periodic installment distribution option: Although some plan sponsors may require participants wanting a distribution to take their full account balance at job separation, other plan sponsors provided participants with an option of receiving their account balance in periodic installments. For example, one plan sponsor gave separating participants the option to receive periodic installment distributions at intervals determined by the participants.
This plan sponsor said separating participants could select distributions on a monthly, quarterly, semi-annual or annual basis. These participants could also elect to stop distributions at any time, preserving the remaining balance in the employer’s plan. The plan sponsor said the plan adopted this option to help separating participants address any current financial needs, while preserving some of the account balance for retirement. Another plan sponsor adopted a similar policy to address the cyclical nature of the employer’s business, which can result in participants being terminated and rehired within one year. Offering partial distributions: One plan sponsor provided separated participants with the option of receiving a one-time, partial distribution. If a participant opted for partial distribution, the plan sponsor issued the distribution for the requested sum and preserved the remainder of the account balance in the plan. The plan sponsor adopted the partial distribution policy to provide separating participants with choices for preserving account balances, while simultaneously providing access to address any immediate financial needs. Providing plan loan repayment options for separated participants: Some plan sponsors allowed former participants to continue making loan repayments after job separation. Loan repayments after job separation reduce the loan default risk and associated tax implications for participants. Some plan sponsors said that separating participants who have the option to continue repaying an outstanding loan balance generally have three options: (1) to continue repaying the outstanding loan, (2) to repay the entire balance of the loan at separation within a set repayment period, or (3) not to repay the loan. Those participants who continue repaying their loans after separation generally have the option to set up automatic debit payments to facilitate the repayment. Those separated participants who do not set up loan repayment terms within established timeframes, or do not make a payment after the loan repayment plan has been established, default on their loan and face the associated tax consequences, including, possibly, an additional 10 percent tax for early distributions. Setting Limits on Plan Loans Some plan sponsors we spoke with placed certain limits on participant loan activity, which may reduce the incidence of loan defaults (see fig. 2). Limiting loan amounts to participant contributions: Some plan sponsors said they limited plan loans to participant contributions and any investment earnings from those contributions to reduce early withdrawals of retirement savings. For example, one plan sponsor’s policy limited the amount a participant could borrow from their plan to 50 percent of participant contributions and earnings, compared to 50 percent of the total account balance. Implementing a waiting period after loan repayment before a participant can access a new loan: Some plan sponsors said they had implemented a waiting period between plan loans, in which a participant, having fully paid off the previous loan, was temporarily ineligible to apply for another. Among plan sponsors who implemented a waiting period, the length varied from 21 days to 30 days. Reducing the number of outstanding loans: Some plan sponsors we spoke with limited the number of outstanding plan loans to either one or two loans. 
One plan sponsor had previously allowed one new loan each calendar year, but subsequently revised plan policy to allow participants to have a total of two outstanding loans. The plan sponsor said the rationale was to balance limiting participant loan behavior with the ability of participants to access their account balance. Reducing Impact of Economic Shocks Some plan sponsors said they had expanded the definition of immediate and heavy financial need beyond the IRS safe harbor to better align with the economic needs of their participants. For example, one plan sponsor approved a hardship withdrawal to help a participant pay expenses related to a divorce settlement. Another plan sponsor developed an expanded list of qualifying hardships, including past-due car, mortgage, or rent payments; and payday loan obligations. Some plan sponsors implemented loan programs outside their plan, contracting with third-party vendors to provide short-term loans to employees. For example, one plan sponsor instituted a loan program that allowed employees to borrow up to $5,000 from a third-party vendor and repay the loan through payroll deduction. This plan sponsor said the loan program featured an 8 to 12 percent interest rate, and approval was not based on a participant's credit history. The plan sponsor also observed that they had fewer 401(k) loan applications since the third-party loan program was implemented. A second plan sponsor instituted a similar loan program that allowed employees to borrow up to $500 interest free from a third-party vendor. According to this sponsor, to qualify for a loan, an employee must demonstrate financial hardship and have no outstanding plan loans, and is required to attend a financial counseling course if their loan is approved. Improving Participants' Financial Wellness Some plan sponsors said they had provided workplace-based financial wellness resources for their participants to improve their financial literacy. Some implemented optional financial wellness programs that covered topics such as investment education, how plan loans work, and the importance of saving for emergencies. These plan sponsors told us they offered on-site financial counseling with representatives of the plan administrator to help provide guidance on financial decision-making; however, other plan sponsors said that, despite their investment in participant-specific financial education, participation in these programs was low. Stakeholders Suggested Strategies That Could Preserve the Benefits of Early Withdrawals While Reducing Their Long-term Effects Stakeholders suggested strategies that they believed could help mitigate the long-term effects of early withdrawals of retirement savings on IRA owners and plan participants. They noted that any of these proposed strategies, if implemented, could (1) increase the costs of administering IRAs and plans, (2) require changes to federal law or regulations, and (3) involve tradeoffs between providing access to retirement savings and preserving savings for retirement. Strategies for IRAs Stakeholders suggested several strategies that, if implemented, could help reduce early withdrawals from IRAs. These strategies centered on modifying existing rules to reduce early withdrawals from IRAs (and, in turn, the amount paid as a result of the additional 10 percent tax for early distributions).
Specifically, stakeholders suggested: Raising the age at which the additional 10 percent tax applies: Some stakeholders noted that raising the age at which the additional 10 percent tax for early distributions applies from 59½ to 62 would align it with the earliest age of eligibility to claim Social Security and may encourage individuals to consider a more comprehensive retirement distribution strategy. However, other stakeholders cautioned that it could have drawbacks for employees in certain situations. For example, individuals who lose a job late in their careers could face additional tax consequences for accessing an IRA before reaching age 62. In addition, one stakeholder said some individuals may shift to a part-time work schedule later in their careers as they transition to retirement and plan on taking IRA withdrawals to compensate for their lower wages. Allowing individuals to roll existing plan loans into an IRA: Some stakeholders said that allowing individuals to include an existing plan loan as part of a rollover into an IRA, although currently not allowed, would likely reduce plan loan defaults by giving individuals a way to continue repaying the loan balance. One stakeholder suggested that rolling an existing plan loan into an IRA could be administratively challenging for IRA providers, but doing so to repay the loan may ultimately preserve retirement savings. Allowing IRA loans: While an IRA loan is currently a prohibited transaction that could cause the account to cease to be treated as an IRA, some stakeholders suggested that IRA loans could theoretically reduce the amounts being permanently removed from the retirement system through early IRA withdrawals. One stakeholder said an IRA loan would present a good alternative to an early withdrawal from an IRA account because it would give the account holder access to the balance, defer any tax implications, and improve the likelihood the loaned amount would ultimately be repaid. However, another stakeholder said that allowing IRA loans could increase early withdrawals, given the limited oversight of IRAs, as well as additional administrative costs and challenges for IRA providers. Strategies for 401(k) Plans Stakeholders suggested several strategies that, if implemented, could reduce the effect of cashouts at job separation from 401(k) plans. Simplifying the rollover process: Stakeholders proposed two modifications to the current rollover process that they believed could make the process more seamless and reduce the incidence of cashouts. First, stakeholders suggested that a third-party entity tasked with facilitating rollovers between employer plans for a separating participant would likely reduce the incidence of cashouts at job separation. Such an entity could automatically route a participant's account balance from the former plan to a new one. One stakeholder said having a third-party entity facilitate the rollover would eliminate the need for a plan participant to negotiate the process. Such a service, however, would likely come at a cost that would likely be passed on to participants. Stakeholders also suggested that direct rollovers of account balances between plans could further reduce the incidence of cashouts. One stakeholder, however, cautioned that direct rollovers could have downsides for some participants. For example, participants who prefer to keep their balance in their former employer's plan but provide no direction to the plan sponsor may inadvertently find their account balance rolled into a new employer's plan.
Restricting cashouts to participant contributions only: Some stakeholders suggested limiting the assets a participant may access at job separation. For example, some stakeholders said that participants should not be allowed to cash out vested plan sponsor contributions, thus preserving those contributions and their earnings for retirement. However, this strategy could result in participants overseeing and monitoring several retirement accounts. Stakeholders suggested several strategies that, if implemented, could limit the adverse effect of hardship withdrawals on retirement savings. Narrowing the IRS safe harbor: Although some plan sponsors are expanding the reasons for a hardship to align with perceived employee needs, some stakeholders said narrowing the IRS safe harbor would likely reduce the incidence of early withdrawals. For example, some stakeholders suggested narrowing the definition of a hardship to exclude the purchase of a primary residence or postsecondary education costs. In addition, one stakeholder said alternatives exist to finance home purchases (mortgages) and postsecondary education (student loans). Stakeholders noted that eliminating the purchase of a primary residence and postsecondary education costs from the IRS safe harbor would make hardship withdrawals a tool more strictly used to avoid sudden and unforeseen economic shocks. In combination with the two exclusions, one stakeholder suggested consideration be given to either reducing or eliminating the additional 10 percent tax for early distributions that may apply to hardship withdrawals. Replacing hardship withdrawals with hardship loans: Stakeholders said replacing a hardship withdrawal, which permanently removes money from the retirement system, with a no-interest hardship loan, which would be repaid to the account, would reduce early withdrawals. Under this suggestion, if the loan were not repaid within a predetermined time frame, the remaining loan balance could be considered a deemed distribution and treated as income (similar to the way a hardship withdrawal is treated now). Incorporating emergency savings features into 401(k) plans: Stakeholders said incorporating an emergency savings account into the 401(k) plan structure may help participants absorb economic shocks and better prepare for both short-term financial needs and long-term retirement planning. (See fig. 3.) In addition, stakeholders said participants with emergency savings accounts could be better prepared to avoid high-interest credit options, such as credit cards or payday loans, in the event of an economic shock. Stakeholders had several ideas for implementing emergency savings accounts. For example, one stakeholder suggested that, were it allowed, plan sponsors could revise automatic account features to include automatic contributions to an emergency savings account. Some stakeholders also said emergency savings accounts could be funded with after-tax participant contributions to eliminate the tax implications when withdrawing money from the account. However, another stakeholder said emergency savings contributions could reduce contributions to a 401(k) plan. Conclusions In the United States, the amount of aggregate savings in retirement accounts continues to grow, with nearly $17 trillion invested in 401(k) plans and IRAs. Early access to retirement savings in these plans may incentivize plan participation, increase participant contributions, and provide participants with a way to address their financial needs.
However, billions of dollars continue to leave the retirement system early. Although these withdrawals represent a small percentage of overall assets in these accounts, they can erode or even deplete an individual's retirement savings, especially if the retirement account represents their sole source of savings. Employers have implemented plan policies that seek to balance the short-term benefits of providing participants early access to their accounts with the long-term need to build retirement savings. However, the way plan sponsors treat outstanding loans after a participant separates from employment has the potential to adversely affect retirement savings. In the event of unexpected job loss or separation, plan loans can leave participants liable for additional taxes. Currently, the incidence and amount of loan offsets in 401(k) plans cannot be determined due to the way DOL collects data from plan sponsors. Additional information on loan offsets would provide insight into how plan loan features might affect long-term retirement savings. Without clear data on the incidence of these loan offsets, which plan sponsors are generally required to include (but not itemize) on the Form 5500, the overall extent of unrepaid plan loans in 401(k) plans cannot be known. Recommendation for Executive Action To better identify the incidence and amount of loan offsets in 401(k) plans nationwide, we recommend that the Secretary of Labor direct the Assistant Secretary for EBSA, in coordination with IRS, to revise the Form 5500 to require plan sponsors to report qualified plan loan offsets as a separate line item distinct from other types of distributions. (Recommendation 1) Agency Comments and Our Evaluation We provided a draft of this product to the Department of Labor, the Department of the Treasury, and the Internal Revenue Service for review and comment. In their written comments, reproduced in appendixes IV and V, respectively, DOL and IRS generally agreed with our findings, but neither agreed nor disagreed with our recommendation. DOL said it would consider our recommendation as part of its overall evaluation of the Form 5500, and IRS said it would work with DOL as it responds to our recommendation. The Department of the Treasury provided no formal written comments. In addition, DOL, IRS, Treasury, and two third-party subject matter experts provided technical comments, which we incorporated in the report, as appropriate. As agreed with your staff, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Labor, Secretary of the Treasury, Commissioner of Internal Revenue, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or jeszeckc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff making key contributions to this report are listed in appendix VI.
Appendix I: Objectives, Scope, and Methodology The objectives of this study were to determine (1) the incidence and amount of retirement savings being withdrawn early; (2) what is known about the factors that might lead individuals to access their retirement savings early; and (3) what strategies or policies, if any, might reduce the incidence and amount of early withdrawals of retirement savings. Data Analysis To examine the incidence and amount of early withdrawals from individual retirement accounts (IRA) and 401(k) plans, we analyzed the most recent nationally representative data available in three relevant federal data sources, focusing our analysis on individuals in their prime working years (ages 25 to 55), when possible. For consistency, we analyzed data from 2013 from each data source because it was the most recent year that data were available for all types of early withdrawals we examined. We adjusted all dollar-value estimates derived from each data source for inflation and reported them in constant 2017 dollars. We determined that the data from these sources were sufficiently reliable for the purposes of our report. First, to examine the recent incidence and amount of early withdrawals from IRAs and the associated tax consequences for individuals ages 25 to 55, we analyzed estimates published by the Internal Revenue Service's (IRS) Statistics of Income Division for tax year 2013, which are based on tax returns as filed by taxpayers before enforcement activity. Specifically, we analyzed the number of taxpayers reporting early withdrawals from their IRAs in 2013 and the aggregate amount of these withdrawals. To provide additional context on the scope of these early withdrawals, we analyzed the age cohort's total IRA contributions and the end-of-year fair market value of the IRAs, and compared these amounts to the aggregate amount withdrawn. To examine the incidence and amount of taxes paid as a result of the additional 10 percent tax for early distributions, we analyzed estimates of the additional 10 percent tax paid on qualified retirement plans in 2013. Although IRS did not delineate these data by age, we used these data as a proxy because IRS assesses the additional 10 percent tax on distributions to taxpayers who have not reached age 59½. Given the delay between a withdrawal date and the date of the tax filing, it is possible that some of the taxes were paid in the year following the withdrawal. We reviewed technical documentation and developed the 95 percent confidence intervals that correspond to these estimates. Second, to examine the incidence and amount of early withdrawals from 401(k) plans, we analyzed data included in the 2014 panel of the U.S. Census Bureau's Survey of Income and Program Participation (SIPP), a nationally representative survey of household income, finances, and use of federal social safety net programs, along with retirement account contribution and withdrawal data included in the SIPP's Social Security Administration (SSA) Supplement on Retirement, Pensions, and Related Content. Specifically, we developed percentage and dollar-value estimates of the incidence and amount of lump sum payments received and hardship withdrawals taken by participants in 401(k) plans in 2013. Because the SIPP is based upon a complex probability sample, we used Balanced Repeated Replication methods with a Fay adjustment to derive all percentage, dollar-total, and dollar-ratio estimates and their 95 percent confidence intervals.
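The following is a minimal sketch of how a Fay-adjusted Balanced Repeated Replication variance and 95 percent confidence interval can be computed from replicate estimates. The replicate values, function name, and Fay coefficient of 0.5 (a commonly used value) shown here are illustrative assumptions; in practice, the replicate weights and the Fay coefficient published with the survey would be used.

```python
# Illustrative Fay-adjusted BRR variance calculation (hypothetical replicate values).

import math

def fay_brr_confidence_interval(full_sample_estimate, replicate_estimates, fay_k=0.5):
    """Return (estimate, lower, upper) using the Fay-adjusted BRR variance formula:
    variance = sum((replicate - full)^2) / (R * (1 - fay_k)^2)."""
    r = len(replicate_estimates)
    variance = sum((rep - full_sample_estimate) ** 2 for rep in replicate_estimates)
    variance /= r * (1 - fay_k) ** 2
    margin = 1.96 * math.sqrt(variance)   # normal approximation for a 95 percent interval
    return full_sample_estimate, full_sample_estimate - margin, full_sample_estimate + margin

# Hypothetical example: a 4.0 percent full-sample estimate with six replicate estimates.
estimate, low, high = fay_brr_confidence_interval(4.0, [4.1, 3.9, 4.2, 3.8, 4.05, 3.95])
print(f"{estimate:.2f} percent (95% CI: {low:.2f} to {high:.2f})")
```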
To better understand the characteristics of individuals who received a lump sum and/or took a hardship withdrawal in 2013, we analyzed a range of selected individual and household demographic variables and identified characteristics associated with a higher incidence of withdrawals. We applied domain estimation methods to make estimates for these subpopulations. (For a list of variables used and the results of our analysis, please see appendix III.) We attempted to develop a multiple regression model to estimate the unique association between each characteristic and withdrawals, but determined that the SIPP did not measure key variables in enough detail to develop persuasive causal explanations. The sample size of respondents receiving lump sums was too small to precisely estimate the partial correlations of many demographic variables at once. Even with adequate sample sizes, associations between broad demographic variables, such as age and income, likely reflected underlying causes, such as retirement and financial planning strategies, which SIPP did not measure in detail. Third, to examine the incidence and amount of unrepaid plan loans from 401(k) plans, we analyzed the latest filing of annual plan data that plan sponsors reported on the Form 5500 to the Department of Labor (DOL) for the 2013 plan year. We looked at unrepaid plan loans reported by sponsors of large plans (Schedule H) and small plans (Schedule I). For each schedule, we analyzed two variables related to unrepaid plan loans: (1) deemed distributions of participant loans (which captures the amount of loan defaults by active participants) and (2) benefits distributed directly to participants (which includes plan loan offsets for a variety of reasons, including plan loans that remain unpaid after a participant separates from a plan). Because plan sponsors report data in aggregate and do not differentiate by participant age, we calculated and reported the aggregate of loan defaults identified as deemed distributions in both schedules. We could not determine the amount of plan loan offsets based on the way that plan sponsors are required to report them. Specifically, plan sponsors are required to treat unrepaid loans occurring after a participant separates from a plan as reductions or offsets in plan assets, and are required to report them as part of a larger commingled category of offsets that also includes large-dollar items like rollovers of account balances to another qualified plan or IRA. As a result, we were unable to isolate and report the amount of this category of unrepaid plan loans. Literature Search To identify what is known about the factors that might lead individuals to access their 401(k) plans and IRAs and what strategies or policies might reduce the early withdrawal of retirement savings, we performed a literature search using multiple databases to locate documents regarding early withdrawals of retirement savings published since 2008 and to identify experts for interviews. The search yielded a wide variety of scholarly articles, published articles from various think tank organizations, congressional testimonies, and news reports. We reviewed these studies and identified factors that lead individuals to withdraw retirement savings early, as well as potential strategies or policies that might reduce this behavior. The search also helped us identify additional potential interviewees. 
Interviews To answer our second and third objectives, we visited four metropolitan areas and conducted 51 interviews with a wide range of stakeholders that we identified in the literature. In some cases, to accommodate stakeholder schedules, we conducted phone interviews or accepted written responses. Specifically, we interviewed human resource professionals from 22 private-sector companies (including 4 written responses), representatives from 8 plan administrators, 13 retirement research experts (including 1 written response), representatives from 4 industry associations, representatives from 2 participant advocacy organizations, and representatives from 2 financial technology companies. We conducted in-person interviews at four sites to collect information from three different groups: (1) human resource officials in private-sector companies, (2) top 20 plan administrators or recordkeepers, and (3) retirement research experts. We selected four metropolitan areas that were home to representatives of each group as site visit locations. To select companies for potential interviews, we reached out to a broad sample of Fortune 500 companies that offered a 401(k) plan to employees and varied by geographic location, industry, and number of employees. We selected plan administrators based on Pensions and Investments rankings for assets under management and number of individual accounts. We selected retirement research experts who had published research on early withdrawals from retirement savings, as well as experts that we had interviewed in our prior work. Based on these criteria, we conducted site visits in Boston, Massachusetts; Chicago, Illinois; the San Francisco Bay Area, California; and Seattle, Washington. We held interviews with parties in each category who responded affirmatively to our request. In each interview, we solicited names of additional stakeholders to interview. We also interviewed representatives of organizations, such as financial technology companies, participant advocacy organizations, industry associations, and plan administrators focused on small businesses, whose work we deemed relevant to our study. We developed a common question set for each stakeholder category that we interviewed. We based our interview questions on our literature review, research objectives, and the kind of information we were soliciting from each stakeholder category. In each interview, we asked follow-up questions based on the specific responses provided by interviewees. In our company interviews, we asked about how companies administered retirement benefits for employees; company policies and procedures regarding separating employees and the disposition of their retirement accounts; company policies regarding plan loans, hardship withdrawals, and rollovers from other 401(k) plans; and company strategies to reduce early withdrawals from retirement savings. In our interviews with plan administrators, we asked about factors that led individuals to access their retirement savings early, how plan providers interacted with companies and separating employees, available data on loans and hardship withdrawals from client retirement plans, and potential strategies to reduce the incidence and amount of early withdrawals.
In our interviews with retirement research experts, financial technology companies, participant advocacy organizations, and industry associations, we asked about factors that led individuals to make early withdrawals from their retirement savings and any potential strategies that may reduce the incidence and amount of early withdrawals. In our interviews with plan administrators and retirement research experts, we also provided a supplementary table outlining 37 potential strategies to reduce early withdrawals from retirement savings. We asked interviewees to comment on the strengths and weaknesses of each strategy in terms of its potential to reduce early withdrawals, and gave them the opportunity to provide other potential strategies not listed in the tables. We developed the list of strategies based on the results of our literature review. Some interviewees also provided us with additional data and documents to assist our research. For example, some companies and plan administrators we interviewed provided quantitative data on the number of plan participants, the average cashout or rollover amounts, the percentage of participants who took loans or hardship withdrawals from their retirement accounts, and known reasons for these withdrawals. Some research experts also provided us with documentation, including published articles and white papers that supplemented our interviews and literature review. All data collected through these methods are nongeneralizable and reflect the views and experiences of the respondents and not the entire population of their respective constituent groups. Analysis of Interview Responses To answer our second and third objectives, we analyzed the content of our stakeholder interview responses and corroborated our analysis with information obtained from our literature review and quantitative information provided by our interviewees. To examine what is known about the factors leading individuals to access retirement savings early, we catalogued common factors that stakeholders identified as contributing to early withdrawals from retirement savings. We also collected information on plan rules governing early participant withdrawals of retirement savings. To identify potential strategies or policies that might reduce the incidence and amount of early withdrawals, we analyzed interview responses and catalogued (1) company practices that employers identified as having an effect in reducing early withdrawals and (2) strategies that stakeholders suggested could achieve a similar outcome. GAO is not endorsing or recommending any strategy in this report, and has not evaluated these strategies for their behavioral or other effects on retirement savings or on tax revenues. Appendix II: Selected Provisions Related to Early Withdrawals from 401(k) Plans and Individual Retirement Accounts (IRAs) Requirements: Provides an exception for distributions for qualified higher education expenses and for qualified "first-time" home purchases made before age 59½ from the additional 10 percent tax for early distributions. Defines "qualified first-time homebuyer distribution" and "first-time homebuyer," and prescribes the lifetime dollar limit on such distributions, among other things. Allows eligible individuals to make tax-deductible contributions to individual retirement accounts, subject to limits based, for example, on income and pension coverage.
Provides for the loss of exemption for an IRA if the IRA owner engages in a prohibited transaction, which results in the IRA being treated as distributing all of its assets to the IRA owner at the fair market value on the first day of the year in which the transaction occurred. Defines a prohibited transaction to include the lending of money or other extension of credit between a plan and a disqualified person. Allows eligible individuals to make contributions to a Roth IRA that are not tax-deductible. Distributions from the account can generally be treated as a qualified distribution if the distribution is made on or after the date the Roth IRA owner reaches age 59½ and after the 5-taxable-year period beginning when the account was initially opened. Defines a prohibited transaction to include the lending of money or other extension of credit between a plan and a disqualified person. Appendix III: Estimated Incidence of Certain Early Withdrawals of Retirement Savings (table of estimated incidence by category for 401(k) plans and 401(k) plans with balances of $1,000 or more) Legend: * Sampling error was too large to report an estimate. Appendix V: Comments from the Internal Revenue Service Appendix VI: GAO Contact and Staff Acknowledgment In addition to the contact named above, Dave Lehrer (Assistant Director); Jonathan S. McMurray (Analyst-in-Charge); Gustavo O. Fernandez; Sean Miskell; Jeff Tessin; and Adam Wendel made key contributions to this report. James Bennett, Holly Dye, Sara Edmondson, Sarah Gilliland, Sheila R. McCoy, Ed Nannenhorn, Katya Rodriguez, MaryLynn Sergent, Linda Siegel, Rachel Stoiko, Frank Todisco, and Sonya Vartivarian also provided support. Related GAO Reports The Nation's Fiscal Health: Action Is Needed to Address the Federal Government's Future. GAO-18-299SP. Washington, D.C.: June 21, 2018. The Nation's Retirement System: A Comprehensive Re-evaluation Is Needed to Better Promote Future Retirement Security. GAO-18-111SP. Washington, D.C.: October 18, 2017. Retirement Security: Improved Guidance Could Help Account Owners Understand the Risks of Investing in Unconventional Assets. GAO-17-102. Washington, D.C.: December 8, 2016. 401(K) Plans: Effects of Eligibility and Vesting Policies on Workers' Retirement Savings. GAO-17-69. Washington, D.C.: October 21, 2016. Retirement Security: Low Defined Contribution Savings May Pose Challenges. GAO-16-408. Washington, D.C.: May 5, 2016. Retirement Security: Shorter Life Expectancy Reduces Projected Lifetime Benefits for Lower Earners. GAO-16-354. Washington, D.C.: March 25, 2016. Social Security's Future: Answers to Key Questions. GAO-16-75SP. Washington, D.C.: October 27, 2015. Retirement Security: Federal Action Could Help State Efforts to Expand Private Sector Coverage. GAO-15-556. Washington, D.C.: September 10, 2015. Highlights of a Forum: Financial Literacy: The Role of the Workplace. GAO-15-639SP. Washington, D.C.: July 7, 2015. 401(K) Plans: Greater Protections Needed for Forced Transfers and Inactive Accounts. GAO-15-73. Washington, D.C.: November 21, 2014. Older Americans: Inability to Repay Student Loans May Affect Financial Security of a Small Percentage of Retirees. GAO-14-866T. Washington, D.C.: September 10, 2014. Financial Literacy: Overview of Federal Activities, Programs, and Challenges. GAO-14-556T.
Washington, D.C.: April 30, 2014. Retirement Security: Trends in Marriage and Work Patterns May Increase Economic Vulnerability for Some Retirees. GAO-14-33. Washington, D.C.: January 15, 2014. 401(K) Plans: Labor and IRS Could Improve the Rollover Process for Participants. GAO-13-30. Washington, D.C.: March 7, 2013. Retirement Security: Women Still Face Challenges. GAO-12-699. Washington, D.C.: July 19, 2012. 401(K) Plans: Policy Changes Could Reduce the Long-term Effects of Leakage on Workers' Retirement Savings. GAO-09-715. Washington, D.C.: August 28, 2009.
Why GAO Did This Study Federal law encourages individuals to save for retirement through tax incentives for 401(k) plans and IRAs, the predominant forms of retirement savings in the United States. In 2017, U.S. plans and IRAs reportedly held investments worth nearly $17 trillion. Federal law also allows individuals to withdraw assets from these accounts under certain circumstances. DOL and IRS oversee 401(k) plans, and collect annual plan data, including financial information, on the Form 5500. For both IRAs and 401(k) plans, GAO was asked to examine (1) the incidence and amount of early withdrawals; (2) factors that might lead individuals to access retirement savings early; and (3) policies and strategies that might reduce the incidence and amounts of early withdrawals. To answer these questions, GAO analyzed data from IRS, the Census Bureau, and DOL from 2013 (the most recent complete data available); and interviewed a diverse range of stakeholders identified in the literature, including representatives of companies sponsoring 401(k) plans, plan administrators, subject matter experts, industry representatives, and participant advocates. What GAO Found In 2013, individuals in their prime working years (ages 25 to 55) removed at least $69 billion (+/- $3.5 billion) of their retirement savings early, according to GAO's analysis of 2013 Internal Revenue Service (IRS) and Department of Labor (DOL) data. Withdrawals from individual retirement accounts (IRA), totaling $39.5 billion (+/- $2.1 billion), accounted for much of the money removed early, were equivalent to 3 percent (+/- 0.15 percent) of the age group's total IRA assets, and exceeded their IRA contributions in 2013. Participants in employer-sponsored plans, like 401(k) plans, withdrew at least $29.2 billion (+/- $2.8 billion) early as hardship withdrawals, lump sum payments made at job separation (known as cashouts), and loan balances that borrowers did not repay. Hardship withdrawals in 2013 were equivalent to about 0.5 percent (+/- 0.06 percent) of the age group's total plan assets and about 8 percent (+/- 0.9 percent) of their contributions. However, the incidence and amount of certain unrepaid plan loans cannot be determined because the Form 5500, the federal government's primary source of information on employee benefit plans, does not capture these data. Stakeholders GAO interviewed identified flexibilities in plan rules and individuals' pressing financial needs, such as out-of-pocket medical costs, as factors affecting early withdrawals of retirement savings. Stakeholders said that certain plan rules, such as setting high minimum loan thresholds, may cause individuals to take out more of their savings than they need. Stakeholders also identified several elements of the job separation process, such as difficulties transferring account balances to a new plan and plans requiring the immediate repayment of outstanding loans, as relevant factors. Stakeholders GAO interviewed suggested strategies they believed could balance early access to accounts with the need to build long-term retirement savings. For example, plan sponsors said allowing individuals to continue to repay plan loans after job separation, restricting participant access to plan sponsor contributions, allowing partial distributions at job separation, and building emergency savings features into plan designs could help preserve retirement savings (see figure).
However, they noted, each strategy involves tradeoffs, and the strategies' broader implications require further study.

What GAO Recommends

GAO recommends that, as part of revising the Form 5500, DOL and IRS require plan sponsors to report the incidence and amount of all 401(k) plan loans that are not repaid. DOL and IRS neither agreed nor disagreed with our recommendation.
Background

Remittance Transfer Methods

Remittances can be sent through money transmitters and depository institutions, among other organizations. A typical remittance sent through a bank may be in the thousands of dollars, while the typical remittance sent by money transmitters is usually in the hundreds of dollars. International remittances through money transmitters and banks may include cash-to-cash money transfers, international wire transfers, some prepaid money card transfers, and automated clearinghouse transactions.

Transfers through money transmitters. Historically, many consumers have chosen to send remittances through money transmitters due to convenience, cost, familiarity, or tradition. Money transmitters typically work through agents—separate business entities generally authorized to, among other things, send and receive money transfers. Most remittance transfers are initiated in person at retail outlets that offer these services. Money transmitters generally operate through their own retail storefronts, or through grocery stores, financial services outlets, convenience stores, and other retailers that serve as agents. In one type of common money transmitter transaction—known as a cash-to-cash transfer—a sender walks into a money transmitter agent location and provides cash to cover the transfer amount and fees. Generally, for transfers at or above $3,000, senders must provide basic information about themselves (typically a name and address, among other information) at the time of the transfer request. The agent processes the transaction, and the money transmitter's headquarters screens it for Bank Secrecy Act (BSA) compliance. The money is then transferred to a recipient, usually through a distributor agent in the destination country. The money may be wired through the money transmitter's bank to the distributor agent's bank (see fig. 1), or transferred by other means to a specified agent in the recipient's country. The distributor agent pays out cash to the recipient in either U.S. dollars or local currency. Money transmitters also offer other transfer methods, including online or mobile technology, prepaid money cards or international money orders sent by the U.S. Postal Service, cash courier services, or informal value transfer systems such as hawala.

Transfers through banks. Another method that remittance senders use to send funds is a bank-to-bank transfer. Figure 2 is an example of a simple funds transfer between two customers with only the remittance sender's and remittance recipient's banks involved. If a remittance sender's bank does not have a direct relationship with the remittance recipient's bank, the bank-to-bank transfer scenario becomes more complicated. In such cases, one or more financial institutions may rely upon correspondent banking relationships to complete the transaction, as illustrated in figure 3.

Federal and State Oversight of Money Transmitters and Banks

Both federal and state agencies oversee money transmitters and banks. In general, money transmitters must register with the Financial Crimes Enforcement Network (FinCEN) and provide information on their structure and ownership. According to Treasury, in all states except one, money transmitters are required to obtain licenses from states in which they are incorporated or conducting business. Banks are supervised by state and federal banking regulators according to how they are chartered, and the banks provide related information when obtaining their charter.
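To make the transfer mechanics above concrete, the following sketch models, in highly simplified form, the two channels just described: a cash-to-cash transfer handed to a money transmitter agent (with basic sender information collected at or above $3,000) and a bank-to-bank transfer that is routed directly when the two banks have a relationship or through a correspondent bank when they do not. The class and function names are hypothetical and the logic is illustrative only; it is not based on any particular company's or bank's actual system.

```python
from dataclasses import dataclass, field
from typing import List, Optional

ID_INFO_THRESHOLD = 3_000  # senders generally provide basic information at or above this amount


@dataclass
class CashToCashTransfer:
    """Simplified cash-to-cash transfer handled by a money transmitter agent."""
    sender: str
    recipient: str
    amount: float
    destination_country: str
    sender_info: Optional[dict] = None  # name, address, and other basic information

    def accept_at_agent(self) -> None:
        # At or above the threshold, the agent collects basic sender information.
        if self.amount >= ID_INFO_THRESHOLD and not self.sender_info:
            raise ValueError("Sender identification required for transfers of $3,000 or more")

    def screen_at_headquarters(self) -> bool:
        # Placeholder for the money transmitter's BSA compliance screening.
        return True

    def pay_out(self) -> str:
        # A distributor agent in the destination country pays the recipient in cash.
        return f"Paid {self.amount:,.2f} to {self.recipient} in {self.destination_country}"


@dataclass
class Bank:
    name: str
    direct_relationships: List[str] = field(default_factory=list)


def route_bank_transfer(sender_bank: Bank, recipient_bank: Bank,
                        correspondents: List[Bank]) -> List[str]:
    """Return the chain of banks a funds transfer would pass through."""
    if recipient_bank.name in sender_bank.direct_relationships:
        return [sender_bank.name, recipient_bank.name]           # simple transfer (fig. 2)
    for corr in correspondents:                                   # correspondent banking (fig. 3)
        if (corr.name in sender_bank.direct_relationships
                and recipient_bank.name in corr.direct_relationships):
            return [sender_bank.name, corr.name, recipient_bank.name]
    raise LookupError("No banking path available; a non-banking channel would be needed")
```

The final branch mirrors the situation discussed later in the report: when no correspondent path exists, the transfer cannot move through banking channels at all.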
The key federal banking regulators include OCC, FDIC, the Federal Reserve, and the National Credit Union Administration (NCUA). FinCEN often works with federal and state regulators. For example, as administrator of the BSA, FinCEN issues BSA regulations and has delegated examination authority for BSA compliance to the federal banking regulators for banks within their jurisdictions. Further, the federal banking regulators have issued regulations requiring institutions under their supervision to establish and maintain a BSA compliance program. FinCEN has also delegated examination authority for BSA compliance for money transmitters to the Internal Revenue Service (IRS). Money transmitters are subject to the BSA but are not examined by federal regulators for safety and soundness. To ensure consistency in the application of BSA requirements, in 2005 the federal banking regulators collaborated with FinCEN on a BSA examination manual that was issued by the Federal Financial Institutions Examination Council for federal bank examiners conducting BSA examinations of banks. Similarly, in 2008 FinCEN issued a BSA examination manual to guide reviews of money transmitters, including reviews by the IRS and state regulators. The manual for BSA examinations of banks was updated in 2014 to further clarify supervisory expectations and regulatory changes. FinCEN has authority for enforcement and compliance under the BSA and may impose civil penalties and seek injunctions to compel compliance. In addition, each of the federal banking regulators has the authority to initiate enforcement actions against supervised institutions for violations of law and also impose civil money penalties for BSA violations. Under the BSA, the IRS also has authority for investigating criminal violations. The U.S. Department of Justice prosecutes violations of federal criminal money laundering statutes and violations of the BSA, and several law enforcement agencies can conduct BSA-related criminal investigations.

Components of Anti-Money Laundering Compliance Programs for Money Transmitters and Banks under the Bank Secrecy Act

Money transmitters and banks are subject to requirements under the BSA. They are generally required to design and implement a written anti-money laundering (AML) program, report certain transactions to Treasury, and meet recordkeeping and identity documentation requirements for funds transfers of $3,000 or more. All financial institutions subject to the BSA—including banks and money transmitters—are required to establish an anti-money laundering program. At a minimum, each AML program must establish written AML compliance policies, procedures, and internal controls; designate an individual to coordinate and monitor day-to-day compliance; provide training for appropriate personnel; and provide for an independent audit function to test for compliance. Bank Secrecy Act anti-money laundering (BSA/AML) regulations require that each financial institution tailor a compliance program that is specific to its own risks based on factors such as the products and services offered, customers, and locations served. BSA/AML compliance programs are expected to address the following:

Customer Identification Program. Banks must have written procedures for opening accounts and must specify what identifying information they will obtain from each customer. At a minimum, the bank must obtain the following identifying information from each customer before opening the account: name, date of birth, address, and identification number. In addition, banks' Customer Identification Programs must also include risk-based procedures for verifying the identity of each customer to the extent reasonable and practicable.

Customer Due Diligence. These procedures enable banks to predict, with relative certainty, the types of transactions in which a customer is likely to engage, which assists banks in determining when transactions are potentially suspicious. Banks must document their process for performing Customer Due Diligence.

Enhanced Due Diligence. Customers that banks determine may pose a higher risk for money laundering or terrorist financing are subject to these procedures. Enhanced Due Diligence for higher-risk customers helps banks understand these customers' anticipated transactions and implement an appropriate suspicious activity monitoring system. Banks review higher-risk customers and their transactions more closely at account opening and more frequently throughout the term of their relationship with the bank.

Suspicious Activity Monitoring. Banks and money transmitters must also have policies and procedures in place to monitor and identify unusual activity. They generally use two types of monitoring systems to identify or alert staff of unusual activity: manual transaction monitoring systems, which involve manual review of transaction summary reports to identify suspicious transactions, and automated monitoring systems that use computer algorithms to identify patterns of unusual activity. Large-volume banks typically use automated monitoring systems.

Banks and money transmitters also must comply with certain reporting requirements, including the following:

Currency Transaction Report. Banks and money transmitters must electronically file this type of report for each transaction in currency—such as a deposit, withdrawal, exchange, or other payment or transfer—of more than $10,000.

Suspicious Activity Report. Banks and money transmitters are required to electronically file this type of report when (1) a transaction involves or aggregates at least $5,000 in funds or other assets (for banks) or at least $2,000 in funds or other assets (for money transmitters), and (2) the institution knows, suspects, or has reason to suspect that the transaction is suspicious.
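As a rough illustration of the reporting and recordkeeping thresholds described above, the sketch below flags which BSA actions a single transaction could trigger. It is a deliberate simplification for illustration only; the actual rules contain aggregation, exemption, and timing requirements not shown here, and the function and field names are hypothetical rather than drawn from any regulation or filing system.

```python
from dataclasses import dataclass
from typing import List

CTR_THRESHOLD = 10_000          # currency transactions of more than $10,000
SAR_THRESHOLD_BANK = 5_000      # suspicious transactions of at least $5,000 (banks)
SAR_THRESHOLD_MSB = 2_000       # suspicious transactions of at least $2,000 (money transmitters)
FUNDS_TRANSFER_RECORD = 3_000   # recordkeeping and identification for funds transfers of $3,000 or more


@dataclass
class Transaction:
    amount: float
    in_currency: bool            # cash deposit, withdrawal, exchange, or payment
    is_funds_transfer: bool
    looks_suspicious: bool       # institution knows, suspects, or has reason to suspect


def required_bsa_actions(tx: Transaction, institution_type: str) -> List[str]:
    """Return the BSA actions a bank or money transmitter would need to consider."""
    actions = []
    if tx.in_currency and tx.amount > CTR_THRESHOLD:
        actions.append("File a Currency Transaction Report")
    sar_threshold = SAR_THRESHOLD_BANK if institution_type == "bank" else SAR_THRESHOLD_MSB
    if tx.looks_suspicious and tx.amount >= sar_threshold:
        actions.append("File a Suspicious Activity Report")
    if tx.is_funds_transfer and tx.amount >= FUNDS_TRANSFER_RECORD:
        actions.append("Collect and retain sender identification records")
    return actions


# Example: a $3,500 cash-to-cash transfer at a money transmitter that raises no suspicion
print(required_bsa_actions(
    Transaction(amount=3_500, in_currency=True, is_funds_transfer=True, looks_suspicious=False),
    institution_type="money transmitter"))
# -> ['Collect and retain sender identification records']
```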
Remittances to Case-Study Countries

Remittances from the United States are an important source of funds for our case-study countries—Haiti, Liberia, Nepal, and Somalia. The Organisation for Economic Co-operation and Development identified these countries as fragile states because of weak capacity to carry out basic governance functions, among other things, and their vulnerability to internal and external shocks such as economic crises or natural disasters.

Haiti. Currently the poorest country in the western hemisphere, Haiti has experienced political instability for most of its history. In January 2010, a catastrophic earthquake killed an estimated 300,000 people and left close to 1.5 million people homeless. Haiti has a population of approximately 11 million, of which roughly 25 percent live on less than the international poverty line of $1.90 per day. Nearly 701,000 Haitians live in the United States. In 2015, estimated remittances from the United States to Haiti totaled roughly $1.3 billion, or about 61 percent of Haiti's overall remittances. Official development assistance for Haiti in 2015 totaled slightly more than $1 billion.

Liberia.
In 2003, Liberia officially ended a 14-year period of civil war but continued to face challenges with rebuilding its economy, particularly following the Ebola epidemic in 2014. Liberia has a population of nearly 5 million people, of which roughly 39 percent live on less than $1.90 per day. There are roughly 79,000 Liberians in the United States. In 2015, remittances from the United States to Liberia were estimated to be roughly $328 million, which represented over half of that country’s estimated total remittances. In 2015, Liberia reported roughly $1.1 billion in official development assistance. Nepal. In 2006, Nepal ended a 10-year civil war between Maoist and government forces, which led to a peace accord, and ultimately a constitution that came into effect 9 years later. In April 2015, Nepal was struck by a 7.8 magnitude earthquake, which resulted in widespread destruction and left at least 2 million people in need of food assistance from the World Food Programme 6 weeks following the earthquake. Nepal has a population of nearly 29 million people, of which 15 percent live on less than $1.90 per day. In 2015, the foreign- born population of Nepalese in the United States was nearly 125,000, and roughly $320 million in remittances flowed from the United States to Nepal. For 2015, Nepal received over $1.2 billion in official development assistance. Somalia. Since 1969, Somalia has endured political instability and civil conflict, and is the third largest source of refugees, after Syria and Afghanistan. According to a 2017 State report, Somalia remained a safe haven for terrorists who used their relative freedom of movement to obtain resources and funds to recruit fighters, and plan and mount operations within Somalia and neighboring countries. Somalia has an estimated population of over 11 million people, of which about half the population live on less than $1.90 per day, and roughly 82,000 Somalis reside in the United States. Oxfam estimated global remittances to Somalia in 2015 at $1.3 billion, of which $215 million originated from the United States. In 2015, Somalia received nearly $1.3 billion in official development assistance. Figure 4 shows the estimated U.S. remittances to each of our case-study countries as a total amount in U.S. dollars and as a percentage of the country’s GDP. Stakeholders Identified Money Transmitters’ Loss of Banking Access as a Key Challenge, Although Remittances to Fragile Countries Continue to Flow Money transmitters serving Haiti, Liberia, Nepal, and especially Somalia reported losing bank accounts or having restrictions placed on them, which some banks confirmed. As a result, some money transmitters have relied on non-banking channels, such as cash couriers, to transfer remittances. All of the 12 money transmitters we interviewed reported losing some banking relationships in the last 10 years. Some money transmitters, including all 4 that served Somalia, said they relied on non- banking channels, such as moving cash, to transfer funds, which increased their operational costs and exposure to risks. Further, in our interviews some banks reported that they had closed the accounts of money transmitters because of the high cost of due diligence actions they considered necessary to minimize the risk of fines under BSA/AML regulations. Treasury officials noted that despite information that some money transmitters have lost banking accounts, Treasury sees no evidence that the volume of remittances is falling or that costs of sending remittances are rising. 
In addition, U.S.-based remittance senders who send money to our case-study countries reported no significant difficulties in using money transmitters to remit funds. All Money Transmitters We Interviewed Had Lost Bank Accounts, Which for Many Resulted in Higher Costs and a Shift to Non- Banking Channels All 12 money transmitters we interviewed reported that they or their agents had lost accounts with banks during the last 10 years. All 4 Somali money transmitters and many agents of the 2 Haitian money transmitters we spoke with had lost bank accounts and were facilitating remittance transfers without using bank accounts. Additionally, all 4 large money transmitters that process transfers globally (including to our case-study countries of Haiti, Liberia, and Nepal) also reported that their agents had lost accounts. Almost all of the money transmitters said they also faced difficulties in getting new accounts. Somali money transmitters were most affected by the loss of bank accounts, as 2 of the 4 Somali money transmitters had lost all corporate accounts. While some money transmitters said the banks that closed their accounts did not provide a reason, in other cases, money transmitters said the banks told them that they had received pressure from regulators to terminate money transmitter accounts. As a result of losing access to bank accounts, several money transmitters, including all of the Somali money transmitters, reported that they were using non-banking channels to transfer funds. In some cases the money transmitter was forced to conduct operations in cash, which has increased the risk of theft and forfeitures, and led to increased risk for agents and couriers. Nine of the money transmitters that we interviewed, including 3 of the 4 Somali money transmitters, some agents of one Haitian money transmitter, and some agents of the 4 larger money transmitters, rely on couriers or armored trucks to transport cash domestically (to the money transmitter’s main offices or bank) or internationally (see fig. 5). Money transmitters use cash couriers either because the money transmitter or their agents had lost bank accounts or because it was cheaper to use armored trucks than banks to move funds. In addition to the safety risks money transmitters face when they only accept cash, customers who remit large sums of money also face safety risks because they must transport cash to the money transmitter. For example, in our interviews with remittance senders to Somalia, some of them shared concerns about having to carry cash to money transmitters. Money transmitters we interviewed reported increased costs associated with moving cash and bank fees. For example, one Haitian money transmitter reported that use of couriers and trucks has increased its cost of moving money from its agents to its primary bank account by about $75,000 per month (increasing from approximately $15,000 per month using bank transfers to move funds, to $90,000 per month with the addition of couriers and trucks). Two of the money transmitters we spoke to stated that they did not have options other than to pay any fees the bank required due to the difficulty in finding new bank accounts. Money transmitters with access to bank accounts reported that bank charges for services such as cash counting, wire transfers, and monthly compliance fees had in some cases doubled or tripled, or were so high that it was less expensive to use a cash courier. 
For example, some money transmitters stated that their banks charged a monthly fee for compliance-related costs that ranged from $100 to several thousand dollars a month. Over half of the money transmitters we interviewed said the loss of bank accounts limits their growth potential. The 4 larger money transmitters reported that in some cases, the relationship between the agent and money transmitter was terminated, either by the agent or the money transmitter, if the agent no longer had a bank account. In other cases, some large money transmitters compensated for their agents' lost bank accounts by using armored vehicles to transfer cash from the agents' locations to the bank. However, the agents need to have a high volume of transactions in order to make the expense of a cash courier worthwhile. The money transmitters that we spoke with said that they have not passed their increased operational and banking costs on to remittance senders. Most said that they have not increased their fees for sending remittances or have increased fees only slightly. Some of the money transmitters said that they have compensated for higher costs by finding cost savings in other areas or that they have reduced their profit margin.

Some Banks Reported Closing or Denying Accounts for Money Transmitters and Foreign Correspondent Banks, Citing Insufficient Profit to Offset Risks and Costs

Most of the banks we interviewed expressed concerns regarding account holders who are money transmitters because they tend to be low-profit, high-risk clients. Some banks in our survey reported that constraints in accessing domestic and foreign correspondent banks were also a reason for restricting the number or percentage of money transmitter accounts.

Banks have closed accounts of money transmitters serving our case-study countries. Some banks we surveyed reported terminating accounts of money transmitters who transfer funds to Haiti, Nepal, and Somalia. While 7 of the 193 banks that responded to our survey noted that during the 3-year period from 2014 to 2016 they provided services to money transmitters that facilitated transfers to at least one of our case-study countries, 3 of these 7 banks also reported closing at least one account of a money transmitter serving at least one of the case-study countries. Risks associated with the countries or regions that the money transmitter served were cited as one reason (among others) for the closure of the account by 2 out of the 3 banks.

Money transmitters are generally low-profit clients for banks. Most of the banks we interviewed that currently offer money transmitter services stated that BSA/AML compliance costs have significantly increased in the last 10 years due to the need to hire additional staff and upgrade information systems to conduct electronic monitoring of all transactions that are processed through their systems. Some banks indicated in our survey and interviews that the revenue from money transmitter accounts was at times not sufficient to offset the costs of BSA/AML compliance, leading to terminations and restrictions on money transmitter accounts. A few banks we interviewed stated that they do not allow money transmitters to open accounts because of the BSA/AML compliance resources they require.
Moreover, according to one credit union we interviewed, money transmitters require labor- intensive banking services—such as counting cash and processing checks—that are more expensive for the banks than providing basic services to businesses that are not cash intensive. Banks expressed concerns over the adequacy of money transmitters’ ability to conduct due diligence on the money transmitter’s customers. In our survey, one bank stated that being unable to verify the identity of beneficiaries, the source of the funds, or the subsequent use of the funds was a challenge the bank faced in managing accounts for money transmitters that remit to fragile countries such as Haiti, Liberia, Nepal, and Somalia. Another bank in our survey noted that it closed some money transmitter accounts because it was unable to get any detail on the purpose of individual remittances. In addition, another bank noted that unlike bank clients, money transmitters’ customers may not have ongoing relationships with them, so money transmitters tend to know less about their customers than banks know about theirs. A few banks we interviewed expressed concern that they would be held responsible if, despite the bank carrying out due diligence, authorities detect an illicit transaction has been processed through the bank on behalf of a money transmitter. In addition, one extra-large bank indicated that differences in state regulators’ assessments of money transmitters are a challenge for the bank. Banks we surveyed reported reduced access to correspondent banks. Banks responding to our survey cited reduced access to correspondent banks as a reason for restricting the number of money transmitter accounts. Out of the 193 banks that answered our survey, 30 indicated they have relied on a correspondent bank to transfer funds to our case-study countries (25 to Haiti, 16 to Liberia, 23 to Nepal, and 9 to Somalia). While not specific to our case-study countries, of the 29 banks in our survey that said they had restricted the number or percentage of money transmitter accounts, 8 said that they did so because of difficulty in maintaining correspondent banking relationships, while 3 said they did so due to loss of a correspondent banking relationship. The absence of direct relations with foreign banks can cause electronic money transfers to take longer to process or in some cases to be rejected. One bank official told us that the reduction in correspondent banking relations may not stop funds from being transferred but may increase the cost or time to process the transfer. However, one bank that responded to our survey identified multiple transactions with our case- study countries in recent years that were terminated because a correspondent bank could not be located or had closed. Customer due diligence is a challenge for correspondent banks. Some banks told us that exposure to risk related to the customers of banks they serve was a key challenge to providing foreign correspondent banking services. Some banks expressed concern that violations of anti-money laundering and terrorism financing guidelines by a customer’s customer may result in fines for the bank even when the bank has conducted enhanced due diligence and monitoring of transactions. Two extra-large banks that do not provide foreign correspondent banking services cited due diligence concerns as one reason they choose not to offer such services. 
Some of the banks that provide correspondent banking services said they conduct more due diligence on the customers of the banks they serve than regulatory guidance requires. Several of the correspondent banks noted that this additional due diligence was challenging to conduct due to the distance between the correspondent bank and the customers of the banks they serve. For example, one bank told us that the farther removed a customer is from being its direct customer, the greater the risk to the bank due to a lack of confidence in the originating institution's procedures to conduct due diligence on its customers.

Banks identified country-level risk as a factor. For banks that responded to our survey, country-level risk was noted as a factor in account closures. Two out of the three banks that had closed accounts for money transmitters serving at least one of our case-study countries noted that risks associated with the countries or regions that the money transmitter served were a contributing reason for the account closures. Additionally, in our interviews, the extra-large banks that serve as correspondent banks for foreign banks all said that they consider risk related to the country served by a foreign bank when deciding whether to allow the foreign bank to open and maintain accounts. However, most of these extra-large banks also said that the country or region where a foreign bank is located is only one of several factors in determining whether the foreign bank is considered high risk. One of the extra-large banks noted that Somalia was an exception because of its lack of banking infrastructure, which compounded concerns that money transmitters serving Somalia pose a higher risk to the bank. While banks in general told us that they did not make exit decisions regarding correspondent banking at the country level, seven of the eight extra-large banks we interviewed did not currently have correspondent banking relationships with any of our case-study countries, and the one remaining bank served only one country (Haiti). Two of the extra-large banks mentioned closing correspondent banking relationships during the last 10 years in Haiti, Nepal, or Somalia. One extra-large bank indicated that, with the exception of Somalia, funds can still be sent to foreign countries with limited correspondent banking access through banking channels; however, the transaction may need to be routed through multiple banks in order to be processed.

According to Treasury Officials, Remittance Flows to Fragile Countries Have Not Declined; U.S.-Based Remittance Senders Report Being Generally Satisfied with Their Ability to Remit

Treasury officials reported that remittances continue to flow to fragile countries even though money transmitters face challenges, including some evidence of money transmitter bank account closures. Furthermore, U.S.-based individuals we interviewed who send remittances to Haiti, Liberia, Nepal, and Somalia told us that they are still able to send funds to these countries using money transmitters.

Treasury reported money transmitters' banking access difficulties have not affected the estimated volume of remittance flows to fragile countries. Treasury has collected information through engagement with money transmitters and banks about closures of money transmitter bank accounts and foreign correspondent banking relationships. Treasury officials indicated that remittance flows to fragile countries have not been impacted by such account closures.
According to Treasury officials, World Bank estimates of remittance flows show that the volume of international transfers from the United States has continued to increase. At the same time, World Bank data indicate that the global average cost of sending remittances has continued to decrease. Regarding our case-study countries, Treasury officials noted that they were not aware of any decrease in remittance volume to any of these fragile countries. Citing these trends, and anecdotal evidence from Treasury's engagement with banks, the officials stated that there are no clear systemic impacts on the flow of remittances from closures of money transmitter bank accounts and correspondent banking relations. Treasury officials added that the scope of money transmitter bank account closures is largely unknown, but they acknowledged that such closures can be a significant challenge for money transmitters that serve certain regions or countries, including Somalia. Regarding a possible reduction in the number of correspondent banks, which can make it more challenging to transfer remittances, Treasury officials noted that to the extent there has been consolidation in this sector, it could be a natural process unrelated to correspondent banking risk management processes. Moreover, if consolidation results in stronger banking institutions and lower compliance costs, that would be a positive development for the sector, according to these officials.

Treasury officials noted unique challenges in remitting funds to Somalia. Officials acknowledged that U.S.-based money transmitters transferring funds to Somalia have lost accounts with U.S.-based banks. According to Treasury, Somalia's financial system is uniquely underdeveloped, as the country has not had a functioning government for about 20 years, and the terrorist financing threat is pronounced. Officials said that some Somali money transmitters have in the past moved money to assist al-Shabaab, a terrorist organization, increasing the need for stringent controls specific to anti-money laundering and combating terrorist financing efforts. As a result of these and other factors, Treasury officials stated that difficulties remitting to Somalia are not generalizable to other countries. Further, Treasury officials said they were aware that some Somali money transmitters have resorted to non-banking channels by carrying cash overseas. They noted that although physically moving cash is risky, it is not unlawful. Additionally, Treasury officials stated that the use of cash couriers to remit funds has not been a concern for regulators because this practice has not increased the remittance fees that money transmitters charge their consumers.

Reasons Senders Reported General Satisfaction with Money Transmitters

The remittance senders for Haiti, Liberia, Nepal, and Somalia told us that they are generally satisfied using money transmitters over other methods to transfer money abroad because money transmitters quickly deliver the funds to recipients; are cheaper than banks; can be used even if the recipient lacks a bank account; and tend to have more locations in recipient countries compared to banks. Senders also noted that specialized Somali money transmitters cost less than transmitters that serve many countries, and that overseas agents of the Somali money transmitters are knowledgeable about the communities where they operate and have earned the trust of community members.

U.S.-based remittance senders we interviewed are generally satisfied with their money transmitters.
The U.S.-based remittance senders we spoke with from each of our case-study countries reported that they frequently use money transmitters and have not encountered major difficulties in sending remittances. In general, these senders expressed satisfaction with their money transmitters and stated that they had not experienced major problems in sending money via money transmitters. Senders told us that they generally preferred using money transmitters because money transmitters were cheaper than banks and were quicker in delivering the funds. In addition, money transmitters were often more accessible for recipients collecting the remittances because the money transmitters had more locations than banks in recipient countries. However, some remittance senders told us that they experienced delays or were unable to send large amounts of money through money transmitters. In addition, some Somali senders told us that they were dissatisfied with being unable to use personal checks or online methods due to a requirement to pay in cash. U.S. agencies, including Treasury, Federal Deposit Insurance Corporation (FDIC), the Federal Reserve, and National Credit Union Administration (NCUA), have issued guidance to the financial institutions they regulate to clarify expectations for providing banking services to money transmitters. In addition, Treasury’s Office of Technical Assistance (OTA) is engaged in long-term capacity building efforts in Haiti, Liberia, and Somalia to improve those countries’ weak financial institutions and regulatory mechanisms, factors that may cause banks to consider money transmitters remitting to these countries to be more risky clients. However, agency officials disagreed with some suggestions for government action proposed by banks and others because such actions would contravene agencies’ Bank Secrecy Act anti-money laundering (BSA/AML) compliance goals. Agencies Have Taken Certain Steps That May Address Money Transmitters’ Difficulties in Maintaining Banking Access Treasury and Other Agencies Have Issued Guidance Intended to Prevent Widespread Termination of Banking Services for Money Transmitters, Among Other Goals Treasury, including FinCEN and OCC, as well as FDIC, the Federal Reserve, and NCUA have issued various guidance documents intended to ensure BSA/AML compliance while mitigating negative impacts on money transmitter banking access. Since 2011, Group of Twenty (G20) leaders, including the U.S. government, have committed to increasing financial inclusion through actions aimed at reducing the global average cost of sending remittances to 5 percent. According to Treasury officials, financial inclusion and BSA/AML compliance are complementary goals. In published statements, Treasury has affirmed that money transmitters provide essential financial services, including to low-income people who are less likely or unable to make use of traditional banking services to support family members abroad. Treasury has also acknowledged that leaving money transmitters without access to banking channels can lead to an overall reduction in financial sector transparency to the extent that money transmitters resort to non-banking channels for transferring funds. Nonetheless, Treasury officials we spoke to noted that in implementing BSA/AML regulations, banks retain the flexibility to make business decisions such as which clients to accept, since banks are in the best position to know whether they are able to implement controls to manage the risk associated with any given client. 
These officials indicated that Treasury pursues market-driven solutions and cannot order banks to open or maintain accounts. Treasury officials noted that Treasury works through existing multilateral bodies to promote policies that will support market driven solutions to banking access challenges and deepen financial inclusion globally. To clarify how banks assess BSA/AML risks posed by money transmitters and foreign banks, Treasury and other regulators have issued various guidance documents that, among other things, describe best practices for assessing such risks (see table 1). Some of the guidance emphasizes that risk should be assessed on a case-by-case basis and should not be applied broadly to a class of customers when making decisions to open or close accounts. The agencies issuing these guidance documents have taken some steps to assess the impact of guidance on bank behavior. For example, Treasury officials told us that Treasury periodically engages with banks and money transmitters on an ad hoc basis to learn their views and gain insight into their concerns. According to Federal Reserve officials, anecdotal information suggests that some money transmitters lost bank accounts after issuance of the 2005 joint guidance summarized above in table 1, and that outcome was contrary to the regulators’ intent. To address concerns about the guidance, according to these officials, Treasury held several public discussions on money transmitter account terminations. OCC officials stated that they have not conducted a separate assessment of the effects of their October 2016 correspondent banking guidance on banks’ risk assessment practices. However, they noted that OCC examiners evaluate banks’ policies, procedures, and processes for risk reevaluation, including processes for assessing individual foreign correspondent bank customer risks, as a part of OCC’s regular bank examination process. Bank officials we spoke to noted that while the guidance from regulators provides broad direction for banks’ risk assessments of foreign banks and money transmitter clients, the guidance does not provide specific details to clarify how banks can ensure BSA/AML compliance for specific higher- risk clients. Treasury Is Providing Technical Assistance to Build Financial Capacity in Haiti, Liberia, and Somalia According to Treasury officials, there is no feasible short-term solution to address the loss of banking services facing money transmitters involved in transferring funds to certain fragile countries, especially Somalia. These officials explained that U.S. banks may be reluctant to transfer funds to fragile countries because key governmental and financial institutions in these countries have weak oversight and therefore may face difficulties in detecting and preventing money laundering and terrorism financing. As of September 2017, Treasury’s OTA is providing capacity building support to fragile countries, including Haiti, Liberia, and Somalia, with some of its efforts aimed at addressing long-term factors affecting these countries’ BSA/AML supervisory capability. Table 2 identifies and describes the status of OTA projects in our case- study countries of Haiti, Liberia, and Somalia. OTA does not currently have a project in Nepal. U.S. 
Agency Officials Disagreed with Several Actions Proposed by Banks and Others, for Reasons Including Agencies’ BSA/AML Compliance Goals Banks, money transmitters, trade associations, and state regulators we interviewed, as well as third parties such as the World Bank and Center for Global Development, have proposed several actions to address banking access challenges money transmitters face in transferring funds through banks from the United States to fragile countries. Use of public sector transfer methods. Most banks we spoke to mentioned regulatory risk as a challenge to creating or maintaining money transmitter accounts. These banks stated that the ultimate risk for conducting transactions for money transmitter accounts falls on the bank, and that banks face substantial risk of regulatory action for such transactions. Therefore, one extra-large bank and one credit union we spoke to suggested using public sector transfer methods such as the Fedwire Funds Service (Fedwire) or FedGlobal Automated Clearing House Payments (FedGlobal) to process remittances to fragile countries, thereby mitigating the regulatory risk posed to banks that transfer such funds. Providing regulatory immunity, given appropriate oversight. To mitigate the regulatory risk to banks posed by money transmitter clients that send remittances to fragile countries, one extra-large bank, one credit union, and several money transmitters we spoke to suggested that regulators provide forms of regulatory immunity or regulator assurances that banks would not face enforcement actions if they carried out a specified level of due diligence to process remittances to fragile countries. Issuing more specific guidance. About half of the banks we spoke to mentioned fear of regulatory scrutiny due to ambiguities in regulatory agencies’ guidance or examiner practices. This fear of regulatory scrutiny served as a disincentive for these banks to maintain money transmitter accounts. While officials from about half of the banks we spoke to stated that additional guidance issued by Treasury and other agencies was helpful to clarify regulatory expectations and that examiner practices were consistent with guidance, others stated that they were uncertain about how much due diligence constituted enough for regulatory purposes, because regulations incorporated ambiguous language or because examiner practices exceeded regulations. These bank officials suggested that regulators could provide more specific guidance for banks on risk management, for instance, by including example scenarios and answers to frequently asked questions. The World Bank recommended in 2015 that regulators provide banks with additional guidance on assessing the risk of different money transmitter clients. U.S. agency officials stated that they disagreed with implementing these proposals for reasons specific to each one, as discussed below. Use of public sector transfer methods. Treasury officials told us that they prefer market-based solutions to the challenges of transferring remittances to fragile countries, rather than a solution in which the U.S. government assumes the risk in transferring these remittances, such as using the Federal Reserve to directly transfer payments from money transmitters. Federal Reserve officials told us that Fedwire is reserved for domestic wire transfers, and while the Federal Reserve continues to evaluate the scope of the FedGlobal service, no decisions have been made to expand the service to additional countries at this time. 
Federal Reserve officials told us they seek to increase remittance flows to the countries the program already serves.

Providing regulatory immunity, given appropriate oversight. Treasury officials told us that while they would need to see the suggested duration and conditions pertaining to any proposal for regulatory immunity or exemptions in order to judge its feasibility, implementing this suggestion could raise a number of legal and policy concerns. Officials told us that while Treasury has the authority to provide regulatory exemptions, creating particular conditions for regulatory immunity would stray from Treasury's intended risk-based approach to BSA/AML compliance, and bad actors might take advantage of any such exemptions for criminal activity.

Issuing more specific guidance. OCC informed us that it is not currently considering implementing more specific guidance. Treasury officials told us that existing guidance clarifies that Treasury does not have a zero-tolerance approach to BSA/AML compliance and that Treasury does not expect banks to know their customers' customers. These officials told us that they prefer not to issue further amplifying guidance with very specific examples as to what constitutes "compliance" by financial institutions, because Treasury does not wish to institute a "check the boxes" approach to regulatory compliance.

Existing U.S. Agency Information on Remittances Does Not Allow Treasury to Assess the Effects of Money Transmitters' Loss of Banking Access on Remittance Flows to Fragile Countries

Treasury cannot assess the effects of money transmitters' loss of banking access on remittance flows because existing data do not allow Treasury to identify remittances transferred through banking and non-banking channels. Recent efforts to collect international remittance data from banks and credit unions do not include transfers these institutions make on behalf of money transmitters. Since these data collection efforts are designed to protect U.S. consumers, the remittance data that banks and credit unions report are limited to remittances individual consumers send directly through these institutions. Additionally, a few state regulators recently began requiring money transmitters to report remittance data by destination country, but these data do not distinguish money transmitters' use of banking and non-banking channels to transfer funds. Finally, while Treasury has a long-standing effort to collect information on travelers transporting cash from U.S. ports of exit, this information is not designed to identify cash transported for remittances. Without information on remittances sent through banking and non-banking channels, Treasury cannot assess the effects of money transmitter and foreign bank account closures on remittances, especially shifts in remittance transfers from banking to non-banking channels for fragile countries. Non-banking channels are generally less transparent than banking channels and thus more susceptible to the risk of money laundering and other illicit financial transactions.

Remittance Data from Financial Institutions Do Not Capture Money Transmitters' Use of Banking Channels to Transfer Funds

Banks and Credit Unions Do Not Report on Remittances They Transfer for Money Transmitters

Federal regulators recently began collecting data on international remittances from banks and credit unions by requiring these institutions to provide more information in pre-existing routine reports.
However, these reports do not require banks and credit unions to include information on remittance transfers these institutions make on behalf of money transmitters, among other business clients. According to officials from the Office of the Comptroller of the Currency (OCC) and from the Consumer Financial Protection Bureau, the additional reporting requirements for remittances were intended to help regulators monitor compliance with rules aimed at protecting U.S. consumers who use remittance services offered by banks and credit unions. Furthermore, banks and credit unions are not required to report on destination countries for remittance flows. Specifically:

Beginning in 2014, federal banking regulators—FDIC, the Federal Reserve, and OCC—required banks to provide data on international remittances in regular reporting known as the Consolidated Reports of Condition and Income (Call Reports). These reports, which are required on a quarterly basis from FDIC-insured banks, generally include banks' financial information such as assets and liabilities, and are submitted through the Federal Financial Institutions Examination Council, a coordinating body. Specifically, the agencies required banks to indicate whether they offered consumers mechanisms, including international wire transfers, international automated clearinghouse transactions, or other proprietary services, to send international remittances. The Consumer Financial Protection Bureau uses the remittance data in Call Reports to better understand the effects of its rules regarding remittance transfers, including its rules on disclosure, error resolution, and cancellation rights. Additionally, according to bureau officials, they also use the data for other purposes, for example, to monitor markets and to identify banks for remittance exams and, if needed, additional supervision. The Call Reports do not require a bank to report remittances for which the bank is providing such service to business customers, including money transmitters. According to OCC officials, because the remittance regulation that the Consumer Financial Protection Bureau enforces originated in response to consumer-focused legislation, a bank is required to report only those remittances for which the bank is the direct service provider to the individual consumer. Consequently, remittances reported in the Call Reports do not include remittances for which the banks served as a correspondent bank or as a provider for a money transmitter. Furthermore, banks are not required to report remittance data by destination country.

In 2013, the National Credit Union Administration (NCUA) began requiring credit unions to provide data on the number of remittance transactions, but not data on the dollar amount transferred, in their Call Reports to NCUA. Similarly, and consistent with its treatment of banks, the Consumer Financial Protection Bureau uses the remittance data submitted by credit unions in Call Reports, for example, to better understand the effects of its rules and for market monitoring. The credit unions are also not required to include transactions they process on behalf of business clients, such as money transmitters, and do not provide remittance data by destination country.

Money Transmitters Are Not Required to Report Whether the Remittances They Transfer Are Through Banking or Other Channels

In 2017, some states began collecting remittance data from money transmitters by state and destination country through the Money Services Business Call Report.
The purpose of these reports is to enhance and standardize the information available to state financial regulators concerning the activities of their Money Services Business licensees to effectively supervise these organizations. However, money transmitters are not required to distinguish whether the remittances they transferred were sent through banking or other channels. Additionally, while these reports collect remittance data by destination country, these data are not comprehensive because, according to the Nationwide Multistate Listing System, as of the first quarter of 2018, about half the states (24) had adopted the reports for money transmitters and of these 12 states had made it mandatory to report the remittances by destination country. Due to a lack of reporting on money transmitters’ use of banking channels to transfer remittances, Treasury cannot assess the extent of the decline in money transmitters’ use of banking channels to transfer remittances to fragile countries, including the four we selected as case-study countries: Haiti, Liberia, Nepal, and Somalia. U.S. Agency Efforts to Collect Data on Physical Transportation of Cash Are Not Designed to Track Flow of Remittances through Non-Banking Channels While Treasury has a long-standing effort to collect information on travelers transporting cash from U.S. ports of exit, this information is not designed to enable Treasury to identify cash transported for remittances or the intended final destination of the cash. For financial transfers through non-banking channels, Treasury requires persons or businesses to report the export of currency and monetary instruments at ports of exit, which include remittances sent through money transmitters carried out in cash. Specifically, Treasury requires persons or businesses, including money transmitters, who physically transport currency or other monetary instruments exceeding $10,000 at one time, from the United States to any place outside of the United States, to file a Report of International Transportation of Currency or Monetary Instruments (CMIR) with U.S. Customs and Border Protection at the port of departure. The CMIR collects information such as the name of the person or business on whose behalf the importation or exportation of funds was conducted, the date, the amount of currency, U.S. port or city of arrival or departure, and country of origin or destination, among other information. The forms are filled out manually by individuals carrying cash. U.S. Customs and Border Protection officers collect the forms at ports of exit, and that agency’s contractors manually enter the data reported on these forms into a central database. Money transmitters and their agents who carry cash in excess of $10,000 from the United States are required to submit the CMIR to U.S. Customs and Border Protection upon departure. Thus, to some extent, CMIR data include data on remittances transferred by money transmitters in cash; however, the CMIR is not intended to capture information specific to remittances, and thus its usefulness is limited for agencies in tracking the flow of remittances through non-banking channels. First, the destination country reported on the CMIR may not be the final destination of the cash or other monetary instrument being transported. For example, money transmitters we interviewed told us that they use cash couriers to transfer funds to Somalia via the United Arab Emirates, where the funds may enter a clearinghouse that can transfer the funds to Somalia. 
While the ultimate destination of the remittances is Somalia, the CMIR may list the United Arab Emirates as the destination because it is the first destination out of the United States. Second, FinCEN officials acknowledged they do not know the extent of underreporting in general with regard to the CMIR; however, money transmitters we interviewed indicated that they have incentives to file CMIR for their own protection in case they have to file an insurance claim. Finally, CMIR does not ask if the currency or monetary instruments are remittances, which makes it difficult if not impossible to separate out the data on remittances from the overall data. Existing data do not enable Treasury to identify remittances transferred by money transmitters through banking and non-banking channels. Non- banking channels are generally less transparent than banking channels and thus more susceptible to the risk of money laundering and terrorist financing. FinCEN’s mission is to safeguard the financial system from illicit use, combat money laundering, and promote national security by, among other things, receiving and maintaining financial transactions data and analyzing that data for law enforcement purposes. Additionally, federal standards for internal control state that agency managers should comprehensively identify risks and analyze them for their possible effects. A lack of data on remittances sent through banking and non-banking channels limits the ability of Treasury to assess the effects of money transmitter and foreign bank account closures on remittances, in particular shifts of remittances to non-banking channels for fragile countries. The risks associated with shifts of remittances to non-banking channels may vary by country and are likely greater for fragile countries such as Somalia where the United States has concerns about terrorism financing. Conclusions Remittances continue to flow to fragile countries, but the loss of banking services for money transmitters, as well as a decline in foreign banking relationships, has likely resulted in shifts to non-banking channels for remittances to some of these countries. While money transmitters who have lost bank accounts may adapt by moving remittances in cash or other non-banking channels, the lack of a bank account presents operational risks for these organizations. Moreover, the flow of funds such as remittances from banking to non-banking channels decreases the transparency of these transactions. While U.S. regulators have issued guidance to banks indicating that they should not terminate accounts of money transmitters without a case-by-case assessment, several banks we contacted remain apprehensive and are reluctant to incur additional costs for low-profit customers such as money transmitters. At the same time, senders of remittances still prefer to use money transmitters to send funds, which the senders regard as a critical lifeline for family and friends in fragile countries. Although federal and state regulators have undertaken recent efforts to obtain remittance data from financial institutions such as banks and money transmitters, these efforts are designed for consumer protection and the regulatory supervision of financial institutions, rather than to track remittances sent by money transmitters using banking channels. As a result, the available data are not sufficient for the purposes of tracking changes in money transmitters’ use of banks to transfer funds. 
Similarly, while Treasury has a long-standing effort to collect information on large amounts of cash physically transported by travelers at U.S. ports of exit, this information collection is not intended to track the flow of remittances through non-banking channels. Consequently, to the extent money transmitters losing banking access switch to non-bank methods to transport remittances, Treasury may not be able to monitor these remittance flows. This, in turn, could increase the risk of terrorism financing or money laundering, especially for remittances to fragile countries where risks related to illicit use of funds are considered higher.

Recommendation for Executive Action

We are making one recommendation to Treasury. The Secretary of the Treasury should assess the extent to which shifts in remittance flows from banking to non-banking channels for fragile countries may affect Treasury's ability to monitor for money laundering and terrorist financing and, if necessary, should identify corrective actions.

Agency Comments

We provided a draft of this product for comment to Treasury, FDIC, the Federal Reserve, CFPB, U.S. Customs and Border Protection, Commerce, NCUA, State, and USAID. Treasury, FDIC, the Federal Reserve, CFPB, and U.S. Customs and Border Protection provided technical comments, which we have incorporated as appropriate. We requested that Treasury provide a response to our recommendation, but Treasury declined to do so. Commerce, NCUA, State, and USAID did not provide comments on the draft of this report. We are sending copies of this report to the appropriate congressional committees; the Secretary of the Treasury; the Chairman of the Federal Deposit Insurance Corporation; the Chair of the Board of Governors of the Federal Reserve System; the Acting Director of the Consumer Financial Protection Bureau; the Secretaries of Commerce, Homeland Security, and State; the Administrators of the U.S. Agency for International Development and the National Credit Union Administration; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-9601 or melitot@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

This report examines (1) what stakeholders believe are the challenges facing money transmitters in remitting funds from the United States to selected fragile countries, (2) what actions U.S. agencies have taken to address identified challenges, and (3) U.S. efforts to assess the effects of such challenges on remittance flows from the United States to fragile countries. To address the objectives, we identified four case-study countries: Haiti, Liberia, Nepal, and Somalia. We selected these countries based on their inclusion in the Organisation for Economic Co-operation and Development's States of Fragility reports from 2013 to 2015. In addition, we limited our selection to countries that have a foreign-born population of 50,000 or more living in the United States. Finally, we considered the size of estimated total remittances from the United States relative to the recipient countries' gross domestic products (GDP). We rank-ordered the 17 countries that met these criteria and selected the top four.
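As an illustration of the screening and ranking just described, the sketch below applies the same three criteria to a set of candidate countries. The country records and numbers are placeholders invented for the example (they are not the inputs GAO used), and the function name is hypothetical; in practice the inputs would come from the OECD fragility lists, Census Bureau population estimates, and remittance and GDP estimates.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Candidate:
    name: str
    on_oecd_fragility_lists: bool   # States of Fragility reports, 2013 to 2015
    us_foreign_born_population: int
    us_remittances_to_gdp: float    # estimated U.S. remittances relative to GDP


def select_case_study_countries(candidates: List[Candidate], top_n: int = 4) -> List[str]:
    """Screen on fragility and diaspora size, then rank by remittances relative to GDP."""
    eligible = [c for c in candidates
                if c.on_oecd_fragility_lists and c.us_foreign_born_population >= 50_000]
    ranked = sorted(eligible, key=lambda c: c.us_remittances_to_gdp, reverse=True)
    return [c.name for c in ranked[:top_n]]


# Entirely hypothetical inputs, for illustration only.
candidates = [
    Candidate("Country A", True, 700_000, 0.15),
    Candidate("Country B", True, 80_000, 0.22),
    Candidate("Country C", True, 120_000, 0.02),
    Candidate("Country D", True, 75_000, 0.16),
    Candidate("Country E", True, 40_000, 0.30),    # screened out: diaspora under 50,000
    Candidate("Country F", False, 500_000, 0.10),  # screened out: not on the fragility lists
]
print(select_case_study_countries(candidates))    # ['Country B', 'Country D', 'Country A', 'Country C']
```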
For our first objective, to understand the challenges that stakeholders believe money transmitters face in remitting funds from the United States to fragile countries, we surveyed banks and interviewed U.S. agency officials, money transmitters, banks, credit unions, and remittance senders. To obtain insights from U.S agency officials, we interviewed and received written responses from officials of the Department of the Treasury (Treasury)—including the Office of Technical Assistance (OTA), the Financial Crimes Enforcement Network (FinCEN), the Office of Terrorism and Financial Intelligence, and the Office of the Comptroller of the Currency (OCC). To obtain insights from money transmitters, we used the World Bank’s Remittance Prices Worldwide database to select U.S.-based money transmitters serving our case-study countries. The World Bank database includes a sample of money transmitters, which the World Bank reported it selected to cover the maximum remittance market share possible and survey a minimum aggregated market share of 80 percent for each country. We attempted to contact the 18 money transmitters that the World Bank identified as the major service providers for our case-study countries. We interviewed 12 of these 18 money transmitters, of which 8 provided services to only one of our case-study countries (2 money transmitters provided services to Haiti, 4 provided services to Somalia, and 2 provided services to Nepal) and 4 provided remittance services from the United States to at least three of our case-study countries. To obtain insights from individuals that remit to fragile states, we conducted six small-group interviews, and one additional interview, of individuals that remit to our selected case-study countries. From 3 to 6 individuals participated in our small group interviews. We interviewed one Haitian small group, one Liberian small group, one Nepali small group, and three Somali small groups. To set up these interviews, we identified community-based organizations (CBOs) and other groups that work with remittance senders to these countries and obtained contact information for these groups. We identified the CBOs through searching Internal Revenue Service (IRS) lists of tax- exempt community organizations for the names of our case-study countries or their populations. To focus our search efforts, we concentrated on the five areas in the United States with the largest populations of immigrants from each case-study country. The five areas were identified using information on immigrant populations from the U.S. Census Bureau’s 2015 American Community Survey 1-year Public Use Microdata Samples. We sent emails outlining our research goals and soliciting interest in participating in interviews to 287 CBOs and related groups and obtained positive responses from 46. Of the 46 that responded positively, we were able to schedule meetings with seven CBOs covering the four case-study countries. The groups that agreed to participate in our interviews cannot be considered representative of all CBOs and remittance senders to the four selected countries, and their views and insights are not generalizable to those of all individuals that remit to these four countries. We asked the CBO points-of-contact to invite individuals with experience remitting funds to the case-study countries to participate in telephone interviews. We pre-tested our methodology by emailing contacts at the CBOs and requesting they provide feedback on the questions. 
We also pre-tested the questions with a group located in Virginia because the location was close to the GAO headquarters and allowed for in-person testing. In the interviews, we asked semi-structured questions about the ease or difficulty of remitting funds to the participants’ home countries, the costs of remitting, and any recent changes they had noticed. We asked the participants to provide us with their personal experiences rather than to speak for their CBO, group, or community. We used two methods—a web-based survey of a nationally representative sample of banks and semi-structured interviews of bank officials—to examine what banks identify as challenges, if any, in offering bank accounts for money transmitters and correspondent banks serving fragile countries. In the survey, we asked banks about limitations and terminations of accounts related to BSA/AML risk, the types of customer categories being limited or terminated, and the factors influencing these decisions. We administered the survey from July 2017 to September 2017, and collected information for the 3-year time period of January 1, 2014 to December 31, 2016. Aggregate responses for the close-ended survey questions that are related to this report are included in appendix II. The survey also collected information for two additional GAO reports: one reviewing closure of bank branches along the southwest border of the United States, and another assessing the causes of bank account terminations involving money transmitters. To identify the universe of banks, we used the bank asset data from FDIC’s Statistics on Depository Institutions database. Our initial population list contained 5,922 banks downloaded from FDIC’s Statistics on Depository Institutions database as of December 31, 2016. We stratified the population into five sampling strata, and used a stratified random sample. In order to meet the sampling needs of related reviews, we used a hybrid stratification scheme. First, banks that did not operate in the Southwest border region were stratified into four asset sizes (small, medium, large, and extra-large). Next, by using FDIC’s Summary of Deposit database we identified 115 Southwest border banks as of June 30, 2016. Our initial sample size allocation was designed to achieve a stratum-level margin of error no greater than plus or minus 10 percentage points for an attribute level at the 95 percent level of confidence. Based upon prior surveys of financial institutions, we assumed a response rate of 75 percent to determine the sample size for the asset size strata. Because there are only 17 extra-large banks in the population, we included all of them in the sample. We also included the entire population of 115 Southwest border banks as a separate certainty stratum. We reviewed the initial population list of banks in order to identify nontraditional banks not eligible for this survey. We treated nontraditional banks as out-of- scope. In addition, during the administration of our survey, we identified 27 banks that were either no longer in business or that had been bought and acquired by another bank, as well as 2 additional banks that were nontraditional banks and, therefore, not eligible for this survey. We treated these sample cases as out-of-scope; this adjusted our population of banks to 5,805 and reduced our sample size to 406. We obtained a weighted survey response rate of 46.5 percent. 
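One common way to implement the stratum-level design targets described above (a margin of error of no more than plus or minus 10 percentage points for an attribute at the 95 percent confidence level, inflated for an assumed 75 percent response rate) is sketched below. The 17 extra-large banks and 115 Southwest border banks come from the report; the remaining stratum counts are placeholders, and GAO’s actual allocation method may have differed:

```python
import math

Z_95 = 1.96               # critical value for a 95 percent confidence level
MARGIN = 0.10             # target margin of error: +/- 10 percentage points
P = 0.5                   # most conservative assumed attribute proportion
EXPECTED_RESPONSE = 0.75  # assumed response rate, based on prior surveys

def stratum_sample_size(population, certainty=False):
    """Initial sample size for one stratum; certainty strata include every bank."""
    if certainty:
        return population
    n0 = (Z_95 ** 2) * P * (1 - P) / MARGIN ** 2   # infinite-population size (~96)
    n = n0 / (1 + (n0 - 1) / population)           # finite population correction
    return min(population, math.ceil(n / EXPECTED_RESPONSE))  # inflate for nonresponse

# Stratum populations: the 17 extra-large and 115 Southwest border banks are from
# the report; the small/medium/large counts are placeholders.
strata = {
    "small": (3200, False),
    "medium": (1800, False),
    "large": (790, False),
    "extra_large": (17, True),        # certainty stratum
    "southwest_border": (115, True),  # certainty stratum
}

allocation = {
    name: stratum_sample_size(population, certainty)
    for name, (population, certainty) in strata.items()
}
print(allocation, "total sample:", sum(allocation.values()))
```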
Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (for example, plus or minus 7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Confidence intervals are provided along with each sample estimate in the report. For survey questions that are not statistically reliable, we present only the number of responses to each survey question and the results are not generalizable to the population of banks. The practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question or sources of information available to respondents can introduce unwanted variability into the survey results. We took steps in developing the questionnaire, collecting the data, and analyzing the results to minimize such nonsampling error. We conducted pretests with four banks. We selected these banks to achieve variation in geographic location and asset size (small, medium, large, extra-large). The pretests of the survey were conducted to ensure that the survey questions were clear, to obtain any suggestions for clarification, and to determine whether representatives would be able to provide responses to questions with minimal burden. To supplement the results of the survey, we conducted interviews with eight extra-large banks regarding correspondent banking and money transmitter accounts and with two credit unions regarding money transmitter accounts. We selected the eight banks to interview using the following criteria: (1) the bank was in the extra-large asset size group (banks with greater than $50 billion in assets), and (2) the bank was mentioned by at least one of the money transmitters that we interviewed as terminating accounts with them or the bank was listed in an internal Treasury study on correspondent banking. Of the banks in the extra- large asset size group, 7 were mentioned in our interviews with money transmitters as having closed accounts with them. Nearly all of these banks, plus one additional bank were also mentioned as correspondent banks in the Treasury study. In addition, we selected two credit unions to interview based on information from our interviews with money transmitters. Money transmitters identified four credit unions in our interviews; of these, we selected for interviews two that were mentioned as closing accounts with money transmitters. We did not contact the other two credit unions that currently have money transmitter accounts. The results of the survey and the interviews only provide illustrative examples and are not generalizable to all banks or credit unions. For our second objective, we analyzed U.S. agency information and documentation about relevant projects and activities. We also interviewed officials and obtained relevant guidance documents from Treasury, including OCC, OTA, FinCEN, and Terrorism and Financial Intelligence; the Federal Deposit Insurance Corporation (FDIC); the U.S. Department of State; the U.S. Agency for International Development; the Board of Governors of the Federal Reserve System (Federal Reserve); and the National Credit Union Administration (NCUA). 
Additionally, we also interviewed officials from the World Bank and International Monetary Fund to understand the data, methodology, and findings contained within reports by those organizations, as well as to understand the International Monetary Fund’s role in technical assistance in our case-study countries. To gather information on solutions proposed by banks and others to address challenges money transmitters face in transferring funds through banks from the United States to fragile countries, we interviewed banks and credit unions as noted above. We also reviewed reports by the World Bank, the Center for Global Development, and Oxfam to gather recommendations addressing challenges in transferring remittances to fragile countries. We interviewed officials from Treasury, FDIC, the Federal Reserve, and the U.S. Agency for International Development to gain their perspectives on these proposed solutions. For our third objective on U.S. agencies’ efforts to assess the effects of challenges facing U.S. money transmitters on remittance flows to fragile countries, we interviewed agency officials and analyzed available data on flows going through banking and non-banking channels. For available data on flows through the banking channel, we analyzed the Consolidated Reports of Condition and Income (Call Report) data from the Federal Financial Institutions Examination Council, which started collecting these data in 2014. These remittance data are reported on a semiannual basis. We also reviewed Call Report data on remittances for credit unions, which started to be collected in 2013, as well as data collected from Money Service Businesses, which some states started collecting in 2017. For data on remittance flows through non-banking channels, we obtained and analyzed data on filings of FinCEN’s Form 105 – Report of International Transportation of Currency or Monetary Instruments. This report is required of individuals who physically transport currency or other monetary instruments exceeding $10,000 at one time from the United States to any place outside the United States, or into the United States from any place outside the United States. The paper form is collected by the Department of Homeland Security’s U.S. Customs and Border Protection at the port of entry or departure. We obtained the tabulated Form 105 data from FinCEN by arrival country, state of U.S. exit port, and for calendar years 2006 through 2016. We also interviewed officials and obtained written responses from FinCEN and the Federal Financial Institutions Examination Council. We compared the results of our data analysis and information from interviews with agency officials against FinCEN’s mission to safeguard the financial system from illicit use by, among other things, obtaining and analyzing financial transactions data. Additionally, we also compared the results of our analysis and information obtained from agencies against the federal standards for internal control, which state that agency managers should comprehensively identify risks and analyze them for their possible effects. We conducted this performance audit from September 2016 to March 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Selected Results from the GAO Survey of Banks on Money Transmitter Account Terminations From July 2017 to September 2017, we administered a web-based survey to a nationally representative sample of banks. In the survey, we asked banks about the number of account terminations for reasons related to Bank Secrecy Act anti-money laundering (BSA/AML) risk; whether banks are terminating, limiting, or not offering accounts to certain types of customer categories; and the factors influencing these decisions. We collected information for the 3-year period from January 1, 2014, to December 31, 2016. We obtained a weighted survey response rate of 46.5 percent. The survey included 44 questions, 16 of which were directly applicable to the research objectives in this report. Responses to those 16 questions are shown below (see tables 3 through 16). When our estimates are from a generalizable sample, we express our confidence in the precision of our particular estimates as 95 percent confidence intervals. Survey results presented in this appendix are aggregated for banks of all asset sizes, unless otherwise noted. Results for some of the survey questions were not statistically reliable. In those cases, we present only the number of responses to each survey question. These results are not generalizable to the population of banks. Our survey included closed- and open-ended questions. We do not provide information on responses provided to the open-ended questions. For a more detailed discussion of our survey methodology, see appendix I. The following open-ended question was asked only of banks that responded “Yes” to question 33: Please provide any additional comments or challenges the bank may face in managing accounts for money transmitters that remit to fragile countries such as Haiti, Liberia, Nepal or Somalia. (Question 36) The following open-ended question was asked only of banks that responded “Yes” to question 37: Please provide any additional comments on how changes (increase or decrease) in correspondent banking services facilitating the transfer of funds to Haiti, Liberia, Nepal or Somalia has impacted your bank’s ability to provide services to money transmitters. (Question 41) The following open-ended question was asked only of banks that responded “Yes” to using a correspondent bank to facilitate the transfer of funds to Somalia (question 38, response d): If your bank relied on a respondent bank to facilitate the transfer of funds to Somalia, in what country was the respondent bank located? (Question 39) Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Thomas Melito, (202) 512-9601, or melitot@gao.gov. Staff Acknowledgments In addition to the contact named above, Mona Sehgal (Assistant Director), Kyerion Printup (Analyst-in-Charge), Sushmita Srikanth, Madeline Messick, Ming Chen, Lilia Chaidez, Natarajan Subramanian, Carl Barden, James Dalkin, David Dayton, Martin De Alteriis, Mark Dowling, Rebecca Gambler, Tonita Gillich, Stefanie Jonkman, Christopher Keblitis, Jill Lacey, Michael Moran, Verginie Tarpinian, and Patricia Weng made key contributions to this report.
Why GAO Did This Study The United States is the largest source of remittances, with an estimated $67 billion sent globally in 2016, according to the World Bank. Many individuals send remittances through money transmitters, a type of business that facilitates global money transfers. Recent reports found that some money transmitters have lost access to banking services due to derisking—the practice of banks restricting services to customers to, in part, avoid perceived regulatory concerns about facilitating criminal activity. GAO was asked to review the possible effects of derisking on remittances to fragile countries. This report examines (1) what stakeholders believe are the challenges facing money transmitters in remitting funds from the United States to selected fragile countries, (2) actions U.S. agencies have taken to address identified challenges, and (3) U.S. efforts to assess the effects of such challenges on remittance flows to fragile countries. GAO selected four case-study countries—Haiti, Liberia, Nepal, and Somalia—based on factors including the large size of U.S. remittance flows to them. GAO interviewed U.S.-based money transmitters, banks, U.S. agencies, and individuals remitting to these countries and also surveyed banks. What GAO Found Stakeholders, including money transmitters, banks, and U.S. Department of the Treasury (Treasury) officials, reported a loss of banking access for money transmitters as a key challenge, although remittances continue to flow to selected fragile countries. All 12 of the money transmitters GAO interviewed, which served Haiti, Liberia, Nepal, and particularly Somalia, reported losing some banking relationships during the last 10 years. As a result, 9 of the 12 money transmitters reported using channels outside the banking system (hereafter referred to as non-banking channels), such as cash couriers, to move funds domestically or, in the case of Somalia, for cross-border transfer of remittances (see figure). Several banks reported that they had closed the accounts of money transmitters because of the high cost of due diligence actions they considered necessary to minimize the risk of fines under Bank Secrecy Act regulations. Treasury officials noted that despite some money transmitters losing bank accounts, they see no evidence that the volume of remittances is falling. Example of a Cash-to-Cash Remittance Transfer Using a Cash Courier U.S. agencies have taken steps that may mitigate money transmitters' loss of banking access. For example, several agencies have issued guidance to clarify expectations for providing banking services to money transmitters. In addition, Treasury is implementing projects to strengthen financial institutions in some fragile countries. However, U.S. agencies disagreed with other suggestions, such as immunity from enforcement actions for banks serving money transmitters, since those actions could adversely affect goals related to preventing money laundering and terrorism financing. Treasury cannot assess the effects of money transmitters' loss of banking access on remittance flows because existing data do not allow Treasury to identify remittances transferred through banking and non-banking channels. Remittance data that U.S. agencies collect from banks do not include transfers that banks make on behalf of money transmitters. Additionally, the information Treasury collects on transportation of cash from U.S. ports of exit does not identify remittances sent as cash. 
Therefore, Treasury cannot assess the extent to which money transmitters are shifting from banking to non-banking channels to transfer funds due to loss of banking access. Non-banking channels are generally less transparent than banking channels and thus more susceptible to the risk of money laundering and terrorism financing. What GAO Recommends Treasury should assess the extent to which shifts in remittance flows to non-banking channels for fragile countries may affect Treasury's ability to monitor for financial crimes and, if necessary, should identify corrective actions. GAO requested comments from Treasury on the recommendation, but none were provided.
Background Roles and Responsibilities Several U.S. agencies have roles and responsibilities related to the screening and vetting of NIV applicants, as shown in table 1. Key Visa Adjudication Process Terms Validity period: The length of time during which a nonimmigrant visa (NIV) is valid for use by a foreign national seeking to travel to a U.S. port of entry and apply for admission into the United States. Entries: The number of applications for admission into the country permitted under a single NIV. Reciprocity arrangements: An understanding or arrangement between the U.S. government and another country on the length of time visas issued by either or both nations are valid for admission. There are many NIVs, and for the purposes of this report, we have placed the majority of NIVs into one of seven groups, as shown in table 2. The validity period and number of entries varies depending on (1) the particular NIV and (2) reciprocity arrangement with an individual’s country of nationality, among other factors. For example, a foreign national of one country may be issued a tourist visa valid for 1 year that allows for a single U.S. entry, while a foreign national of another country may be issued a tourist visa valid for 5 years and that permits multiple entries. However, the authorized period of stay—that is, the amount of time that the nonimmigrant is permitted to remain in the United States after being admitted—has no relation to the validity period. For more information on the various NIVs, see appendix I. NIV Adjudication Process State is generally responsible for the adjudication of NIV applications, and manages the NIV application process, including the consular officer corps and its functions at more than 220 visa-issuing posts overseas. Depending on various factors, such as the particular NIV sought, the applicant’s background, and visa demand, State officials noted that the length of the visa adjudication process can vary from a single day to months. This screening and vetting process for determining who will be issued or refused a visa contains several steps, as shown in figure 1: Petitions. Prior to State’s adjudication process, some NIVs require applicants to first obtain an approved petition from U.S. Citizenship and Immigration Services (USCIS), as shown in table 3. For example, applicants seeking an employment-based NIV or a U.S. citizen’s foreign national fiancé(e) seeking U.S. entry to conclude a valid marriage, must obtain an approved petition from USCIS prior to applying for their NIV. The petitioner (i.e., a U.S. citizen, organization or business entity) completes the petition on behalf of the applicant (i.e., the beneficiary), and the petition would be submitted to a U.S.-based USCIS service center for adjudication. USCIS Background Checks. As part of the adjudication process for visas requiring a USCIS-approved petition before the NIV application is submitted to State, USCIS conducts background checks on U.S.- based petitioners and foreign beneficiaries. For example, petitioner and beneficiary information is screened against TECS—DHS’s principal law enforcement and antiterrorism database that includes enforcement, inspection, and operational records. Further, for U.S. citizens petitioning for a K-1 visa on behalf of their fiancé(e), an FBI fingerprint check may also be required of the U.S. citizen petitioner. 
If the background checks identify a potential match to derogatory information, the background check unit at the USCIS service center that received the petition is to conduct further research to confirm the match, such as running checks against other government systems and collaborating with other government agencies. If all background check hits have been resolved and documented, and there is no reason not to proceed, USCIS will adjudicate the petition. In fiscal year 2017, USCIS reported that it received about 640,000 petitions for NIVs, and approved over 550,000. NIV Application. After having obtained USCIS approval of the NIV petition, as applicable, the foreign national begins the consular process by completing an online NIV application, known as a DS-160. Upon submitting an application, the applicant can schedule an interview at a post overseas and pay the processing fee. Key Visa Adjudication Process Terms Inadmissible: Individuals are inadmissible to the United States if they fall within the classes of foreign nationals defined as such under the Immigration and Nationality Act (INA), as amended, Pub. L. No. 82-414, tit. II, ch. 2, § 212(a), 66 Stat. 163, 182-87 (1952) (classified, as amended, at 8 U.S.C. § 1182(a)), such as foreign nationals who have engaged in terrorist or criminal activities or previously violated U.S. immigration law. If a visa applicant is found inadmissible and has not obtained a waiver from the Department of Homeland Security, the applicant would be statutorily ineligible for a visa. Ineligible: An individual is ineligible for a visa if it appears to the Department of State consular officer, based on the application or supporting documentation, that the applicant is not qualified to receive a visa under any provision of law. If the consular officer decides that an applicant is ineligible for visa issuance, the refusal may be based on statutory grounds of inadmissibility under INA § 212(a), or may be due to the individual’s failure to otherwise satisfy the applicable eligibility requirements for the particular visa, as defined in the INA. For example, a consular officer may refuse a J-1 exchange visitor visa to an applicant coming to the United States to perform services as a member of the medical profession if the applicant does not either demonstrate competency in oral and written English or hold a degree from an accredited school of medicine, as required of such visa applicants under INA § 212(j). Security Checks. State screens applicant information against U.S. government databases to identify potential security or eligibility concerns related to visa applicants. Prior to adjudicating the visa application, consular officers must review all such security check results. Some applicants are not subjected to all of the security checks depending on certain characteristics, such as age and visa category. For example, State does not generally require that fingerprints be collected for applicants who are either under 14 years old or over 79 years old, or for foreign government officials seeking certain visas. As needed, some applicants undergo an interagency review process called a security advisory opinion (SAO), which is a multi-agency, U.S.-based review process for certain NIV applicants. For example, SAOs are mandatory in cases involving certain security check hits, certain aspects of a foreign national’s background, or a foreign national’s intended activities while in the United States. In addition, consular officers have the discretion to request an SAO for any visa applicant. 
Through the SAO process, consular officers send additional information on applicants to U.S.- based agencies, who review that information against their holdings. Department of State data indicate that consular officers made over 180,000 requests for SAOs for NIV applicants in fiscal year 2017. Adjudication. If the consular officer determines that the applicant is eligible for the visa on the basis of the application, supporting documentation, and other relevant information such as statements made in an interview, he or she will take the applicant’s passport for final processing, but the visa cannot be printed until all security checks have been returned and reviewed. If the consular officer determines that the applicant is inadmissible to the United States or otherwise ineligible under the applicable visa eligibility criteria, he or she informs the applicant that the visa has been refused, and identifies the provision(s) of law under which the visa was refused. Recurrent vetting. In March 2010, shortly after the December 2009 attempted bombing by a foreign national traveling to the United States on a valid visa, CBP began vetting individuals with NIVs on a recurrent basis. This program has led State to revoke visas after they have been issued when information was later discovered that rendered the individual inadmissible to the United States or otherwise ineligible for the visa. In addition, CBP analysts may take other actions as needed after identifying new derogatory information, such as recommending that the airline deny boarding to the traveler because the traveler is likely to be deemed inadmissible upon arrival in the United States (known as a no-board recommendation) or making a referral to ICE, which may seek to remove the individual if already within the United States. According to NCTC, KFE also conducts recurrent vetting of NIV holders against emerging threat information. Number of NIV Adjudications and Refusal Rates Increased Through Fiscal Year 2016, and Declined in Fiscal Year 2017; NIV Application Characteristics Vary Number of NIV Applications Adjudicated Increased Annually from Fiscal Years 2012 through 2016 and Declined in Fiscal Year 2017 The total number of NIV applications that consular officers adjudicated annually (or, NIV adjudications) peaked at about 13.4 million in fiscal year 2016, which was an increase of approximately 30 percent since fiscal year 2012. In fiscal year 2017, NIV adjudications decreased by about 880,000 adjudications, or about 7 percent. Figure 2 shows the number of applications adjudicated each year from fiscal year 2012 through 2017. Appendix II includes additional data on NIV adjudications related to this and the other figures in this report. Annual Monthly Trends. State data from fiscal years 2012 through 2016 indicate that NIV adjudications generally followed an annual cycle, ebbing during certain months during the fiscal year; however, adjudications in fiscal year 2017 departed slightly from this trend. Specifically, from fiscal years 2012 through 2016, the number of NIV adjudications typically peaked in the summer months. State officials noted that the summer peak is generally due to international students who are applying for their visas for the coming academic year. However, in fiscal year 2017, the summer months did not experience a similar increase from previous months, departing from the trend over the previous five fiscal years, according to State data. Instead, NIV adjudications peaked in December of fiscal year 2017. 
State officials attributed some of the decline in fiscal year 2017 to a decrease in Chinese NIV applicants, which we discuss later in this report. Figure 3 shows monthly NIV adjudications for fiscal years 2012 through 2017. Most NIV Adjudications from Fiscal Years 2012 through 2017 Were for Tourist and Business Visitor Visas, and Approximately Half of All Applicants Came from Six Countries State data on NIV applications adjudicated from fiscal years 2012 through 2017 indicate that the number of adjudications by visa group, applicant’s country of nationality, and location of adjudication were generally consistent, with some exceptions. Visa Group. From fiscal years 2012 through 2017, about 80 percent of NIV adjudications were for tourist and business visitors as shown in figure 4. The next largest groups were visas for students and exchange visitors and temporary workers, which accounted for an average of 9 percent and 6 percent, respectively, of all adjudications during this time period. Although adjudications for visas in some categories increased, others decreased over time. For example, as shown in figure 5, NIV adjudications for temporary workers increased by approximately 50 percent from fiscal years 2012 through 2017 (592,000 to 885,000). During the same time period, adjudications for tourist and business visitors also increased by approximately 20 percent overall (from 8.18 million to 9.97 million), but decreased from fiscal years 2016 to 2017. However, NIV adjudications for student and exchange visitor visas decreased by about 2 percent from fiscal years 2012 through 2017 (1.01 million to 993,000) overall, but experienced a peak in fiscal year 2015 of 1.2 million. Appendix I includes additional information on NIV adjudication by visa group from fiscal years 2012 through 2017. State officials identified reasons to explain these trends: Temporary Workers. Although there was an increase in adjudications across all types of temporary worker visas, the largest percentage increase was for H-2A visas, which are for foreign workers seeking to perform agricultural services of a temporary or seasonal nature. Specifically, adjudications of H-2A visas increased by 140 percent from fiscal years 2012 to 2017 (from about 71,000 to 170,000). State officials noted that H-2A visas are not numerically limited by statute. Further, State officials stated that they believe U.S. employers are increasingly less likely to hire workers without lawful status and are petitioning for lawfully admitted workers, which in part led to an increase in H-2A visa demand. Tourist and Business Visitors. State officials partly attributed the overall changes to tourist and business visitor visas to the extension of the validity period of such visas for Chinese nationals, which represented the largest single country of nationality for tourist and business visitor visas in fiscal year 2017 (17.7 percent). In November 2014, the United States and the People’s Republic of China reciprocally increased the validity periods of multiple-entry tourist and business visitor visas issued to each other’s citizens for up to 10 years. The change in policy was intended to support improved trade, investment, and business by facilitating travel between the two countries. According to State officials, extending validity periods can create an initial increase in demand for such visas, followed by a period of stabilization or even decline as NIV holders would be required to apply for renewal less frequently. 
According to State officials, the increase in the validity period to 10 years for such visas created a spike in Chinese demand in early fiscal year 2015; by fiscal year 2016, the initial demand for these visas had been met and Chinese economic growth was simultaneously slowing, resulting in fewer adjudications for such visas in fiscal year 2017. State data for this time period indicate that the number of adjudications for tourist and business visitor visas for Chinese nationals increased from 1.58 million in fiscal year 2014 to 2.54 million in fiscal year 2015, followed by a decline to 2.34 million in fiscal year 2016 and 1.76 million in fiscal year 2017. Student and Exchange Visitors. As with tourist and business visitor visas, State officials partly attributed the overall changes in student and exchange visitor visa adjudications to the extension of the validity period of such visas for Chinese nationals; China represented the largest single country of nationality for student and exchange visitor visas in fiscal year 2017 (19 percent). In November 2014, the United States extended the validity period of the F visa for academic students from 1 year to 5 years. State officials noted that, similar to tourist and business visitor visas, there was an initial surge in Chinese F-visa applicants due to the new 5-year F-visa validity period that began in fiscal year 2015, but the number dropped subsequently because Chinese students with such 5-year visas no longer needed to apply as frequently for F visas. State data for this time period indicate that the number of visa adjudications for F visas for Chinese nationals increased from about 267,000 in fiscal year 2014 to 301,000 in fiscal year 2015, followed by a decline to 172,000 in fiscal year 2016 and 134,000 in fiscal year 2017. Applicant’s Country of Nationality. In fiscal year 2017, more than half of all NIV adjudications were for applicants from six countries of nationality: China (2.02 million, or 16 percent), Mexico (1.75 million, or 14 percent), India (1.28 million, or 10 percent), Brazil (670,000, or 5 percent), Colombia (460,000, or 4 percent), and Argentina (370,000, or 3 percent), as shown in figure 6. Location of Adjudication. State data indicate that the geographic distribution of NIV adjudications across visa-issuing posts worldwide remained relatively consistent from fiscal years 2012 through 2017. NIV adjudications from visa-issuing posts in the Western Hemisphere comprised the largest proportion worldwide during this time period; however, this proportion decreased from 48.8 percent in fiscal year 2012 to 41.7 percent in fiscal year 2017. During the same time period, the proportion of NIV adjudications at visa-issuing posts in other regions increased slightly. For example, the percentage of NIV adjudications from posts in Africa increased from 3.8 percent to 5.5 percent, and the percentage of adjudications from posts in South and Central Asia increased from 7.9 percent to 11.2 percent from fiscal years 2012 through 2017. Figure 7 provides the proportion of NIV adjudications at visa-issuing posts from each region from fiscal years 2012 through 2017. NIV Refusal Rate Has Increased Since Fiscal Year 2012 and Varies By Visa Group The percentage of NIVs refused—known as the refusal rate—increased from fiscal years 2012 through 2016, and was about the same in fiscal year 2017 as the previous year. 
As shown in figure 8, the NIV refusal rate rose from about 14 percent in fiscal year 2012 to about 22 percent in fiscal year 2016, and remained about the same in fiscal year 2017, averaging about 18 percent over the time period. As a result, the total number of NIVs issued peaked in fiscal year 2015 at about 10.89 million, before falling in fiscal years 2016 and 2017 to 10.38 million and 9.68 million, respectively. The NIV refusal rate can fluctuate from year to year due to many factors. For example, according to State officials, removing a large, highly qualified set of travelers from the NIV applicant population can drive up the statistical refusal rate. State officials also noted that when a country joins the Visa Waiver Program, or when visa validity periods for certain nationalities increase from 1 year to 10 years, those individuals apply for visas less often or not at all, which affects the overall refusal rate. Further, State officials noted that changes in political and economic conditions in individual countries can affect visa eligibility, which in turn affects the overall refusal rate. State officials noted that the degree to which an applicant might seek to travel to the United States unlawfully is directly related to political, economic, and social conditions in the applicant’s country. For example, if global or regional economic conditions deteriorate, more applicants may have an incentive to come to the United States illegally by, for example, obtaining an NIV with the intent to stay unlawfully for a period of time or purpose other than that permitted by the visa, which would increase the number of NIV applications that consular officers refuse. From fiscal years 2012 through 2017, the refusal rate varied by visa group. The highest refusal rate was for tourist and business visitors, which rose from about 15 percent in fiscal year 2012 to over 25 percent in fiscal year 2017, as shown in figure 9. Other visa categories, such as foreign officials and employees, transit and crewmembers, and fiancé(e)s and spouses, had refusal rates below 5 percent during this time period. State officials noted that because different visa categories have different eligibility and documentary requirements, they have different refusal rates. For example, F, J, and H visas require documentation of eligibility for student, exchange, or employment status, respectively. Most NIV Applications Refused from Fiscal Years 2012 through 2017 Were for Reasons Other than Terrorism-Related Ineligibilities According to State data, while the majority of NIV refusals from fiscal years 2012 through 2017 were a result of consular officers finding the applicants ineligible, a relatively small number of refusals were due to terrorism and other security-related concerns. NIV applicants can be refused a visa on a number of grounds of inadmissibility or other ineligibility under U.S. immigration law and State policy. For the purposes of this report, we have grouped most of these grounds for refusal into one of seven categories, as shown in table 4. 
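As a rough arithmetic check on the refusal-rate figures above, a refusal rate is simply refusals divided by adjudications. The sketch below uses the rounded fiscal year 2016 totals cited in this report; because some refusals are later overcome and the visas issued, this simple ratio only approximates State’s published rate:

```python
# Back-of-the-envelope check of the refusal rate using rounded FY2016 totals
# cited in this report. Some refusals are later overcome and the visas issued,
# so this simple ratio only approximates State's published rate.
adjudications_fy2016 = 13_400_000   # total NIV applications adjudicated
issued_fy2016 = 10_380_000          # total NIVs issued

refused_fy2016 = adjudications_fy2016 - issued_fy2016
refusal_rate = refused_fy2016 / adjudications_fy2016

print(f"Approximate FY2016 refusals: {refused_fy2016:,}")
print(f"Approximate FY2016 refusal rate: {refusal_rate:.1%}")  # roughly 22 to 23 percent
```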
State data indicate that more than 90 percent of NIVs refused each year from fiscal years 2012 through 2017 were based on the consular officers’ determination that the applicants were ineligible nonimmigrants—in other words, the consular officers believed that the applicant was an intending immigrant seeking to stay permanently in the United States, which would generally violate NIV conditions, or that the applicant otherwise failed to demonstrate eligibility for the particular visa he or she was seeking. For example, an applicant applying for a student visa could be refused as an ineligible nonimmigrant for failure to demonstrate possession of sufficient funds to cover his or her educational expenses as required. Similarly, an applicant could be refused as an ineligible nonimmigrant for indicating to the consular officer an intention to obtain a student visa to engage in unsanctioned activities while in the United States, such as full-time employment instead of pursuing an approved course of study. According to State data, the second most common reason for refusal during this time period was inadequate documentation, which accounted for approximately 5 percent of refusals each year. In such cases, a consular officer determined that the application failed to include necessary documentation for the consular officer to ascertain whether the applicant was eligible to receive a visa at that time. If, for example, the applicant provides sufficient additional information in support of the application, a consular officer may subsequently issue the visa, as appropriate. Our analysis of State data indicates that relatively few applicants—approximately 0.05 percent—were refused for terrorism and other security-related reasons from fiscal years 2012 through 2017. Security-related reasons can include applicants who have engaged in genocide, espionage, or torture, among other grounds. Terrorism-related grounds of inadmissibility apply when an applicant has engaged in or incited terrorist activity, is a member of a terrorist organization, or is the child or spouse of a foreign national who has been found inadmissible based on terrorist activity occurring within the last five years, among other reasons. As shown in figure 10, in fiscal year 2017, State data indicate that 1,256 refusals (or 0.05 percent) were based on terrorism and other security-related concerns, of which 357 refusals were specifically for terrorism-related reasons. Executive Actions Taken in Calendar Year 2017 Resulted in Some NIV Refusals and Agencies Are Implementing Additional Changes to NIV Screening and Vetting Processes Executive Actions Taken in Calendar Year 2017 Introduced New Visa Entry Restrictions and Requirements to Enhance Screening and Vetting, Including for NIVs In calendar year 2017, the President issued two executive orders and a presidential proclamation that required, among other actions, visa entry restrictions for nationals of certain countries of concern, a review of information needed for visa adjudication, and changes to visa (including NIV) screening and vetting protocols and procedures (see timeline in figure 11). Initially, the President issued Executive Order 13769, Protecting the Nation from Foreign Terrorist Entry Into the United States (EO-1), in January 2017. In March 2017, the President revoked and replaced EO-1 by issuing Executive Order 13780 (EO-2), which had the same title as EO-1. 
Among other things, EO-2 suspended entry of certain foreign nationals for a 90 day period, subject to exceptions and waivers. It further directed federal agencies—including DHS, State, DOJ and ODNI—to review information needs from foreign governments for visa adjudication and develop uniform screening and vetting standards for U.S. entities to follow when adjudicating immigration benefits, including NIVs. In September 2017, as a result of the reviews undertaken pursuant to EO-2, the President issued Presidential Proclamation 9645, Enhancing Vetting Capabilities and Processes for Detecting Attempted Entry into the United States by Terrorists or Other Public-Safety Threats (Proclamation), which imposes certain conditional restrictions and limitations on the entry of nationals of eight countries—Chad, Iran, Libya, North Korea, Somalia, Syria, Venezuela and Yemen—into the United States for an indefinite period. These restrictions are to remain in effect until the Secretaries of Homeland Security and State determine that a country provides sufficient information for the United States to assess adequately whether its nationals pose a security or safety threat. Challenges to both EOs and the Proclamation have affected their implementation and, while EO-2’s entry restrictions have expired, the visa entry restrictions outlined in the Proclamation continue to be fully implemented as of June 2018, consistent with the U.S. Supreme Court’s June 26, 2018, decision, which held that the President may lawfully establish nationality-based entry restrictions under the INA, and that Proclamation 9645 itself “is squarely within the scope of Presidential authority.” A more detailed listing of the executive actions and related challenges to those actions brought in the federal courts can be found in appendix III. Some NIV Applications in Fiscal Year 2017 Were Refused Due to the Executive Actions Taken in 2017; Adjudications of Applications for Nationals of Affected Countries Decreased from Prior Fiscal Years Our analysis of State data indicates, out of the nearly 2.8 million NIV applications refused in fiscal year 2017, 1,338 were refused due to visa entry restrictions implemented in accordance with the executive actions. To implement the entry restrictions, in March 2017, State directed its consular officers to continue to accept all NIV applications and determine whether the applicant was otherwise eligible for a visa without regard to the applicable EO or Proclamation. If the applicant was ineligible for the visa on grounds unrelated to the executive action, such as having prior immigration violations, the applicant was to be refused on those grounds. If the applicant was otherwise eligible for the visa, but fell within the scope of the nationality-specific visa restrictions implemented pursuant to the applicable EO or Proclamation and was not eligible for a waiver or exception, the consular officer was to refuse the visa and enter a refusal code into State’s NIV database indicating that the applicant was refused solely due to the executive actions. More than 90 percent of the NIV applications refused in fiscal year 2017 pursuant to an executive action were for tourist and business visitor visas, and more than 5 percent were for students and exchange visitors. 
State data also indicate that the number of applications adjudicated for nationals of the 7 countries identified in EO-1—Iran, Iraq, Libya, Somalia, Sudan, Syria and Yemen—decreased by 22 percent in fiscal year 2017, as compared to a 7 percent general decrease in NIV adjudications worldwide that year. For example, as shown in table 5, the decrease in adjudications from fiscal years 2016 to 2017 for nationals of the 7 countries identified in EO-1 ranged from around 12 percent to more than 40 percent. State, DHS, and Other Agencies Are Implementing Changes to NIV Screening and Vetting Processes Consistent with the Executive Actions and Associated Guidance As directed by the executive actions, DHS, State, DOJ, and ODNI took several steps to enhance NIV screening and vetting processes given their responsibilities for implementing the presidential actions. Among other things, the responsibilities included: (1) a review of information needed for visa adjudication; (2) the development of uniform screening standards for immigration programs; and (3) implementation of enhanced visa screening and vetting protocols and procedures. Review of information needed for visa adjudication. In accordance with EO-2, DHS conducted a worldwide review, in consultation with State and ODNI, to identify additional information needed from foreign countries to determine that an individual is not a security or public-safety threat when adjudicating an application for a visa, admission, or other immigration benefit. According to State officials, an interagency working group composed of State, DHS, ODNI, and National Security Council staff was formed to conduct the review. To conduct this review, DHS developed a set of criteria for information sharing in support of immigration screening and vetting, as shown by table 6. According to DHS officials, to develop these criteria, DHS, in coordination with other agencies, identified current standards and best practices for information collection and sharing under various categories of visas to create a core list of information needed from foreign governments in the visa adjudication process. For example, State sent an information request to all U.S. posts overseas requesting information on host nations’ information sharing practices, according to State officials. To assess the extent to which countries were meeting the newly established criteria, DHS officials stated that they used various information sources to preliminarily develop a list of countries that were or were not meeting the standards for adequate information sharing. For example, DHS officials stated that they reviewed information from INTERPOL on a country’s frequency of reporting lost and stolen passport information, consulted with ODNI for information on which countries are terrorist safe havens, and worked with State to obtain information that State officials at post may have on host nations’ information sharing practices. According to the Proclamation, based on DHS assessments of each country, DHS reported to the President on July 9, 2017, that 47 countries were “inadequate” or “at risk” of not meeting the standards. DHS officials identified several reasons that a country may have been assessed as “inadequate” with regard to the criteria. For example, some countries may have been willing to provide information, but lacked the capacity to do so. Or, some countries may not have been willing to provide certain information, or simply did not currently have diplomatic relations with the U.S. government. 
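The classification logic described above can be pictured with a deliberately simplified sketch: each country is scored against a set of information-sharing criteria and bucketed as adequate, at risk, or inadequate. The criteria names, countries, and thresholds below are hypothetical placeholders and do not represent DHS’s actual criteria, weights, or assessments, which this report does not detail:

```python
# Purely illustrative sketch of bucketing countries by the share of
# information-sharing criteria they meet. Criteria, countries, and thresholds
# are hypothetical placeholders, not DHS's methodology or assessments.

CRITERIA = [
    "reports_lost_stolen_passports",        # e.g., frequency of INTERPOL reporting
    "shares_identity_information",
    "shares_criminal_history_information",
    "shares_terrorist_identity_information",
]

# True means the hypothetical country meets that criterion.
assessments = {
    "Country A": {c: True for c in CRITERIA},
    "Country B": {**{c: True for c in CRITERIA}, "reports_lost_stolen_passports": False},
    "Country C": {c: False for c in CRITERIA},
}

def classify(results, at_risk_floor=0.5):
    """Bucket a country as adequate, at risk, or inadequate (illustrative only)."""
    share_met = sum(results.values()) / len(results)
    if share_met == 1.0:
        return "adequate"
    if share_met >= at_risk_floor:
        return "at risk"
    return "inadequate"

for country, results in assessments.items():
    print(f"{country}: {classify(results)}")
```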
As was required by EO-2, State engaged with foreign governments on their respective performance based on these criteria for a 50-day period. In July 2017, State directed its posts to inform their respective host governments of the new information sharing criteria and request that host governments provide the required information or develop a plan to do so. Posts were directed to then engage more intensively with countries DHS’s report preliminarily deemed “inadequate” or “at risk”. Each post was to submit an assessment of mitigating factors or specific interests that should be considered in the deliberations regarding any travel restrictions for nationals of those countries. DHS officials stated that they reviewed the additional information host nations provided to State and then reevaluated the initial classifications to determine if any countries remained “inadequate.” On September 15, 2017, in accordance with EO-2, DHS submitted to the President a list of countries recommended for inclusion in a presidential proclamation that would prohibit certain categories of foreign nationals of such countries from entering the United States. The countries listed were Chad, Iran, Libya, North Korea, Syria, Venezuela, and Yemen— which were assessed as “inadequate,” and Somalia, which was identified as a terrorist safe haven. The Presidential Proclamation indefinitely suspended entry into the United States of certain nonimmigrants from the listed countries (see table 7) and directed DHS, in consultation with State, to devise a process to assess whether the entry restrictions should be continued, modified or terminated. In September 2017, State issued additional guidance to posts on implementation of the Presidential Proclamation. As of July 2018, State continues to accept and process the NIV applications of foreign nationals from the eight countries covered by the Proclamation. Such applicants are to be interviewed, according to State guidance, and consular officers are to determine if the applicant is otherwise eligible for the visa, meets any of the proclamation’s exceptions, or qualifies for a waiver. Development of uniform screening standards for U.S. immigration benefit programs. Consistent with EO-2, State, DHS, DOJ, and ODNI developed a uniform baseline for screening and vetting standards and procedures by the U.S. government. According to State officials, an interagency working group comprised of State, DHS, DOJ, and ODNI staff is implementing these requirements. Based on its review of existing screening and vetting processes, DHS officials stated that the working group established uniform standards for (1) applications, (2) interviews, and (3) security system checks (i.e., biographic and biometric). Regarding applications, DHS officials stated that the group identified data elements against which applicants are to be screened and vetted. In February 2018, DHS Office of Policy officials stated that they had taken steps to create more consistency across U.S. government forms that collect information used for screening and vetting purposes, such as State’s DS-160 NIV application as well as 12 DHS forms. For example, officials stated that they anticipate issuing Federal Register notices announcing the intended changes to such forms. Regarding interviews, DHS officials stated that the working group established a requirement for all applicants seeking an immigration benefit, including NIV applicants, to undergo a baseline uniform national security and public safety interview. 
DHS officials stated that the working group modeled its interview baseline on elements of the refugee screening interview. To help implement this standard, DHS officials stated that the department is offering more training courses in enhanced communications (i.e. detecting deception and eliciting responses) and making such courses accessible to other U.S. government entities and U.S. officials overseas. Regarding security checks, the working group identified certain checks that should be conducted for all applicants seeking an immigration benefit, including NIV applicants. For example, DHS officials stated that the working group concluded that all applicants for U.S. immigration benefits should be screened against DHS’s TECS, among other federal databases. In February 2018, DHS Office of Policy officials stated that they were also exploring the extent to which current screening and vetting technologies can be expanded. For example, technology that is being used to screen applicants for counterterrorism concerns can potentially be modified to screen applicants for other concerns such as public safety or participation in transnational organized crime. However, these officials noted such changes to technology can take a long time. DHS officials stated that each department and agency is responsible for implementing the uniform standards for their relevant immigration programs. For example, with regard to maintaining information electronically, State officials stated that for nonimmigrant and immigrant visas, as of May 2018, they collected most, but not all, of the application data elements. In addition to executive actions taken in calendar year 2017, the President issued National Security Presidential Memorandum 9 on February 6, 2018, which directed DHS, in coordination with State, DOJ, and ODNI, to establish a National Vetting Center to optimize the use of federal government information in support of the national vetting enterprise. This memorandum stated that the U.S. government must develop an integrated approach to the use of intelligence and other data, across national security components, in order to improve how departments and agencies coordinate and use information to identify individuals presenting a threat to national security, border security, homeland security, or public safety. The center is to be overseen and guided by a National Vetting Governance Board, consisting of six senior executives designated by DHS, DOJ, ODNI, State, the Central Intelligence Agency, and the Department of Defense. Further, within 180 days of the issuance of the memorandum, these six departments and agencies, in coordination with the Office of Management and Budget, are to jointly submit to the President for approval an implementation plan for the center, addressing, among other things, the initial scope of the center’s vetting activities; the roles and responsibilities of agencies participating in the center; a resourcing strategy for the center; and a projected schedule to reach both initial and full operational capability. On February 14, 2018, the Secretary of Homeland Security selected an official to serve as the Director of the National Vetting Center and delegated the center’s authorities to CBP. DHS Office of Policy officials stated in February 2018 that the center is intended to serve as the focal point of the larger screening and vetting enterprise, and will coordinate policy and set priorities. 
The center will use the uniform baselines for screening and vetting standards and procedures established per EO-2 to set short- and long-term priorities to improve screening and vetting across the U.S. government. Further, these officials stated screening and vetting activities will continue to be implemented by the entities that are currently implementing such efforts, but roles and responsibilities for screening and vetting for immigration benefits may be modified in the future based on the work of the center. According to DHS Office of Policy officials, efforts to implement National Security Presidential Memorandum 9, such as the development of an implementation plan, are ongoing as of June 2018. Implementation of new visa screening and vetting protocols and procedures. In response to the EOs and a March 2017 presidential memorandum issued the same day as EO-2, State has taken several actions to implement new visa screening and vetting protocols and procedures. For example, State sought and received emergency approval from the Office of Management and Budget in May 2017 to develop a new form, the DS-5535. The form collects additional information from a subset of visa applicants to more rigorously evaluate applicants for visa ineligibilities, including those related to national security and terrorism. The new information requested includes the applicant's travel history over the prior 15 years, all phone numbers used over the prior 15 years, and all email addresses and social media handles used in the last 5 years. State estimated that, across all posts, the groups requiring additional vetting represented about 70,500 individuals per year.
Agency Comments
We provided a draft of the sensitive version of this report to DHS, DOJ, State, and ODNI. DHS, DOJ, and State provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until seven days from the report date. At that time, we will send copies of this report to the Secretaries of Homeland Security and State, the Attorney General, and the Director of National Intelligence. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or GamblerR@gao.gov. Key contributors to this report are listed in appendix IV.
Appendix I: Enclosures on Nonimmigrant Visa Groups
There are many nonimmigrant visas (NIV), which are issued to foreign nationals such as tourists, business visitors, and students seeking temporary admission into the United States. For the purposes of this report, we placed the majority of NIVs into one of seven groups. In the following enclosures, we provide a descriptive overview of each group on the basis of our analysis of the Department of State's (State) fiscal years 2012 through 2017 NIV data. Each enclosure also contains the following:
Description of the group. In this section, we provide a narrative description of the group, as well as a table of the specific NIVs that comprise the group.
Characteristics of the applicants. In this section, we provide the number of annual NIV adjudications for fiscal years 2012 through 2017, the specific NIVs adjudicated in fiscal year 2017 within the group, the regions to which applicants applied for these NIVs in fiscal year 2017, and the top five nationalities that applied for NIVs in the group in fiscal year 2017.
Issuances.
In this section, we provide the number of NIVs issued within this group for fiscal years 2012 through 2017.
Refusals. In this section, we provide the refusal rate for the entire NIV group for fiscal years 2012 through 2017. For the NIVs that were refused in fiscal year 2017 for this group, we also provide the top ground for refusal. NIV applicants can be refused a visa on a number of grounds of inadmissibility or other ineligibility under U.S. immigration law and State policy. However, across all visa groups, the top categories were either ineligible nonimmigrant or inadequate documentation:
Ineligible nonimmigrant. For most NIV categories, the applicant is presumed to be an intending immigrant until the applicant establishes to the satisfaction of the consular officer that he or she is entitled to a nonimmigrant status. An applicant may be refused under this provision if, among other things, the consular officer determines the applicant lacks sufficient ties to his or her home country, or intends to abandon foreign residence; that evidence otherwise indicates an intent to immigrate to the United States permanently; or that the applicant is likely to violate the terms of the visa after being admitted.
Inadequate documentation. The consular officer determined that the application is not in compliance with the INA because, for example, it lacks necessary documentation to allow the consular officer to determine visa eligibility. In such cases, the applicant would not be found eligible for the visa unless and until satisfactory documentation is provided to the consular officer or after the completion of administrative processing, such as security advisory opinions.
Tourist and business visitor visas
Characteristics of the applicant pool: number of adjudications, fiscal years 2012 through 2017 (in millions); visa types, FY 2017 (9,968,157 adjudications); region in which applicant applied, FY 2017 (9,968,157 adjudications).
Issuances and refusals:
● Issued tourist and business visitor visas rose 22 percent from fiscal years 2012 through 2015, and declined by about 13 percent from fiscal years 2015 to 2017.
● The refusal rate for tourist and business visitor visas generally increased each year from fiscal year 2012 through fiscal year 2017.
● The vast majority of refusals in fiscal year 2017 were due to the applicant's inability to overcome the presumption of his or her intent to immigrate or meet the visa's eligibility criteria.
Figures: issued visas, fiscal years 2012 through 2017 (in thousands); visa refusal rates, fiscal years 2012 through 2017 (percentage).
Student and exchange visitor visas
Characteristics of the applicant pool: number of adjudications, fiscal years 2012 through 2017 (in thousands); visa types, FY 2017 (992,855 adjudications); region in which applicant applied, FY 2017 (992,855 adjudications).
Issuances and refusals:
● Generally, student and exchange visitor visa issuances decreased each year from fiscal years 2015 through 2017.
● The refusal rate for student and exchange visitor visas peaked in fiscal year 2016, and slightly declined in fiscal year 2017.
● The vast majority of refusals in fiscal year 2017 were due to the applicant's inability to overcome the presumption of his or her intent to immigrate or meet the visa's eligibility criteria.
Figures: issued visas, fiscal years 2012 through 2017 (in thousands); visa refusal rates, fiscal years 2012 through 2017 (percentage).
Temporary worker visas
Characteristics of the applicant pool: number of adjudications, fiscal years 2012 through 2017 (in thousands); visa types, FY 2017 (884,667 adjudications); region in which applicant applied, FY 2017 (884,667 adjudications).
Issuances and refusals:
● Issued H-2A visas more than doubled from fiscal years 2012 through 2017. Department of State officials noted, for example, that H-2A visas are not numerically limited by statute. They also stated that they believe U.S. employers are increasingly less likely to hire workers without lawful status and are petitioning for lawfully admitted workers.
● Generally, the refusal rates for temporary worker visas decreased from fiscal years 2012 through 2017.
● In fiscal year 2017, temporary worker visas were most frequently refused because the applicant did not provide adequate documentation to the consular officer.
Figures: issued visas, fiscal years 2012 through 2017 (in thousands); visa refusal rates, fiscal years 2012 through 2017 (percentage).
Transit and crewmember visas
Characteristics of the applicant pool: number of adjudications, fiscal years 2012 through 2017 (in thousands); visa types, FY 2017 (330,117 adjudications); region in which applicant applied, FY 2017 (330,117 adjudications).
Issuances and refusals:
● Issued transit and crewmember visas increased by about 8 percent from fiscal years 2012 through 2017 (from about 295,000 to 320,000). Specifically, issued C-1/D visas increased over the same time period, but the number of issued visas for the remaining visa types in this category has decreased.
● The refusal rates for transit and crewmember visas varied over the period of fiscal years 2012 through 2017.
● The majority of refusals in fiscal year 2017 were due to the applicant's inability to overcome the presumption of his or her intent to immigrate or meet the visa's eligibility criteria.
Figures: issued visas, fiscal years 2012 through 2017 (in thousands); visa refusal rates, fiscal years 2012 through 2017 (percentage).
Foreign official and employee visas
Characteristics of the applicant pool: number of adjudications, fiscal years 2012 through 2017 (in thousands); visa types, FY 2017 (166,187 adjudications); region in which applicant applied, FY 2017 (166,187 adjudications).
Issuances and refusals:
● Issued foreign official and employee visas remained generally stable over the period of fiscal years 2012 through 2017.
● The refusal rates for foreign official and employee visas remained under 4 percent.
● In fiscal year 2017, foreign official and employee visas were most frequently refused because the applicant did not provide adequate documentation to the consular officer.
Figures: issued visas, fiscal years 2012 through 2017 (in thousands); visa refusal rates, fiscal years 2012 through 2017 (percentage).
Treaty trader and investor visas
Characteristics of the applicant pool: number of adjudications, fiscal years 2012 through 2017 (in thousands); visa types, FY 2017 (68,580 adjudications); region in which applicant applied, FY 2017 (68,580 adjudications).
Issuances and refusals:
● Overall, issued treaty trader and investor visas increased over the period of fiscal years 2012 through 2017. Issuances for E-3 visas nearly doubled from fiscal year 2012 through 2017, but comprise a small percentage of this category overall.
● Generally, refusal rates for treaty trader and investor visas increased slightly over the period of fiscal years 2012 through 2017.
● The majority of refusals in fiscal year 2017 were due to the applicant's inability to overcome the presumption of their intent to immigrate or meet the visa's eligibility criteria.
Figures: issued visas, fiscal years 2012 through 2017 (in thousands); visa refusal rates, fiscal years 2012 through 2017 (percentage).
Fiancé(e) and spouse visas
Characteristics of the applicant pool: number of adjudications, fiscal years 2012 through 2017 (in thousands); visa types, FY 2017 (40,533 adjudications); region in which applicant applied, FY 2017 (40,533 adjudications).
Issuances and refusals:
● The number of issued fiancé(e) and spouse visas fluctuated over the period of fiscal years 2012 through 2017, but increased overall during this time period.
● Refusal rates for fiancé(e) and spouse visas were relatively low during the period of fiscal years 2012 through 2017.
● Most refusals in fiscal year 2017 were due to inadequate documentation from the visa applicant, potentially indicating that such applications failed to include necessary documentation for the consular officer to ascertain whether the applicant was eligible to receive a visa at that time.
Figures: issued visas, fiscal years 2012 through 2017 (in thousands); visa refusal rates, fiscal years 2012 through 2017 (percentage).
Appendix II: Nonimmigrant Visa Statistics, Fiscal Years 2012 through 2017
Nonimmigrant visas (NIV) are issued to foreign nationals such as tourists, business visitors, and students seeking temporary admission into the United States. The Department of State (State) is generally responsible for the adjudication of NIV applications, and manages the application process, including the consular officer corps and its functions at more than 220 U.S. embassies and consulates (i.e., visa-issuing posts) overseas. Depending on various factors, such as the particular NIV sought, the applicant's background, and visa demand, State officials noted that the length of the visa adjudication process can vary from a single day to months. This appendix provides descriptive statistics of NIV adjudications, issuances, and refusals for fiscal years 2012 through 2017. Specific details are shown in table 8 below. State data from fiscal years 2012 through 2016 indicate that NIV adjudications generally followed an annual cycle, ebbing during certain months during the fiscal year; however, adjudications in fiscal year 2017 departed slightly from this trend. Specifically, from fiscal years 2012 through 2016, the number of NIV adjudications typically reached its highest peak in the summer months, as shown in table 9. For example, State officials noted that a summer peak is generally due to international students who are applying for their visas for the coming academic year. There are many NIVs, and for the purposes of this report, we have placed the majority of NIVs into one of seven groups. Table 10 includes the annual NIV adjudications, issuances, and refusal rates for each visa group for fiscal years 2012 through 2017. NIV applicants seeking to travel to the United States represent many different nationalities, but the countries of nationality with the most NIV adjudications have remained relatively consistent in recent years. Table 11 provides the top 25 countries of nationality for NIV adjudications for fiscal years 2012 through 2017. NIV applicants can apply for their NIVs at more than 220 visa-issuing U.S. posts overseas. Table 12 describes the regions to which NIV applicants applied from fiscal years 2012 through 2017.
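The quantities reported in this appendix follow from adjudication-level counts: for a given visa group and fiscal year, adjudications broadly correspond to issuances plus refusals, and the refusal rate is refusals as a share of adjudications. The short Python sketch below is purely illustrative of that arithmetic; the record format, field names, and values are invented for this example and do not represent State's actual data systems or figures.

from collections import defaultdict

# Hypothetical adjudication records: (fiscal_year, visa_group, outcome).
# The structure and values here are illustrative only.
records = [
    (2017, "Tourist and business visitor", "issued"),
    (2017, "Tourist and business visitor", "refused"),
    (2017, "Student and exchange visitor", "issued"),
    (2016, "Temporary worker", "issued"),
    (2016, "Temporary worker", "refused"),
]

# Tally adjudications and refusals for each (fiscal year, visa group) pair.
totals = defaultdict(lambda: {"adjudications": 0, "refusals": 0})
for year, group, outcome in records:
    key = (year, group)
    totals[key]["adjudications"] += 1
    if outcome == "refused":
        totals[key]["refusals"] += 1

# Report issuances and the refusal rate (refusals as a share of adjudications).
for (year, group), t in sorted(totals.items()):
    issued = t["adjudications"] - t["refusals"]
    rate = 100.0 * t["refusals"] / t["adjudications"]
    print(f"FY{year} {group}: {issued} issued, refusal rate {rate:.1f}%")

In the published tables, the same kinds of totals are reported in thousands or millions of adjudications per visa group and fiscal year.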
NIV applicants can be refused a visa on a number of grounds of inadmissibility or other ineligibility under U.S. immigration law and State policy. For the purposes of this report, we have grouped most of these grounds for refusal into one of seven categories, and group the remaining into a miscellaneous category, as shown in table 13.
Appendix III: Foreign National Entry Restrictions and Related Litigation, January 2017 Through June 2018
From January through October 2017, the administration took various executive actions establishing nationality-based entry restrictions for certain categories of foreign nationals from designated countries. This appendix supplements information included in this report to provide a more comprehensive presentation of changes to U.S. immigration policy affecting nonimmigrant and immigrant entry into the United States, and outlines the legal standards applied, and precedent developed and relied upon, by federal courts in resolving challenges to the executive actions. In particular, it describes relevant aspects of the executive actions specifically addressed in this report—Executive Orders 13769 and 13780, both titled Protecting the Nation from Foreign Terrorist Entry into the United States, and Presidential Proclamation 9645, Enhancing Vetting Capabilities and Processes for Detecting Attempted Entry into the United States by Terrorists or Other Public-Safety Threats—that imposed visa entry restrictions on certain countries' nationals and included provisions addressing NIV screening and vetting, as well as other executive actions on immigration issued by the current administration. Furthermore, this appendix provides a detailed account of the interrelated challenges to these executive actions brought in the federal courts through June 2018. In summary, on March 6, 2017, the President issued Executive Order (EO) 13780, Protecting the Nation from Foreign Terrorist Entry Into the United States, which instituted visa and refugee entry restrictions, and an accompanying memorandum addressed to the Secretaries of State and Homeland Security and the Attorney General, calling for heightened screening and vetting of visa applications and other immigration benefits. EO 13780 stated that it is U.S. policy to improve the screening and vetting protocols and procedures associated with the visa-issuance process and U.S. Refugee Admissions Program (USRAP). Enforcement of sections 2(c) and 6(a) of EO 13780, which established visa entry restrictions for nationals of six countries of particular concern—Iran, Libya, Somalia, Sudan, Syria, and Yemen—for a 90-day period, and suspended all refugee admissions for 120 days, was enjoined by federal district court orders issued in March 2017. On appeal, the U.S. Courts of Appeals for the Fourth and Ninth Circuits generally upheld these decisions. Upon review by the U.S. Supreme Court in June 2017, the injunction was partially lifted except with respect to foreign nationals who have bona fide ties to the United States. Implementation of EO 13780 commenced on June 29, 2017. On September 24, 2017, pursuant to section 2(e) of EO 13780, the President issued Presidential Proclamation 9645, Enhancing Vetting Capabilities and Processes for Detecting Attempted Entry Into the United States by Terrorists or Other Public-Safety Threats. This proclamation restricts entry into the United States of certain categories of foreign nationals from eight countries—Chad, Iran, Libya, North Korea, Somalia, Syria, Venezuela, and Yemen—for an indefinite period.
Preliminary injunctions issued by the U.S. District Courts for the Districts of Maryland (Maryland federal district court) and Hawaii (Hawaii federal district court) in October 2017 prohibited implementation of these visa entry restrictions except with respect to North Korean and Venezuelan nationals. On December 4, 2017, the U.S. Supreme Court issued two orders staying these district court injunctions; and on January 19, 2018, the Supreme Court granted the government’s petition for review of the December 22, 2017, decision of the Ninth Circuit, which partially affirmed the Hawaii federal district court’s preliminary injunction. As of June 2018, these latest visa entry restrictions continue to be fully implemented consistent with the Supreme Court’s June 26, 2018, decision, which held that the President may lawfully establish nationality-based entry restrictions, and that Proclamation 9645 itself “is squarely within the scope of Presidential authority.” The following sections describe these executive actions and related litigation in greater detail. Executive Actions and Related Litigation Executive Order 13769 On January 27, 2017, the President issued EO 13769, Protecting the Nation from Foreign Terrorist Entry Into the United States, which directed a review of information needs for adjudicating visas and other immigration benefits to confirm individuals seeking such benefits are who they claim to be, and are not security or public-safety threats. To temporarily reduce investigative burdens during the review period, the EO suspended U.S. entry for nationals of seven countries of particular concern—Iran, Iraq, Libya, Somalia, Sudan, Syria, and Yemen. In addition, EO 13769 put USRAP on hold for 120 days and indefinitely barred admission of Syrian refugees. Shortly after its issuance, however, the EO faced numerous legal challenges in federal courts across the country involving various constitutional and statutory issues such as detainee applications for writs of habeas corpus, alleged religious or nationality-based discrimination, and the extent of the EO’s applicability to certain categories of foreign nationals, including U.S. lawful permanent residents (LPR) and dual nationals holding passports issued by a listed country as well as another nation not subject to visa entry restrictions. On February 3, 2017, the Washington federal district court entered a nationwide temporary restraining order (TRO) prohibiting enforcement of the EO’s entry restrictions. In rejecting the government’s argument that a TRO only cover the particular states at issue, the court reasoned that partial implementation would “undermine the constitutional imperative of ‘a uniform Rule of Naturalization’ and Congress’s instruction that the ‘immigration laws of the United States should be enforced vigorously and uniformly.’” On February 9, 2017, the Ninth Circuit affirmed the nationwide injunction, thereby denying the government’s emergency motion for a stay of the Washington federal district court’s TRO pending appeal, because the government did not show a likelihood of success on the merits of its appeal, or that failure to enter a stay would cause irreparable injury. On March 6, 2017, however, the President issued EO 13780, which revoked and replaced EO 13769, and established revised restrictions on entry for nationals of the same countries of particular concern, except Iraq. 
Executive Order 13780 On March 6, 2017, the President signed EO 13780, Protecting the Nation from Foreign Terrorist Entry Into the United States, which revoked and replaced EO 13769 and put in place revised visa and refugee entry restrictions, and issued an accompanying memorandum calling for heightened screening and vetting of visa applications and other immigration benefits. In general, sections 2(c) and 6(a) of EO 13780 barred visa travel for nationals of six designated countries—Iran, Libya, Somalia, Sudan, Syria, and Yemen—for 90 days, and all refugee admission for 120 days. On March 15, 2017, sections 2 and 6 of the EO were enjoined on statutory grounds (i.e., based on potential violation of U.S. immigration law) pursuant to the order of the Hawaii federal district court granting the plaintiffs’ motion for a TRO. On March 16, 2017, the Maryland federal district court issued a preliminary injunction barring implementation of visa entry restrictions on a nationwide basis with respect to nationals of the six listed countries. On May 25, 2017, the Fourth Circuit affirmed the Maryland federal district court’s injunction on constitutional grounds (i.e., based on potential violation of the Establishment Clause of the First Amendment to the U.S. Constitution). On June 12, 2017, the Ninth Circuit generally affirmed the Hawaii federal district court’s ruling, but vacated the district court’s order to the extent it enjoined internal review procedures not burdening individuals outside the Executive Branch, therefore permitting the administration to conduct the internal reviews of visa information needs as directed in the EO. On June 14, 2017, the President issued a memorandum to the Secretaries of State and Homeland Security, Attorney General, and Director of National Intelligence, directing that sections 2 and 6 of EO 13780 were to be implemented 72 hours after all applicable injunctions are lifted or stayed. On June 26, 2017, the Supreme Court granted, in part, the government’s application to stay the March 15 and 16 injunctions of the Hawaii and Maryland federal district courts, as generally upheld on May 25 and June 12 by the Fourth and Ninth Circuits. The Court explained that the administration may enforce visa and refugee travel restrictions under sections 2 and 6 except with respect to an individual who can “credibly claim a bona fide relationship with a person or entity in the United States.” In the case of a visa or refugee applicant who is the relative of a person in the United States, such foreign national would be exempt from entry restrictions provided the family connection with their U.S. relative meets the “close familial relationship” standard. The Court further explained that a qualifying relationship with a U.S. entity would have to be formal, documented, and formed in the ordinary course, and not for the purpose of evading EO 13780. On June 29, 2017, the day that implementation of EO 13780 began, the State Department issued guidance providing that a close familial relationship exists for the parents, spouse, children, adult sons or daughters, sons and daughters-in-law, and siblings of a person in the United States, but not for such person’s grandparents, grandchildren, uncles, aunts, nephews, nieces, sisters-in-law, brothers-in-law or other relatives. 
The State of Hawaii filed a motion with the Hawaii federal district court seeking, among other things, a declaration that the partial injunction in place after the Supreme Court’s ruling prohibited application of travel restrictions to fiancés, grandparents, grandchildren, brothers and sisters in-law, aunts, uncles, nieces, nephews, and cousins of persons in the United States. On July 13, 2017, the Hawaii federal district court ruled, among other things, that section 2 of the EO, generally barring travel to the United States for nationals of certain countries, does not apply to the grandparents, grandchildren, brothers and sisters in-law, aunts, uncles, nieces, nephews and cousins of persons in the United States, who were initially excluded from the administration’s interpretation of “close family.” The government appealed this decision to the Supreme Court. On July 19, 2017, the Supreme Court denied the government’s motion seeking further clarification of its June 26 ruling, stayed the Hawaii federal district court’s order to the extent it included refugees covered by a formal assurance from a U.S.-based resettlement agency within the scope of the preliminary injunction, pending appeal to the Ninth Circuit, and left unchanged the district court’s broader formulation of exempt “close family.” On September 7, 2017, the Ninth Circuit upheld the Hawaii federal district court’s definition of close family members who are not to be subjected to travel restrictions, and rejected the government’s argument that refugees who had undergone a stringent review process and been approved by U.S.-based resettlement agencies lack a bona fide relationship to the United States, thus allowing admission of such refugees. On September 11, 2017, the Supreme Court temporarily enjoined aspects of the Hawaii federal district court’s holding that would permit admission of certain refugees with formal assurances from a U.S. resettlement entity. The next day, on September 12, 2017, the Supreme Court indefinitely stayed the Ninth Circuit’s September 7 ruling with respect to refugees covered by a formal assurance, thereby permitting the administration to suspend entry of such refugees. On September 24, 2017, pursuant to section 2(e) of EO 13780, the President issued Presidential Proclamation 9645, Enhancing Vetting Capabilities and Processes for Detecting Attempted Entry Into the United States by Terrorists or Other Public Safety Threats, which expanded the scope and duration of visa entry restrictions from six to eight countries, and from a 90-day to an indefinite period for the listed countries. On September 25, 2017, in light of the September 24 proclamation, the Supreme Court directed the parties to file briefs addressing whether, or to what extent, the cases before it regarding EO 13780 are moot. On October 10, 2017, after receiving the parties’ supplemental briefs, the Supreme Court decided that because section 2(c) of EO 13780 expired on September 24, there was no live case or controversy; and without expressing a view on the merits, the Court vacated and remanded the Maryland case to the Fourth Circuit with instructions to dismiss as moot the challenge to EO 13780. On October 24, 2017, consistent with its October 10 ruling, the Supreme Court also vacated and remanded the Hawaii case related to EO 13780 to the Ninth Circuit with instructions to dismiss it as moot. 
Consequently, after challenges to EO 13780 visa and refugee entry restrictions, as curtailed by the Supreme Court's ruling of June 26, 2017, were rendered moot, litigation continued with respect to the President's proclamation of September 24, 2017.
Presidential Proclamation 9645
On September 24, 2017, pursuant to section 2(e) of EO 13780, the President issued Presidential Proclamation 9645 (the Proclamation), Enhancing Vetting Capabilities and Processes for Detecting Attempted Entry Into the United States by Terrorists or Other Public-Safety Threats, which imposes certain conditional restrictions and limitations on entry into the United States of nationals of eight countries—Chad, Iran, Libya, North Korea, Somalia, Syria, Venezuela, and Yemen—for an indefinite period. According to the Proclamation, travel restrictions are tailored to each nation's information sharing and identity management deficiencies based on standard immigration screening and vetting criteria established by the Secretary of Homeland Security, and are to remain in effect until such time as the Secretaries of Homeland Security and State determine that a country provides sufficient information for the United States to assess adequately whether its nationals pose a security or safety threat. On October 17, 2017, the Hawaii federal district court issued a TRO, on statutory grounds, enjoining on a nationwide basis the implementation and enforcement of travel restrictions provided for under the Proclamation, except with respect to North Korean or Venezuelan nationals. On the same day, the Maryland federal district court granted in part plaintiffs' motion for preliminary injunction, primarily on constitutional grounds, thereby prohibiting implementation of visa entry restrictions nationwide, except for nationals of North Korea and Venezuela as well as other covered foreign nationals who lack a credible claim of a bona fide relationship with a person or entity in the United States. On October 20, 2017, the Hawaii federal district court converted its October 17 TRO into a preliminary injunction, thereby continuing the nationwide prohibition on enforcement or implementation of the suspension on entry for nationals of Chad, Iran, Libya, Somalia, Syria, and Yemen. The district court did not stay its ruling or hold it in abeyance should an appeal be filed in the Ninth Circuit. On November 13, 2017, the Ninth Circuit granted, in part, the government's request for an emergency stay of the Hawaii federal district court's preliminary injunction, thereby allowing visa entry restrictions to go into effect with respect to the nationals of Chad, Iran, Libya, Somalia, Syria, and Yemen. However, consistent with the Supreme Court's June 2017 ruling, the court ordered that those with a bona fide relationship to a person or entity in the United States not be subject to such travel restrictions. On November 20, 2017, the government petitioned the Supreme Court for a stay of the preliminary injunction issued by the Hawaii federal district court, pending consideration and disposition of the government's appeal from that injunction to the Ninth Circuit and, if that court affirms the injunction, pending filing and disposition of a petition for a writ of certiorari and any further proceedings in the Supreme Court. On November 28, 2017, plaintiffs in the challenge to the Proclamation arising out of Hawaii asked that the Supreme Court deny the government's request to lift the partial injunction left in place by the Ninth Circuit.
On the same day, plaintiffs in the case arising out of Maryland requested that the Supreme Court not grant a stay of the federal district court’s preliminary injunction. In both cases, plaintiffs assert that the more expansive visa entry restrictions violate U.S. immigration law; additionally, for the Maryland case, plaintiffs argue that such restrictions are unconstitutional as a form of discrimination based on national origin. On December 4, 2017, the Supreme Court issued two orders staying the Maryland and Hawaii federal district courts’ orders of October 17 and 20 that preliminarily enjoined implementation of the Proclamation, pending decisions of the Ninth and Fourth Circuits in the government’s appeals, and of the Supreme Court regarding a petition for a writ of certiorari (if sought). As a result, the Proclamation’s visa entry restrictions were permitted to go into full effect unless and until they are either enjoined by the courts of appeals and a writ of certiorari is not sought thereafter, or the Supreme Court either denies a petition for certiorari (thereby resulting in termination of the Supreme Court’s stay order) or grants such petition followed by a final injunction prohibiting current or future implementation of the Proclamation’s restrictions. The Supreme Court further noted its expectation that the courts of appeals will render decisions “with appropriate dispatch,” in light of both courts having decided to consider their respective cases on an expedited basis. On December 8, 2017, the Department of State announced that it began fully implementing the Proclamation, as permitted by the Supreme Court, at the opening of business at U.S. embassies and consulates overseas. On December 22, 2017, the Ninth Circuit affirmed in part and vacated in part the Hawaii federal district court’s October 20 order enjoining enforcement of visa entry restrictions under the Proclamation, while limiting the preliminary injunction’s scope to foreign nationals who have a bona fide relationship with a person or entity in the United States. Without reaching plaintiffs’ constitutional claims, the court of appeals concluded that the Proclamation exceeded the scope of authority delegated to the President by Congress under the Immigration and Nationality Act (INA), in particular, sections 202(a)(1)(A) (immigrant visa nondiscrimination) and 212(f) (presidential suspension of, or imposition of restrictions on, alien entry), by deviating from statutory text, legislative history and prior executive practice; not including the requisite finding that entry of certain foreign nationals would be detrimental to U.S. interests; and contravening the INA’s prohibition on nationality-based discrimination in the issuance of immigrant visas. However, the court stayed its decision, given that the Supreme Court’s December 4 order lifted the federal district courts’ injunctions pending not only review by the courts of appeals, but also “disposition of the Government’s petition for a writ of certiorari, if such writ is sought.” On January 5, 2018, the government filed a petition for a writ of certiorari seeking review of the December 22, 2017, judgment of the Ninth Circuit which left in place the Hawaii federal district court injunction of the Proclamation’s visa entry restrictions for individuals with bona fide ties to the United States. On January 19, 2018, the Supreme Court granted the government’s certiorari petition and will therefore consider, and issue an opinion on the merits of, the Ninth Circuit’s decision. 
On February 15, 2018, the Fourth Circuit affirmed the preliminary injunction granted by the Maryland federal district court on constitutional grounds, but stayed its decision pending the outcome of the Ninth Circuit case before the Supreme Court. The court of appeals found that "[p]laintiffs offer undisputed evidence that the President has openly and often expressed his desire" to bar the entry of Muslims into the United States. Therefore, the court concluded that, in light of the President's official statements, the Proclamation likely violates the Establishment Clause as it "fails to demonstrate a primarily secular purpose," and also goes against the basic principle that government is not to act with religious animus. On February 23, 2018, Fourth Circuit challengers filed a petition for a writ of certiorari seeking to have the Supreme Court consolidate their case with the Court's ongoing review of the Ninth Circuit decision. These petitioners requested that the Court additionally consider their argument that the preliminary injunction should not have been limited to individuals with a bona fide relationship to a person or entity in the United States. On February 26, 2018, the Supreme Court granted Fourth Circuit petitioners' motion to expedite consideration of their certiorari petition. On April 10, 2018, the President issued a proclamation announcing that because Chad has improved its identity-management and information sharing practices sufficiently to meet U.S. baseline security standards, nationals of Chad will again be able to receive visas for travel to the United States. On June 26, 2018, the Supreme Court held that the President lawfully exercised the broad discretion granted to him under INA § 212(f) (presidential suspension of, or imposition of restrictions on, alien entry), by issuing Proclamation No. 9645, which established nationality-based visa entry restrictions applicable to categories of foreign nationals from eight (now seven) countries for an indefinite period. In addition, while three individual plaintiffs had standing to bring an Establishment Clause challenge to entry restrictions prohibiting their relatives from coming to the United States, the Court found the Proclamation to be legitimate on its face as a way to prevent entry of certain foreign nationals where the government determines there is insufficient information for visa vetting. As a result of the Supreme Court's June 26, 2018, decision, which held that the establishment of nationality-based entry restrictions is a lawful exercise of the President's broad discretion in matters of immigration and national security, the visa entry restrictions imposed on categories of foreign nationals from certain countries pursuant to Presidential Proclamation 9645 continue to be fully implemented, as they have been since the Supreme Court's December 4, 2017, orders staying the lower courts' injunctions.
Executive Order 13815
On October 24, 2017, the same day the 120-day suspension of refugee admissions under EO 13780 expired, the President signed EO 13815, Resuming the United States Refugee Admissions Program With Enhanced Vetting Capabilities, which resumed USRAP and directed that special measures be applied to certain categories of refugees posing potential threats to the security and welfare of the United States.
On December 23, 2017, the Washington federal district court issued a nationwide preliminary injunction on aspects of EO 13815 (and its accompanying memorandum), thus prohibiting the administration from: (1) temporarily suspending admission of refugees from 11 previously identified countries of concern, and reallocating resources from the processing of their applications during the 90-day review period (except for those lacking a bona fide relationship with a person or entity in the United States); and (2) indefinitely barring admission of, and application processing for, all following-to-join refugees. On January 5, 2018, the Washington federal district court denied the government's motion for reconsideration of the court's December 23, 2017, order temporarily halting enforcement of refugee entry restrictions that were to be implemented as part of the resumption of USRAP under the EO. Specifically, the government "ask[ed] the court to 'modify its preliminary injunction to exclude from coverage refugee applicants who seek to establish a [bona fide relationship] on the sole ground that they have received a formal assurance from a resettlement agency.'" In denying the government's motion for reconsideration, the court relied on the September 7, 2017, decision of the Ninth Circuit which, among other things, rejected the notion that refugees with formal assurances from U.S.-based resettlement agencies do not meet the Supreme Court's bona fide relationship standard. The court treated this Ninth Circuit ruling as binding precedent given that the Supreme Court's indefinite stay of September 12 neither vacated the Ninth Circuit's decision, nor provided any underlying reason(s) that would allow another court to discern its rationale. On January 9, 2018, the Washington federal district court also denied the government's emergency motion for a stay of the court's December 23, 2017, preliminary injunction, pending appeal to the Ninth Circuit. On January 31, 2018, DHS announced additional security measures to prevent exploitation of USRAP. Specifically, these security measures include additional screening for certain nationals of high-risk countries, a more risk-based approach to administering USRAP, and a periodic review and update of the refugee high-risk countries list and selection criteria. Therefore, as of June 2018, while the administration has announced additional security measures to strengthen the integrity of USRAP, the Washington federal district court's December 23, 2017, preliminary injunction of EO 13815 continues to: (1) prohibit implementation of the temporary suspension of admission, and reallocation of resources from processing applications, of refugees from 11 previously identified countries of concern; and (2) forbid enforcement of the indefinite bar on entry of following-to-join refugees.
Appendix IV: GAO Contact and Staff Acknowledgements
GAO Contact
Staff Acknowledgements
In addition to the contact named above, Kathryn Bernet (Assistant Director), Colleen Corcoran, Eric Hauswirth, Thomas Lombardi, Amanda Miller, Sasan J. "Jon" Najmi, Erin O'Brien, Garrett Riba, and Dina Shorafa made significant contributions to this report.
Why GAO Did This Study Previous attempted and successful terrorist attacks against the United States have raised questions about the security of the U.S. government's process for adjudicating NIVs, which are issued to foreign nationals, such as tourists, business visitors, and students, seeking temporary admission into the United States. For example, the December 2015 shootings in San Bernardino, California, led to concerns about NIV screening and vetting processes because one of the attackers was admitted into the United States under a NIV. In 2017, the President issued executive actions directing agencies to improve visa screening and vetting, and establishing nationality-based visa entry restrictions, which the Supreme Court upheld in June 2018. GAO was asked to review NIV screening and vetting. This report examines (1) outcomes and characteristics of adjudicated NIV applications from fiscal years 2012 through 2017, and (2) key changes made to the NIV adjudication process in response to executive actions taken in 2017. GAO analyzed State NIV adjudication data for fiscal years 2012 through 2017, the most recent and complete data available. GAO visited seven consular posts selected based on visa workload and other factors. GAO reviewed relevant executive orders and proclamations, and documents related to implementing these actions. This is a public version of a sensitive report issued in June 2018. Information that DHS, State, and the Office of the Director of National Intelligence deemed sensitive has been removed. What GAO Found The total number of nonimmigrant visa (NIV) applications that Department of State (State) consular officers adjudicated annually peaked at about 13.4 million in fiscal year 2016, and decreased by about 880,000 adjudications in fiscal year 2017. NIV adjudications varied by visa group, country of nationality, and refusal reason: Visa group. From fiscal years 2012 through 2017, about 80 percent of NIV adjudications were for tourists and business visitors. During this time, adjudications for temporary workers increased by about 50 percent and decreased for students and exchange visitors by about 2 percent. Country of nationality. In fiscal year 2017, more than half of all NIV adjudications were for applicants of six countries of nationality: China (2.02 million, or 16 percent), Mexico (1.75 million, or 14 percent), India (1.28 million, or 10 percent), Brazil (670,000, or 5 percent), Colombia (460,000, or 4 percent), and Argentina (370,000, or 3 percent). Refusal reason. State data indicate that over this time period, 18 percent of adjudicated applications were refused; more than 90 percent were because the applicant did not qualify for the visa sought, and a small percentage (0.05 percent) were due to terrorism and security-related concerns. In 2017, two executive orders and a proclamation issued by the President required, among other actions, visa entry restrictions for nationals of certain listed countries of concern, the development of uniform baseline screening and vetting standards, and changes to NIV screening and vetting procedures. GAO's analysis of State data indicates that, out of the nearly 2.8 million NIV applications refused in fiscal year 2017, 1,338 applications were refused due to visa entry restrictions implemented per the executive actions. State, the Department of Homeland Security (DHS), and others developed standards for screening and vetting by the U.S. 
government for all immigration benefits, such as for the requirement for applicants to undergo certain security checks. Further, State sought and received emergency approval from the Office of Management and Budget in May 2017 to develop a new form to collect additional information from some visa applicants, such as email addresses and social media handles.
Background This section provides information on translating research into new products or services, the federal government’s role in supporting research, and OSTP’s role in fostering collaboration among the various entities. It also provides information on the areas of quantum computing and synthetic biology. Translating Research into New Products or Services Technological innovation involves not only creating new ideas but also translating those ideas into a new product or service. Innovation, and the research driving it, is inherently risky because the likelihood that research can be translated into a product or service and the ultimate value of that product or service are unknown. Because of this risk and the long time frames sometimes associated with technology development, there can be a gap in funding and investment support that makes it challenging to translate research into commercialized products or services. While government and universities often support early-stage research and industry tends to support later stages of development, there may be a gap during the middle stages of innovation during which innovators may have difficulty finding financial support, as illustrated in figure 1 (see app. III for a printable version). The linear, or pipeline, model of innovation presents innovation as a succession of outputs that transfer to the next level as inputs. The starting point in the pipeline model is basic research. Knowledge created through basic research transitions to the next stage of applied research then to development and, finally, commercialization. Under this model, innovation takes place in distinct and sequential phases. Critics of the pipeline model have noted that innovation is actually cyclical because the development of knowledge involves feedback and interaction at these different stages of the cycle. Alternative innovation models include the following: Extended pipeline model. Under this model certain research and development organizations support the entire technology development process, from basic research to initial commercialization. Unlike the pipeline model, in which the government’s support is disconnected from the rest of the innovation ecosystem, under the extended pipeline model the government’s role is deeply connected to the rest of the system. Under this model, federal entities such as DOD support the evolution of technologies, including electronics, computing, and the internet, across all stages of innovation. Induced innovation model. Innovation that follows this model is more industry-led because the parties involved have a market niche that the research needs to meet. Research under this model is more likely to lead to incremental advances because it is conducted in response to market demand. Manufacturing-led model. Under this model, innovation is pursued with the main objective of manufacturing. This model describes innovations in production technologies, processes, and products that emerge from the manufacturing process. The production process is supplemented by applied research and development. It is typically industry-led but may have strong government support, particularly in countries such as Germany, Japan and China whose economies are organized around this model. Federal Role in Supporting Research While the different innovation models receive various levels of federal support, examining the organization of federal agencies in support of innovation is complex because of the decentralized nature of the federal research system. 
More than 25 federal agencies support intramural or extramural research, and these agencies may play different roles in supporting research that may lead to potentially transformational technologies. For example, NSF supports basic research that is in keeping with its mission of promoting the progress of science; advancing the national health, prosperity, and welfare; and securing the national defense. DOD supports research in line with its mission to provide the military forces needed to deter war and to protect the United States’ security, while DOE supports research in line with its mission to ensure America’s security and prosperity by addressing its energy, environmental, and nuclear challenges. Commerce’s National Institute of Standards and Technology (NIST) supports research in measurement science, standards, and technology, in keeping with its mission to promote innovation and industrial competitiveness. Other agencies—such as EPA, and HHS’s Food and Drug Administration—support research in their capacity as regulatory agencies. Federal support for research is not only decentralized but also changes over time. Factors such as international conflict, budgetary pressures, and globalization may contribute to shifts in U.S. science and technology policy. In times of war, federal support for research has increased in part because of the view that America’s military survival might depend on science and technology leadership. Budgetary pressures also affect the federal role in research when such pressures lead to reductions in federal funding for research. Globalization and the associated integration of the world economy may also affect federal science and technology policy. While the United States invests far more resources in research and development than any other country, its rank in research and development intensity has slowly fallen in recent years. Researchers have said that, in addition to globalization, domestic changes—such as the structure of U.S. companies—present new challenges to commercializing new products and services. For example, in the last few decades, the amount of research produced by industrial laboratories has declined. Further, U.S. companies, particularly small and midsized firms, devote fewer resources to train employees compared to firms from the 1980s. In recognition of the need for a more skilled workforce to enhance U.S. competitiveness, the federal government has increasingly shifted attention to preparing students for careers in STEM fields. The federal role also changes in response to differing policy views. One policy perspective maintains that the federal role should be to support innovation across the economy. This policy approach has underpinned innovation and economic growth since at least the end of World War II. As we reported previously, another perspective is that the federal role should be to support individual sectors. Critics of the latter perspective argue that the government should not “pick winners and losers” in commercial contexts because it is unlikely that the government will have sufficient information or foresight about an individual firm’s or a particular technology’s growth potential to select it for special subsidy. This view advocates allocating resources through market mechanisms because such mechanisms are anticipated to result in U.S. investments that are most efficient and best suited to the comparative advantages of the United States. 
However, the federal government has supported individual sectors from research and development through implementation, most often because of the government’s own needs in areas deemed important for national security (e.g., aerospace and defense). In addition, findings of economic market failures have justified other interventions, such as for research, development, and demonstrations in various sectors, including agriculture and energy, and recently, advanced production technologies. The federal government has partnered with nonfederal entities to translate research into commercialized products to foster economic growth. For example, DOD, through programs such as the Defense Advanced Research Projects Agency (DARPA), has partnered with nonfederal entities to support both early-stage research and later-stage production. Some of these partnerships have led to development of transformational technologies. For example, in the 1970s DOD supported development of a communications network to facilitate information sharing, which is considered the foundation of the modern internet. DOD also funded research in the 1950s on speech recognition and artificial intelligence that commercial companies leveraged in the 1990s and 2000s to develop technologies such as the Speech Interpretation and Recognition Interface, the iPhone assistant. NIST research, such as its critical technical evaluations of speech recognition technologies dating back to the 1980s, also contributed to the development of Speech Interpretation and Recognition Interface, according to NIST officials. Alongside DOE, HHS and NSF, DOD has funded research that led to technologies used to make the first iPod and later the iPhone (see fig. 2). Many federal agencies also support other mechanisms, such as Small Business Innovation Research and Small Business Technology Transfer grants, to stimulate innovation by facilitating interactions among the federal government, private sector, and nonprofit research institutions. Role of OSTP in Fostering Collaboration OSTP was established in 1976 to provide advice on the scientific, engineering, and technological aspects of issues that require attention at the highest levels of government. Advances in technology in areas such as quantum computing and synthetic biology have become increasingly interdisciplinary, and OSTP works with agencies across the decentralized federal research system to coordinate activities to support these advances. The National Science and Technology Council (NSTC) is a key component of these efforts and is charged with coordinating science and technology policy across the federal government. One of the NSTC’s primary objectives is to establish clear national goals for federal science and technology investments. NSTC organizes its work under six committees, such as the Committee on STEM Education, which is responsible for coordinating federal programs and activities in support of STEM education. In addition to pulling together federal entities, OSTP also plays a role in pulling together nonfederal entities to help tackle technological issues of importance to the nation. For example, the National Strategic Computing Initiative, created in 2015, is a government collaboration with industry and academia to sustain and enhance U.S. leadership in high-performance computing. 
Quantum Computing Quantum computing has the potential to revolutionize computing by introducing a fundamentally new approach to computing not available with classical computers, which constitute most computers in use today. Classical computers process two different states as 1s and 0s (binary digits) to form “bits” of information that the computer manipulates. Bits can exist in either a 1 or 0 state. These bits may be created using, for example, specific voltage or current levels in a circuit, and there is a limit as to how quickly transistors in classical computers can manipulate these bits to conduct calculations or how many circuit components can be included on a computer chip. While classical computers rely on bits, quantum computers rely on quantum bits (“qubits”). Unlike bits, qubits can be in combinations of both a 1 and a 0 at the same time due to quantum superposition. Phenomena such as quantum superposition and quantum entanglement (the ability of two particles to have correlated information, even at a distance) make quantum computers more powerful than even today’s most advanced classical supercomputers for solving some complex problems. This ability to exist in combinations of both states simultaneously allows for the efficient implementation of certain algorithms, resulting in the ability to solve certain types of problems significantly faster than classical computers. To date, a universal quantum computer is not commercially available. As of 2017, quantum computers contain at most 50 qubits and can perform some small calculations more slowly than classical computers. Among the challenges to building a quantum computer are developing software and hardware. Quantum hardware allows the computer to manipulate qubits by completely isolating quantum processors from outside forces. Quantum computing hardware is at the laboratory prototype stage and is progressing steadily, according to a 2016 federal report. Hardware development efforts include the creation of logical qubits, which use error correction techniques to actively mitigate errors, thus stabilizing the quantum state of the qubit even in the presence of external factors (i.e., noise). Quantum information is extremely fragile and requires special techniques and equipment, such as extreme refrigeration, to maintain the qubit. Other challenges include creating qubits of high quality, packaging them together in a scalable form so they can perform complex calculations in a controllable way, and limiting the errors that can result from heat and electromagnetic radiation. Addressing these challenges may require developing new materials. Stakeholders still consider developing a universal quantum computer a long-term goal. When available, these computers could provide new computational methods and powerful new tools for researchers. Quantum computing has the potential to support significant breakthroughs in medicine, manufacturing, artificial intelligence, defense, and improved cybersecurity. However, it may take a decade or more before such technology is ready to be demonstrated at scale. Synthetic Biology Synthetic biology represents an intersection of biology and engineering that focuses on the modification or creation of novel biological systems. The current state of synthetic biology is mostly the result of research in biology, engineering, computer science, and information technology dating back to the mid-1900s. Synthetic biology has drawn increasing attention as a potentially transformative platform technology. 
Synthetic Biology

Synthetic biology represents an intersection of biology and engineering that focuses on the modification or creation of novel biological systems. The current state of synthetic biology is mostly the result of research in biology, engineering, computer science, and information technology dating back to the mid-1900s. Synthetic biology has drawn increasing attention as a potentially transformative platform technology. Whether found in nature or synthesized in a test tube, the building blocks of synthetic biology are assembled to create biological systems. Synthetic biological systems can function in cell-free environments, such as cell extracts, or may be placed into living cells, such as bacteria, which serve as a "chassis." In the short term, synthetic biology is enhancing understanding of how living organisms work through progress in the ability to design and construct biological parts.

Synthetic biology is already being applied in a variety of fields. Through the creation of novel biological systems, synthetic biology offers potential solutions to many current challenges, such as climate change, energy needs, and global health. For example, synthetic biology may help address global warming through the development of artificial leaf technology, a synthetic version of the photosynthesis process. In the energy sector, synthetic biology is being used to devise more efficient methods of producing biofuels, and in the healthcare sector, synthetic biology may lead to biosensors that can permanently reside in the body to detect and treat abnormalities such as cancer. Synthetic biology has already resulted in biosensors that can detect arsenic in drinking water. Factors that may support growth in synthetic biology applications include a decline in the cost of deoxyribonucleic acid (DNA) sequencing and increases in genetically engineered crop development, expenditures in research and development by biotechnology and pharmaceutical companies, and demand for synthetic genes. On the other hand, biosafety and biosecurity concerns about the potential that synthetic biology could be used for nefarious purposes may restrict the short-term growth of synthetic biology.

Multiple Federal Agencies and Nonfederal Entities Support Quantum Computing and Synthetic Biology Research for Transformational Technological Advances

Multiple federal agencies and nonfederal entities support quantum computing and synthetic biology research that could lead to transformational technological advances in many areas of the U.S. economy, including energy, medicine, and national security. We identified 6 agencies that, from fiscal year 2016 through the second quarter of fiscal year 2018, supported quantum computing research to advance foundational understanding of quantum computing or to develop related hardware and software. We found that 4 of the 6 agencies reported a combined total of at least $23.4 million in obligations to support quantum computing research in fiscal year 2017. Similarly, we identified 10 agencies that, during the time frame we reviewed, supported synthetic biology research to advance foundational understanding of synthetic biology or knowledge of how to apply it in bioengineering, national security, and biofuels development. We found that 6 of the 10 agencies reported a combined total of at least $211.2 million in obligations to support synthetic biology research in fiscal year 2017. We also identified a variety of nonfederal entities, such as universities and private companies, that conduct research in quantum computing and synthetic biology.

Six Agencies Support Research in Quantum Computing

In fiscal year 2017, 6 agencies—DOD, DOE, ODNI, NASA, Commerce's NIST, and NSF—supported quantum computing research, and 4 of these 6 agencies reported a combined total of at least $23.4 million in obligations toward those efforts.
Agency officials, stakeholders, and experts we interviewed told us they expect quantum computers could lead to transformational advances in national security technologies or in technology areas that rely heavily on simulation, such as machine learning for defense capabilities, pharmaceuticals, and materials science for advanced manufacturing. However, there is still uncertainty surrounding the specific applications of quantum computing. These officials, stakeholders, and experts anticipate that quantum computing applications may include large number factoring, optimization of certain tasks, and simulation of other quantum systems. Accordingly, agencies' quantum computing efforts included research to advance foundational understanding of quantum information science as well as research to develop the hardware and software needed to build a universal quantum computer.

Foundational Understanding of Quantum Information Science

Joint Quantum Institute (JQI)

The JQI is a research partnership between the National Institute of Standards and Technology and the University of Maryland, with the support and participation of the Laboratory for Physical Sciences. JQI was created in 2006 to pursue theoretical and experimental studies of quantum physics in the context of information science and technology. Among other objectives, JQI conducts fundamental research on the engineering and control of systems based on quantum mechanics, which describes the behavior of matter and energy at the smallest physical scales. One attribute of quantum physics is that certain properties of a particle, such as its momentum and position, are not fixed; instead, these properties follow probability distributions that describe the likelihood that a property has a particular value. Researchers have also discovered that the quantum states of two separate objects, like two atoms, can be entangled such that the state of one object is correlated with the other. This entanglement makes it possible to move quantum information from one place to another. The phenomena that occur at the quantum scale have the potential to affect disparate economic sectors and could lead to improvements in computing and materials science, among others. For example, researchers at JQI have devised a new chip that generates and steers single photons, which could allow researchers to systematically assemble pathways for single photons and enable new types of optical devices. (Image: an illustration of a photonic chip created by JQI researchers.)

Through a program focused in part on revolutionary computing, NSF supports theoretical and experimental research on quantum-based computing paradigms, information, transmission, and manipulation. Also, the NSF Physics Division's Physics Frontiers Centers program supports university-based centers and institutes in enabling transformational advances through interdisciplinary research across different areas of focus. One of the Physics Frontiers Centers that NSF supports is located at the JQI; this center supports research on controlling and monitoring quantum phenomena to support quantum engineering. A second Physics Frontiers Center is at JILA. Both the JQI and JILA represent partnerships between NSF and NIST.
DOE's Office of Science supports foundational quantum computing research as part of its Advanced Scientific Computing Research program, which focuses on discovering, developing, and deploying computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE and the advancement of science. The program's efforts include partnering with other Office of Science program offices to support research aimed at understanding how future computing technologies, including those based on quantum information science, could affect DOE's mission.

NASA's Quantum Artificial Intelligence Laboratory—a collaborative effort with Google and the Universities Space Research Association—supports foundational research to maximize utilization of emerging quantum hardware. This work involves analytical and experimental research on the mechanisms underlying quantum computing, including, for example, researching quantum entanglement and measurement-based quantum computation. NASA also supports university-based quantum computing research through programs such as the Established Program to Stimulate Competitive Research (EPSCoR).

Hardware Development

Lincoln Laboratory's Quantum Computing Laboratory

The Massachusetts Institute of Technology's Lincoln Laboratory is a federally funded research and development center sponsored by the Department of Defense that researches and develops a broad array of advanced technologies to meet critical national security needs. In the area of quantum information science, researchers with Lincoln Laboratory's Quantum Computing Laboratory are exploring the fundamentally different ways that information can be stored and manipulated through quantum physics. Specifically, Lincoln Laboratory researchers are working to develop and scale up two systems that could comprise the quantum bits, or "qubits," of a quantum computer. In one method, called Josephson junction-based superconducting circuits, Lincoln Laboratory researchers are using cryogenic dilution refrigerators and microwave test and measurement equipment to control and measure superconducting qubits at extremely cold temperatures. In another method, researchers are using cryogenically cooled vacuum systems to house micro-fabricated chips that trap individual strontium and calcium ions, which are manipulated using lasers and other electromagnetic fields. For both methods, researchers are working to scale up systems of qubits to a size large enough to address real computational problems. (Image: laser light manipulation of trapped ion qubits at Lincoln Laboratory.)

DOD supports quantum computing hardware research as part of broader portfolios of research across the department. For example, as part of DOD's Applied Research for the Advancement of Science and Technology Priorities program, the Office of the Secretary of Defense administers the Quantum Science and Engineering Program—a cross-cutting effort that has supported research related to technologies for controlling qubit entanglement, among other things. Additionally, DOD supports a research program on Quantum System Sciences at Lincoln Laboratory, a federally funded research and development center operated by the Massachusetts Institute of Technology (MIT). This research encompasses, among other topics, development of quantum-based computation technologies.

DOE's quantum science research efforts, such as those supported by the Office of Science's Advanced Scientific Computing Research program, include quantum computing hardware and architecture.
After DOE issued its 2015 report on quantum computing for science, the agency held a February 2017 workshop to obtain information from stakeholders on the opportunities and challenges in establishing a quantum testbed to advance quantum computing hardware. Subsequently, DOE issued solicitations in 2017 and 2018 for proposals to support developing quantum testbeds. According to an April 2018 announcement for one of these solicitations, a testbed laboratory will host experimental quantum computing platforms that are not yet ready for commercialization, and will function as a collaborative facility to provide internal and external researchers with access to novel, early-stage quantum computing resources.

NIST's quantum science research efforts include projects within its Physical Measurement Laboratory that are examining a spectrum of potential quantum computing hardware approaches, such as superconducting circuits or ion trap-based quantum computing, that could provide viable ways to process and manipulate quantum information. By working across multiple approaches, NIST has been able to apply different quantum hardware platforms to address computing and metrology problems, including creating one of the most advanced ion trap-based quantum computing platforms. Furthermore, NIST is using its advanced microfabrication facilities to develop a broad array of components that will enable the scaling of different quantum computing hardware platforms.

ODNI, through the Intelligence Advanced Research Projects Activity's (IARPA) Logical Qubits Program, is supporting research to overcome the limitations of current multi-qubit systems, in which qubits are affected by other qubits, environmental factors, and other forces, which can generate errors in quantum computing operations. IARPA's Logical Qubits Program is sponsoring research teams to build qubit structures with reduced susceptibility to these types of problems and has developed a quantum system with between 10 and 20 qubits.

NSF supports research related to quantum computing hardware as part of a broader portfolio of research under its Computing and Communication Foundations Division, which supports research that explores the foundations of computing and communications devices and their usage, including advancing hardware designs for computers and computational sciences, among other focus areas. For example, under the division's Expeditions in Computing program, which provides financial assistance awards of up to $10 million over 5 years, NSF provided an award for the Enabling Practical-Scale Quantum Computation project in 2018. This project is a multi-institution, university-based effort to build a 100-qubit computer.

Software Development

Agency officials, stakeholders, and experts said one area in which a quantum computer could offer potential benefits over a classical computer is solving optimization problems. However, using a quantum computer for this or other applications requires developing software to, for example, translate algorithms into the steps to manipulate qubits to perform computing operations. Among the six agencies that support quantum computing research, examples of agencies' efforts to support research to develop software necessary to operate a quantum computer include the following:

DOD's Air Force Research Laboratory issued a multi-year funding opportunity announcement for research on Quantum Computing Sciences with a focus on quantum computing algorithmic implementation and problem solving.
Among other potential research topics, the Air Force is seeking research proposals to develop new algorithms to help solve optimization and machine learning problems.

NASA's Advanced Supercomputing Division provides funding for the Quantum Artificial Intelligence Laboratory. Through this effort, NASA hosts a 2,031-qubit D-Wave 2000Q quantum annealing device. NASA researchers are using this system to explore the potential for quantum computers to tackle optimization problems that are difficult or impossible for traditional supercomputers to handle and to explore the software algorithms that would be needed to do so.

Ten Agencies Support Research in Synthetic Biology

In fiscal year 2017, 10 agencies—DOD, DHS, DOE, EPA, HHS, ODNI, NASA, NIST, NSF, and USDA—supported synthetic biology research, and 6 of these agencies reported a combined total of at least $211.2 million in obligations toward those efforts. According to one agency official and experts, although synthetic biology has advanced significantly, foundational understanding is still needed in some key areas, including measurement and tool development. Accordingly, the synthetic biology research that federal agencies supported included research to advance foundational understanding of the science as well as research to apply synthetic biology in specific areas, such as bioengineering, genome editing, national security, and biofuels and bioproduct development.

Foundational Understanding of Synthetic Biology

Genome in a Bottle

The Genome in a Bottle consortium is one of several ongoing collaborations among the National Institute of Standards and Technology, Stanford University, and other partners in the Joint Initiative for Metrology in Biology. The initiative focuses on measurements and standards supporting the newest developments in genomics and synthetic biology. The Genome in a Bottle consortium focuses on genome sequencing, which involves determining the chemical building blocks of deoxyribonucleic acid (DNA) or ribonucleic acid (RNA) and can give insights into the genes carried by an individual and how and when they are activated. Since the Human Genome Project first sequenced a whole human genome in 2003, scientists have worked to make whole human genome sequencing faster and less expensive. The consortium aims to develop the tools needed to ensure the accuracy of human genome sequencing. These tools include reference materials, standards, and data to enable the translation of whole human genome sequencing to clinical practice. (Image: an illustration of a chromosome inside a bottle.)

NIST supports foundational synthetic biology research by developing measurement solutions, serving as a neutral ground for the discussion of underpinning measurements and other manufacturing needs, and leading and contributing to the development of standards. NIST measurement infrastructure includes the development of enabling tools, methods, and protocols; bioinformatics and modeling tools; and documentary standards and reference materials. NIST also leads several consortia to work with measurement stakeholders and partners to accelerate breakthroughs in genomics and synthetic biology. These include NIST's Genome in a Bottle consortium and the Joint Initiative for Metrology in Biology.
DOE supports foundational research related to synthetic biology as part of a broader portfolio of research under the Biological and Environmental Research (BER) Genomic Science program, which seeks to understand how genomic information is translated to functional capabilities, enabling more confident redesign of microbes and plants for sustainable biofuel production, improved carbon storage, or contaminant bioremediation. Within BER, DOE funds the Joint Genome Institute to provide high-throughput sequencing (a fast method of determining the order of bases of genetic material), synthesis, and analysis in support of BER's bioenergy and environmental missions. Research enabled through this user facility includes developing renewable and sustainable sources of biofuels from plant biomass and exploring the biological processes controlling greenhouse gas accumulation in the atmosphere.

Within HHS, multiple NIH institutes and centers support foundational research involving synthetic biology techniques, including NIH Common Fund support for research to understand and combat antibiotic resistance and National Cancer Institute support for research into new cancer immunotherapy methods. Additionally, the National Institute of Biomedical Imaging and Bioengineering has provided grants to researchers studying or using a multitude of synthetic biology techniques for applications such as improving stem cell quality for biomedicine.

NSF funds an estimated $60 million a year in foundational synthetic biology research across several directorates. For example, in 2013, NSF awarded a 5-year, $10 million Expeditions in Computing grant for a multi-university effort led by the California Institute of Technology to enable theoretical investigations in several synthetic biology-related topic areas. In 2016, NSF awarded a second 5-year, $10 million Expeditions in Computing grant for a multi-university effort led by Boston University to support synthetic biology research.

Bioengineering

Gene Editing

The National Institutes of Health (NIH) describes gene editing as a group of technologies that give scientists the ability to add, remove, or alter genetic material at particular locations in the genome. One such technology is known as CRISPR-Cas9, which is short for clustered regularly interspaced short palindromic repeats and CRISPR-associated protein 9. According to NIH, the CRISPR-Cas9 system has generated excitement in the scientific community because it is faster, cheaper, more accurate, and more efficient than other existing gene editing methods. The system was adapted from a naturally occurring gene editing system that helps bacteria defend themselves against viruses by targeting the deoxyribonucleic acid (DNA) of the virus. In the lab, CRISPR-Cas9 allows researchers to cut out a specific sequence of DNA from cells. Once researchers cut out the targeted DNA sequence, they can use other techniques to add or delete genetic material. These genetic changes can cause the edited cells to express new physical traits, such as eye color, or change their disease risk. Gene editing is being applied to research on many diseases; however, according to NIH, there are still significant technical barriers to using gene editing therapies to treat human diseases. Further, the use of gene editing raises a number of ethical concerns. (Image: an illustration of a chromosome unravelling to show the DNA that makes up individual genes.)
Development programs are developing on-demand nutrients from microbes engineered to produce targeted nutrients for human consumption, as well as examining how to manipulate certain types of bacteria to produce lightweight construction tools and materials.

EPA employs synthetic biology approaches through its Chemical Safety for Sustainability Research Program, which seeks to develop new prediction techniques, pioneer the use of innovative technologies for chemical toxicity testing, and design tools to advance the management of chemical risks. For example, researchers are developing virtual tissues by building complex computer models of biological development. According to an EPA publication, the models will help reduce dependence on animal study data and provide faster chemical risk assessments.

NSF's Science and Technology Center Program's Center for Cellular Construction seeks to develop tools to predict, design, and test the impact on cellular function of changes to cells' internal organization. The center will also develop living "bioreactors" that will generate products of commercial value. NSF has also funded research into bacterial immunity, which led to the development of clustered regularly interspaced short palindromic repeats (CRISPR)-Cas9—a technology that allows researchers to precisely edit genes.

Several NIH institutes and centers support research related to bioengineering. For example, NIH's Synthetic Biology for Engineering Applications Funding Opportunity Announcement solicits applications to support research to advance the understanding and application of synthetic biology for human health. In addition, NIH institutes and centers have supported research across various areas, including engineering synthetic receptor systems and genetic controller circuits, engineering microbes as therapeutic platforms, and developing enabling technologies for human-machine hybrid tissues.

National Security

Applications of synthetic biology may support U.S. national security efforts by aiding monitoring for biological or conventional threats and by strengthening the resilience of soldiers in combat. Among the 10 agencies that support synthetic biology research, officials from DOD, ODNI, and DHS said their agencies support synthetic biology research with potential national security applications. Examples of federal efforts in this area include the following:

DOD's Office of Naval Research funds research to extend the natural capabilities of living organisms such as microbes and plants to create systems that will provide new naval capabilities, according to the office's website. Office of Naval Research officials told us the office is funding ongoing research related to engineering gut microbes to enhance the resilience of service members to deployment stressors, among other things. In addition, DARPA's Safe Genes Project supports force protection and military health and readiness by protecting service members from accidental or intentional misuse of genome-editing technologies. For example, researchers are developing the genetic circuitry and genome-editing machinery for robust, spatial, temporal, and reversible control of genome-editing activity in living systems.

ODNI supports synthetic biology research through efforts including IARPA's Functional Genomic and Computational Assessment of Threats program, which supports research to protect against critical threats related to pathogens and other biological threats.
Researchers aim to develop better approaches and tools for characterization and analysis of biological threats based on gene function.

DHS's Biological Threat Characterization program and its Biodefense Knowledge Center program support synthetic biology research to understand the risks associated with the technologies useful for synthetic biology and the harmful pathogens that may be created by those who wish to do harm.

Biofuels and Bioproducts

Synthetic biology is being used to develop cost-effective methods for producing biofuels and bioproducts, according to agency officials, experts, and DOE's website. Among the 10 agencies that support synthetic biology research, officials from DOE and USDA said their agencies support synthetic biology research related to biofuels development applications. Examples of federal efforts in this area include the following:

DOE officials told us that the Office of Energy Efficiency and Renewable Energy's Bioenergy Technologies Office manages the Conversion Program and the Advanced Algal Systems Program, both of which employ synthetic biology techniques to accomplish office goals. Within the Conversion Program, DOE funds the Agile BioFoundry to help develop and transition synthetic biology tools from the laboratory to the biofuels and bioproducts industry. The program accomplishes this through targeted research and development partnerships with industry and academia, as well as by developing integrated synthetic biology tools designed to speed up biomanufacturing. In addition, the office funds the Advanced Algal Systems Program, which supports early-stage applied research to apply synthetic biology approaches to alternative fuels that use algae as their source, among other things. According to a DOE website, this industry has the capability of producing billions of gallons per year of renewable diesel, gasoline, and jet fuel.

USDA, through the Agricultural Research Service, led a collaborative project among federal, industry, and academic researchers to produce a commercial rubber-based tire using the guayule plant, a small shrub native to the United States that has been considered a possible alternative source of natural rubber.

Nonfederal Entities Support Research in Quantum Computing

Nonfederal research to advance quantum computing includes efforts to address existing hardware and software challenges. We identified a variety of nonfederal entities, such as universities and private companies, that have ongoing efforts aimed at building a quantum computer. Stakeholders we spoke to told us that private companies have been increasing their research in quantum computing.

Hardware Development

Academic and industry stakeholders we interviewed described various efforts to develop the hardware needed for a quantum computer. Examples of ongoing efforts include the following:

Academic researchers at Purdue University partner with Microsoft at Station Q-Purdue to perform a variety of experiments and activities related to building a semiconductor-based quantum computer, including testing different hardware designs.

Academic researchers from Yale's Quantum Institute are working to develop scalable superconducting devices.

Researchers at IonQ are working to develop general-purpose quantum information processors using a trapped-ion approach to create a quantum computer that is scalable and that could support a broad array of applications across a variety of industries.
A Google official told us that the company has been working for several years to build a quantum computer through the Quantum Artificial Intelligence Lab. In a March 2018 press release, Google announced its newest 72-qubit quantum computer, called Bristlecone.

Software Development

Academic and industry stakeholders we interviewed described ongoing efforts related to software development. Examples of ongoing efforts include the following:

An official from Microsoft said the company is working to develop quantum algorithms and software to run on a quantum computer for a given set of problems. Researchers are also currently developing an operating system and various applications that could be run on a quantum device.

An IBM official told us that, in 2016, the company launched the Quantum Experience, a quantum computing system with five superconducting qubits on the cloud, encouraging students and researchers worldwide to explore quantum computing. Over the past 2 years, the system's software has been expanded and upgraded for greater functionality and exploration of quantum algorithms, and researchers around the world have used the system to produce more than 80 research publications. MIT and many other universities now use the Quantum Experience in their curricula.

Nonfederal Entities Support Research in Synthetic Biology

We identified a variety of nonfederal entities, such as universities and private companies, that conduct research in synthetic biology to advance foundational understanding and develop new products.

Foundational Understanding of Synthetic Biology

The iGEM Foundation

The International Genetically Engineered Machine (iGEM) Foundation is an independent, nonprofit organization dedicated to the advancement of synthetic biology, education and competition, and the development of an open, collaborative community, which it fosters through cooperation and friendly competition. The main iGEM program is the iGEM competition, which began in January 2003 as an independent study course at the Massachusetts Institute of Technology in which students developed biological devices to manipulate cells. This course became a summer competition with 5 teams in 2004, grew to 13 teams in 2005, and had expanded to 310 teams by 2017, reaching more than 40 countries. The competition was originally aimed at college students but has expanded to include high school students and others. The iGEM competition gives students the opportunity to push the boundaries of synthetic biology by tackling everyday issues facing the world. Multidisciplinary teams made up primarily of university students work together to design, build, test, and measure a system of their own design using interchangeable biological parts and standard molecular biology techniques. Every year, nearly 6,000 people dedicate their summer to iGEM and then come together in the fall to present their work and compete at the annual Jamboree.

The iGEM Foundation hosts an annual worldwide synthetic biology competition in Boston, the iGEM Giant Jamboree. The competition attracts teams from around the world (primarily university students) to use standardized genetic parts to address real-world problems in fields including health, medicine, manufacturing, and bioenergy.
At MIT's Synthetic Biology Center, researchers work with federal and industry partners to advance understanding of synthetic biology for genetic programming, DNA synthesis, and genome design. Researchers at the Synthetic Biology Center seek to create a programming language for living cells that is similar to languages used to program computers and robots.

Development of New Products and Technologies

DNA Storage

To facilitate storing an ever-increasing amount of digital data, researchers from Microsoft, in collaboration with the University of Washington, are studying the use of synthetic deoxyribonucleic acid (DNA) as a means of storing data. According to a Microsoft researcher, this technology uses a process by which custom sequences of synthetic DNA are produced or manufactured to store information. The researcher described advantages of storing data in DNA as compared to the current means of storing data, generally magnetic and optical media. One advantage is density: DNA may allow for the storage of up to 1 exabyte (one quintillion bytes) of data per cubic millimeter, whereas, according to Microsoft, storing similarly large volumes of data in optical discs would occupy significant physical space. In addition, according to the researcher, DNA will remain a relevant storage mechanism, unlike other means of storing digital data (e.g., floppy discs), which become outdated as technology advances.

IBM researchers are developing biosensors that may be used for the early detection of cancer. They are also working on understanding and analyzing cardiac, neurological, and mental health conditions.

Researchers from Microsoft said the company is conducting research related to data storage using synthetic DNA as the information preservation medium. This storage technology uses a process by which custom sequences of synthetic DNA are manufactured to store information.

Ginkgo Bioworks officials said the company is focused on trying to de-risk supply chains and improve supply chain management through synthetic biology approaches. To that end, the company designs custom enzymes for a variety of customers, including companies in a wide range of industries, such as food and fragrance companies.

The Energy Biosciences Institute is a partnership among the University of California, DOE's Lawrence Berkeley National Lab, and the University of Illinois. Researchers at the Energy Biosciences Institute carry out research in the areas of biofuels, carbon sequestration, and sustainable chemicals production, among other things.
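To make the DNA data storage idea described earlier in this section more concrete, the following minimal sketch maps binary data to the four DNA bases at 2 bits per base. It is a simplified illustration, not the encoding used in the Microsoft and University of Washington research, which also involves error correction, addressing, and constraints on which base sequences can be reliably synthesized and sequenced; the function names and the test message are arbitrary choices for this sketch.

```python
# Simplified sketch: store bytes as a DNA base sequence at 2 bits per base.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Map each byte (8 bits) to 4 DNA bases."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a base sequence produced by encode()."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"synthetic biology"
strand = encode(message)           # 4 bases per input byte
assert decode(strand) == message
print(len(message), "bytes ->", len(strand), "bases:", strand)
```

Even at this idealized 2 bits per base, the density advantage cited above comes from the physical compactness of DNA molecules rather than from the encoding itself.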
Agencies Coordinate Research through a Range of Efforts, but Interagency Groups Have Not Fully Implemented Selected Leading Practices

Agency officials we interviewed said they coordinate on quantum computing and synthetic biology research through a range of efforts, but we found that certain efforts are new and that agencies have not fully implemented selected leading practices for collaboration in these efforts. Agency officials told us they use means of coordination ranging from attending ad hoc meetings, such as conferences or workshops, to participating in ongoing interagency groups, such as interagency groups on quantum information science (QIS) and synthetic biology. However, we found that the new interagency groups on QIS and synthetic biology have not fully implemented leading practices that can enhance and sustain collaborative efforts.

Agencies Coordinate on Quantum Computing and Synthetic Biology Research Using Efforts That Range from Ad Hoc Meetings to Ongoing Interagency Groups

Agency officials said that they coordinate on quantum computing and synthetic biology research by attending ad hoc meetings, as well as through ongoing efforts such as participating in interagency working groups. The means of coordinating that officials most frequently cited were participating in working groups or attending a conference or workshop. Meetings such as these bring together representatives of different agencies or departments to discuss common problems, exchange information, or develop agreements on issues of mutual interest, as we have reported in the past. Specifically:

Officials from 4 of the 6 agencies that support quantum computing research said they attended a conference or workshop related to quantum computing at some point from October 2015 through March 2018. For example, NASA and DOE officials participated in a 2017 NASA workshop that brought together experts from NASA research centers, DOE national laboratories, academia, and industry to discuss quantum information science and computation.

Officials from all 10 agencies that support synthetic biology research cited attendance at a conference, and officials from 7 of these 10 cited workshops, as a way in which they coordinated on synthetic biology research from October 2015 through March 2018. For example, officials from DOD, DOE, NIST, and national laboratories attended a 4-day conference in June 2017 to discuss synthetic biology applications in genetic engineering.

Officials from 7 of the 10 agencies that support synthetic biology research also said they coordinated research with other selected agencies through communities of practice or consortia that meet on an ad hoc basis. For example, NASA officials said they support synthetic biology work through the Space Technology Research Institute in Biomanufacturing, a University of California, Berkeley-led consortium of universities.

Officials we interviewed also said they coordinate with one another through ongoing efforts, such as interagency groups. For example, on June 21, 2018, NSTC established the Subcommittee on Quantum Information Science (QIS Subcommittee) to coordinate quantum computing research. According to its June 2018 charter, the QIS Subcommittee's purpose is to establish and maintain a national agenda in quantum information science and technology, expand U.S. economic and national security, and coordinate federal quantum information science and technology policy and programs. The QIS Subcommittee's functions include issuing and updating plans that coordinate federal policy to expand U.S. leadership in quantum information science and technology; enabling stakeholders to invest effectively in quantum information science and technology and post-quantum application spaces through data gathering, analysis, consultation, planning, convening, and reporting; and providing a forum for research and development coordination and collaboration, including sharing expertise and best practices for program management and conducting joint workshops and program reviews. The QIS Subcommittee is led by co-chairs from NIST, DOE, NSF, and OSTP and includes 9 additional agencies. The QIS Subcommittee met for the first time as an official chartered group on June 28, 2018.
The OSTP official serving as a co-chair for the QIS Subcommittee said that the group's first priority will likely be to develop a national approach to QIS research and development. Officials from 5 of the 6 agencies that support quantum computing research said that, prior to the formation of the QIS Subcommittee, they coordinated through the NSTC Interagency Working Group on Quantum Information Science (QIS working group), which was formed in 2014. In July 2016, the QIS working group produced a report, which the agency officials serving as the group's co-chairs told us included its strategic plan for federal QIS research. The July 2016 report identified QIS as a priority for federal coordination and investment as a component of U.S. scientific leadership, national security, and economic competitiveness. The QIS Subcommittee co-chair from OSTP said that the shift from a working group to a subcommittee is a significant elevation that communicates the importance of QIS to the administration.

Agencies also coordinated synthetic biology research through interagency working groups. Officials from NSF and USDA told us that, in December 2017, they formed a new synthetic biology working group that had 7 member agencies as of February 2018. These officials said that the participating agencies saw a need for continued communication and information sharing and that the group's efforts will increase coordination. Prior to the formation of this new group, 7 of the 10 agencies that support synthetic biology research participated in an NSTC Synthetic Biology Working Group that NSF officials said existed from 2012 to 2013 and was co-chaired by DOD and DOE, according to a 2013 DOE report to Congress that the group produced. According to some officials, the working group ended after it produced this report, which described synthetic biology research and development needs at the time and identified which federal agencies were planning synthetic biology research. The report also discussed the need for communication and coordination among federal agencies that support basic and applied synthetic biology research to build synergies, consider new research and development needs, and evaluate issues as they emerge. According to a senior NSF official we interviewed who was helping lead efforts to establish the new group, one of its first undertakings will be to update the 2013 report to provide a roadmap for agencies' synthetic biology research. However, the official also stated that the participating agencies were still considering the new group's activities.

Agencies Are Coordinating on Quantum Computing and Synthetic Biology through New Interagency Groups, But Have Not Fully Implemented Leading Collaboration Practices

By recently establishing the QIS Subcommittee and a synthetic biology working group, NSTC and federal agencies, respectively, took steps to further coordination on quantum computing and synthetic biology research. However, the new subcommittee and working group have not fully implemented leading practices for collaboration. We have reported that effective collaboration can help reduce or better manage fragmentation, overlap, and duplication of federal programs. As described above, a number of federal agencies support research related to quantum computing and synthetic biology.
In our April 2015 guide to evaluating and managing fragmentation, overlap, and duplication, we define fragmentation as those circumstances in which more than one federal agency, or organization within an agency, is involved in the same broad area of national need, and opportunities exist to improve service delivery. This definition applies to federal agencies' quantum computing and synthetic biology research, since more than one agency is involved in the same broad area of national need. However, as shown in our description above of the agencies' support for research in these two areas, agencies' activities sometimes differ in meaningful ways or leverage the efforts of other agencies.

We examined agencies' efforts to coordinate through interagency groups by selecting six leading practices that we have previously identified can enhance and sustain interagency collaboration:

Define and articulate a common outcome. Effective collaboration requires agencies to define and articulate common outcomes or purposes they are seeking to achieve that are consistent with their respective agencies' goals and missions.

Establish mutually reinforcing or joint strategies. Having mutually reinforcing or joint strategies enables agencies to align activities, core processes, and resources to achieve a common outcome.

Identify and address needs by leveraging resources. Agencies can sustain their collaborative efforts by identifying the human, information technology, physical, and financial resources necessary to achieve identified outcomes.

Agree on roles and responsibilities. By defining and agreeing on roles and responsibilities, including leadership, collaborating agencies can better clarify who will do what, organize their joint and individual efforts, and facilitate decision making.

Establish compatible policies, procedures, and other means to operate across agency boundaries. Agencies can facilitate collaboration by addressing the compatibility of standards, policies, procedures, and data systems that will be used in the collaborative effort.

Develop mechanisms to monitor, evaluate, and report on results. Creating the means to monitor and evaluate collaborative efforts enables agencies to identify areas for improvement.

We identified limitations in agencies' past efforts to coordinate quantum computing and synthetic biology research. In the area of quantum computing, the QIS working group—which preceded the subcommittee—took steps to implement selected leading practices for collaboration, but the group did not fully implement these practices. For example, the QIS working group's July 2016 report broadly identified quantum computing research needs but did not identify common outcomes for agencies' collaborative efforts to advance QIS, including quantum computing. The three senior officials who served as co-chairs of the QIS working group said they were not aware of any federal goals or outcomes for quantum computing research, and DOE officials said that clarifying common goals could help interagency collaboration on quantum computing research. Officials from some agencies cited challenges with collaborating on joint quantum computing projects—for instance, because of variations among agencies in time frames for providing financial assistance. OSTP officials described the establishment of the QIS Subcommittee as an effort to further previous coordination conducted through the QIS working group.
While the QIS Subcommittee has taken initial steps to implement certain leading practices for collaboration, it has not fully implemented the relevant leading collaboration practices we identified. For example, by developing a charter that identifies its high-level purpose and functions and that identifies co-chairs for the group, the QIS Subcommittee has taken initial steps to identify some agencies' roles and to establish means for operating across agency boundaries. Moreover, by having a charter signed by senior officials, the QIS Subcommittee has taken steps to document agencies' agreement to collaborate, which is a key feature of collaborative mechanisms we have identified in our prior work. However, the subcommittee has not defined roles and responsibilities for agencies other than the co-chairs. OSTP officials said that efforts to date have focused on ensuring that all relevant agencies are included in the QIS Subcommittee; the officials also said that agencies' roles and responsibilities for contributing to the subcommittee will evolve. Table 1 provides additional information on the extent to which the QIS Subcommittee has implemented leading practices for collaboration.

With regard to interagency coordination on synthetic biology research, NSF and USDA officials noted that the new synthetic biology working group hoped, through continued communication and information sharing, to address limitations in agencies' coordination that existed prior to its formation. Officials from NSF said the group was needed for communication and information sharing and to leverage resources, and DOD officials agreed that the working group was needed. Additionally, one DOD official and one expert said that limited interagency coordination had resulted in lost opportunities to further develop the area of synthetic biology. They also noted that having a national strategy for synthetic biology would be beneficial. Other officials noted that, as in the area of quantum computing, differences in funding time frames across agencies hinder their ability to coordinate their synthetic biology research. Some of these officials also said such differences make it difficult to develop an integrative roadmap for their research.

Like the QIS Subcommittee, the new synthetic biology working group has taken initial steps to implement some leading practices for interagency collaboration but has not fully implemented the relevant leading collaboration practices we have identified. For example, the group has taken initial steps to identify member agencies' roles by having NSF serve as the lead agency for the first 2 years. However, the group has not identified other member agencies' roles and responsibilities. An NSF official said the new working group had also considered developing a document, such as a charter, to guide its efforts but, as of June 2018, it had not yet decided whether to do so. Table 2 provides additional information on the extent to which the Synthetic Biology Working Group has implemented leading practices for collaboration.

As we previously reported, interagency collaborative mechanisms can take many different forms, such as working groups or subcommittees, and the leading practices we identified that help enhance and sustain interagency collaboration can be adapted to help address the specific challenges agencies face.
For example, incorporating the leading practices into agencies' collaborative efforts can help address issues associated with potential fragmentation, overlap, and duplication in instances where multiple agencies have activities in a similar area. The QIS Subcommittee and the synthetic biology working group are mechanisms through which agencies can address limitations in past interagency coordination on quantum computing and synthetic biology. However, as of July 2018, the subcommittee and working group were still new and have had limited time to fully implement the leading practices we have identified. As the subcommittee and the working group move forward, by taking steps to fully implement these leading practices, member agencies could better marshal their collective efforts to support research in the areas of quantum computing and synthetic biology and help maintain U.S. competitiveness through transformational technological advances.

Experts Identified Key Considerations for Maintaining U.S. Competitiveness through Transformational Technological Advances

Experts who participated in the meeting we convened with the assistance of the National Academies identified four key considerations for maintaining U.S. competitiveness through transformational technological advances. These considerations extend beyond quantum computing and synthetic biology, and more broadly address the role of federal and nonfederal entities in supporting research for such advances. The key considerations experts identified were (1) developing a strategic approach for transformational technology, (2) fostering information sharing, (3) focusing on technology development and commercialization, and (4) strengthening the science and technology workforce.

Developing a Strategic Approach for Transformational Technology

Experts emphasized the importance of developing a strategic approach to advancing potentially transformational technologies in order to maintain U.S. competitiveness. Experts cited examples of collaborative mechanisms that can support such an approach, including the following:

Public-private technology institutes. Each such institute has a technological focus, such as additive manufacturing, advanced flexible electronics, or regenerative medicine, and includes members such as companies, nonprofit organizations, academic institutions, and federal agencies.

Semiconductor Manufacturing Technology consortium (SEMATECH). Experts described SEMATECH, a nonprofit consortium that supported research and development on advanced semiconductor manufacturing, as a successful, industry-led, public-private collaboration that helped government and industry stakeholders take a strategic approach to challenges facing the U.S. semiconductor industry in the late 1980s. However, Commerce's NIST officials noted that after federal support ended, SEMATECH began accepting memberships from companies from competitor countries, which led to a transfer of technology through the consortium's work outside the United States.
More generally, the SEMATECH experience points to features of effective industry-led research consortia—specifically, that such consortia:

are industry-led and industry provides at least half of the annual funding, because industry can best design a research program to meet its needs;

develop a comprehensive industry assessment and prepare an operating plan that identifies realistic objectives and milestones as a basis for receiving federal funds;

include active participation by member companies' senior executives in establishing research priorities and overseeing technological progress;

have a program to improve long-term working relationships between manufacturers and key suppliers, unless inappropriate for the industry's structure;

emphasize research projects that improve an industry's overall efficiency and that have industrywide applications;

consider ways to provide access for smaller industry members that might not have the resources to participate; and

establish criteria for determining how or when government should end its funding.

Grand challenges, strategies, and roadmaps. Experts described the importance of grand challenges, strategies, and roadmaps in supporting a strategic approach to developing transformational technologies. In particular, experts described how these mechanisms help stakeholders coalesce around technology goals and organize efforts toward reaching them. Examples experts noted included the following:

Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. Experts described the BRAIN Initiative, which was launched in 2013 to build neuroscience measurement tools, as a key example of a grand challenge. The BRAIN Initiative—led by HHS (specifically NIH), NSF, and DARPA, with the participation of other federal agencies as well as foundations, universities, and industry—seeks to deepen understanding of the human mind and to improve how brain disorders are treated, prevented, and cured.

National Nanotechnology Initiative. Experts described the National Nanotechnology Initiative as a key example of a federal government strategic effort. The National Nanotechnology Initiative began in 2000 and is an interagency effort to bring together the nanotechnology-related activities of 28 federal agencies in an effort to enhance understanding and control of nanoscale materials. The National Nanotechnology Initiative maintains a strategic plan describing the initiative's vision and goals and the strategies to achieve these goals. In discussing this initiative, experts described how it could enable federal agencies to share information on their research and ensure that key research areas are advanced in pursuit of a long-term national nanotechnology strategy.

Grand challenges may be articulated through strategy documents and, according to experts, involve getting stakeholders to think about potentially transformational technologies in a future-oriented way. Roadmaps, according to experts, represent detailed plans to guide progress toward a technology goal. Federal agencies, industry, or others may lead roadmapping efforts. Additionally, one expert stated that roadmaps can help accelerate technology development. Another expert noted that for some fields, such as quantum computing and synthetic biology, a technology development strategy is needed in addition to a research and development strategy because the former outlines how a technology would move forward beyond the research and development phase. Across both of these aspects of a strategic approach, experts emphasized the importance of a sustained commitment of resources to support technology development.
One expert also emphasized the importance of setting tough performance objectives without specifying how innovators will solve a problem. Experts acknowledged that developing shared national strategies is challenging in the United States, in part because of the decentralized nature of research support across multiple federal agencies. However, experts also cited as strengths of the federal research system the ability of federal agencies to support multiple approaches to developing transformational technologies in accordance with their missions and the ability to evolve and try new approaches.

Experts identified several indicators of when developing a strategic approach might be important to support U.S. competitiveness through transformational technological advances in a particular area. Specifically:

Convergence of advances across different technology areas. Experts described how transformational technologies often emerge from the convergence of different technologies that have advanced incrementally over time. One expert noted the development of the Global Positioning System as an example of a technology that required the convergence of advances in computing power, satellite technology, geospatial imaging, and timekeeping. Because of the strength and role of the federal government in convening and fostering engagement among non-traditional collaborators on interdisciplinary issues, experts identified technology convergence as a potential indicator of the need to take a strategic approach.

Progress from discovery to real-world application. Experts described how progress from discovery in an area of science to the appearance of niche applications for a technology can be an indicator of the need to take a strategic approach. According to one expert, one challenge in technology development is how to push the technology forward as quickly as possible to develop it into something useful. Experts explained that by taking a strategic approach that extends beyond early-stage research, the federal government can support the development of potentially transformational technologies.

Existence of barriers to technology development. Experts identified several barriers to the development of transformational technologies that could indicate the need to take a strategic approach to developing a technology. Examples of barriers experts identified included high capital costs for research, prototyping, demonstration, or other aspects of a technology development life cycle; regulatory barriers; lack of consensus on standards; and technology measurement challenges, such as limitations in the availability of tools with which to measure products or processes. Experts described multiple ways in which the federal government can play an important role in addressing such barriers through helping efforts to de-risk technologies, establishing or revising regulations, supporting standards development, and developing measurement tools.

Increasing involvement across multiple stakeholders or competitors. Experts described how increasing involvement across multiple stakeholders in a particular technology area can indicate the need to take a strategic approach to developing a transformational technology. For example, when multiple federal agencies are working in a technology area or industrial participants increase involvement in a particular technology, experts said such involvement could signal that a strategic approach is needed to work across boundaries and engage the research community in a coordinated way.
Similarly, according to one expert, increasing international competition in a technology area could serve as an indicator of the need for the federal government to exercise leadership through a strategic approach to organize domestic public and private efforts in order for the United States to remain competitive.

Need for sustained, long-term investment in areas of national interest. Experts identified the need for sustained, long-term investment in areas of national interest as a potential indicator of the need for a strategic approach to transformational technologies. Experts described how the short-term cycles of many federal programs and disincentives for the private sector to sustain long-term investments can present challenges to developing transformational technologies, which one expert noted can take years or even decades to develop. Experts also cited a need for a strategic approach to advancing a technology when it has the potential to be transformational and presents enormous societal benefits.

In the areas of quantum computing and synthetic biology, experts cited a need to develop a strategic approach to maintain U.S. competitiveness. Within the area of quantum computing, experts cited all of the indicators identified above in stating that U.S. competitiveness in quantum computing could benefit from a national strategy. For example, experts described the need to foster interdisciplinary engagement across the fields of physics, engineering, and computer science to support convergence of advances in these areas to further quantum computing technology. Experts also indicated that real-world applications are beginning to become apparent in the area of quantum computing. However, they noted that significant barriers to development exist and discussed a need for sustained, long-term investment in this area, which has significant implications for national security and, according to one expert, economic competitiveness. Moreover, experts expressed concern over the significant and increasing international competition from China, the European Union, and other countries. One expert noted that given the security implications of quantum computing technology, the United States needs to find a way to counter the significant investment that China is making. Stakeholders and one agency official we interviewed cited similar concerns, such as the European Union's plans to launch a flagship initiative on quantum technology, which includes quantum computing; given these concerns, the experts said, the United States needs a national quantum computing strategy.

Similarly, with regard to synthetic biology, experts cited several of the indicators described above in stating that the United States could benefit from a strategic approach to maintain competitiveness. For example, experts discussed barriers to technology development, including a lack of measurement tools and regulatory barriers. According to one expert, before the 2017 update to the Coordinated Framework for the Regulation of Biotechnology, the system was last updated in 1992. The expert said that it was not yet clear if the updated framework would help advance synthetic biology research. Experts also noted the need to engage across multiple stakeholders in this area; in particular, one expert noted the need for leadership to advance a dialogue about how synthetic biology could help address issues of national concern. Experts described significant foreign competition in synthetic biology.
One expert said that there are more than 40 countries that have a unified strategy for synthetic biology. While one expert stated that NSF has initiated a synthetic biology roadmapping effort, a few experts stated that the United States does not have a similar unified synthetic biology strategy. One expert said that in the absence of such a strategy, the United States faces economic and physical security risks. Stakeholders we interviewed raised similar concerns.

Fostering Information Sharing

Experts also suggested considering how to foster information sharing to help maintain U.S. competitiveness through transformational technological advances. Experts discussed the role the federal government can play in bringing together stakeholders to discuss emerging technologies and collaborate on pre-competitive research. For example, according to one expert, in 2015, 2 years after the BRAIN Initiative was launched, the White House convened a meeting that brought together industry partners, academic researchers, and government scientists to share information and discuss research plans. This expert highlighted the importance of communication among representatives of organizations that would not normally work together, and how these conversations about where they saw research going over the next 5 years led to greater understanding and collaboration to support the research under this initiative. Experts identified three key reasons for sharing information to facilitate transformational technological advances in supporting U.S. competitiveness:

Convergence of different disciplines. Experts generally agreed that information sharing can facilitate an interdisciplinary approach to study a problem, which they said is important to the nation's ability to conduct research for transformational technological advances. The federal government's ability to convene groups, according to one expert, is particularly important for interdisciplinary areas of study because it can help bring stakeholders together to discuss how research could help address an area of national need. Another expert explained that agencies' research is increasingly interdisciplinary, which increases the importance of coordinating across agencies. Agency officials and stakeholders we interviewed also discussed the importance of sharing information across fields of study. One stakeholder said that without government funding for interdisciplinary efforts in quantum computing, it will be challenging to solve the problems that must be solved to make quantum computing viable, such as creating some of the computer programming needed to operate a quantum computer.

Overcoming barriers to innovation. Experts discussed how information sharing can facilitate the identification of barriers to innovation and help overcome them. For example, one expert noted the importance of information sharing in trying to address the challenges the U.S. semiconductor industry faced in the 1980s. The expert emphasized the recognition that individual companies could not address the barriers to innovation on their own and that they needed information sharing, such as cross-licensing of intellectual property and communication about roadmapping, to overcome barriers that they faced. Another expert explained that information sharing across federal agencies led to the identification of the U.S. biotechnology regulatory system as a significant barrier to innovation and that, based on this, the Coordinated Framework for the Regulation of Biotechnology was updated.
This expert further said that information sharing is the first step in coordination—by sharing information, agencies can determine where there might be overlapping research efforts or gaps in ongoing research.

Leveraging international research. Experts explained that bringing technologies to the United States that were developed elsewhere is not something that has been central to U.S. science and technology policy, but they stressed that the United States needs to consider how to take advantage of research that other countries are conducting and effectively utilize that information to maintain U.S. competitiveness. For example, one expert described the importance of the iGEM competition as an opportunity for information exchange among researchers from around the world who are working in synthetic biology-related fields. In describing this example, the expert noted that most bioengineers will not be U.S.-based and that, to remain competitive in synthetic biology, the United States needs to better understand discoveries being made by researchers from around the world.

Experts said that while information sharing is important, there are tradeoffs, particularly with regard to sharing and protecting pre-competitive intellectual property. The experts said that the benefits of sharing pre-competitive intellectual property include the opportunity to speed innovation by allowing multiple researchers to work with the intellectual property concurrently and by preventing foreign competitors from restricting use of the intellectual property through obtaining a patent. Economically valuable knowledge can spread through publicly and freely available records such as scientific publications and open source software. Such knowledge can be used repeatedly, can quickly spread to users outside the institutions where it was created, and can lead to the creation of new products. For example, one expert stated that, as of October 2017, a quantum computer we described earlier in this report had been available over the Internet for public use for about a year and had 50,000 users. Having a larger number of users working with this resource could lead to more rapid discovery of ways in which a quantum computer might be used than if it had not been shared. The expert said that because this technology exists, it should be developed as quickly as possible to determine what its first useful application will be and to find the first problem that only a quantum computer can solve. Doing so, the expert said, would create opportunities in which a U.S. company could profit from the technology while also developing it. In addition, information sharing was cited as instrumental to the success of the Human Genome Project, according to NIH officials we interviewed, because the project made the genome's sequencing available as a resource for researchers to use.

[Sidebar illustration: a deoxyribonucleic acid (DNA) strand around the outline of a person.] The Human Genome Project, which formally began in 1990, was a 13-year international collaborative research project coordinated by the Department of Energy and the National Institutes of Health. The Human Genome Project's goals were to (1) identify all the genes in human DNA, (2) determine the chemical base pair sequences of human DNA, (3) store this information in databases, (4) improve data analysis tools, (5) conduct technology transfer, and (6) address the ethical, legal, and social issues that may arise from the project. The full sequence of the human genome was completed and published in April 2003.
Through its policy of open data release, the Human Genome Project facilitated the research of others. The Human Genome Project also anticipated and promoted commercializing genomic resources and applications by establishing an infrastructure and supporting private-sector technology development. Consequently, the project led to new tools to support biological research. Further, the data and technologies generated by the project and related research present a broad array of commercial opportunities across many areas of the economy. These include more individualized diagnostics, prognostics, drugs, and other therapies as well as hardier, more nutritious, and healthier crops and animals, among other applications.

At the same time, experts said that while information sharing is important, there are risks, such as foreign commercialization of U.S. intellectual property. Experts noted that the world is increasingly competing with the United States in research for transformational technological advances. One expert cautioned that while information sharing is important for transformational technologies, it must be done carefully so that other companies do not exploit a technology and so that it is not leaked to a foreign competitor. Similarly, one stakeholder said that while information sharing is beneficial at the early stages of technology development, a balanced approach to information sharing—an approach that allows for trade secrets and that guards some research results—is needed once a technology is no longer in the early stages of development.

In light of these tradeoffs, experts emphasized the importance of ensuring that intellectual property protections support U.S. competitiveness; however, they also described challenges with how intellectual property is managed in the United States. For example, experts said it can be challenging to bring industry and academic researchers into partnerships that support transformational technological advances. Experts explained that some collaborators are willing to openly share their intellectual property, while other experts noted that some collaborators may be less inclined to do so because they view intellectual property as a profitable commodity. Additionally, one expert cited differences between potential industry and academic collaborators' knowledge of, and attention paid to, developing technologies into commercial products as a potential barrier. One expert said that foreign countries generally allow university-developed intellectual property to be owned and licensed by the inventors or third-party companies (instead of the university). This can create a foundation for a startup company or make it easier to get the interest of companies who would like to acquire a university-based technology or process. The expert noted that in one circumstance, this has given an advantage to a foreign university in recruiting top researchers, helping it to become a leader in quantum computing. However, another expert stated that most major research universities have moved to a model of developing partnerships with firms, especially startups, with minimal upfront licensing costs and shared gains over time if the project is successful—according to that expert, such universities typically share research intellectual property rights with faculty inventors.

Focusing on Technology Development and Commercialization

Focusing on technology development and commercialization is another policy consideration that experts identified for maintaining U.S.
competitiveness through transformational technological advances. According to experts, the United States’ “innovation ecosystem”—the network of public and private institutions within a country whose activities and interactions initiate, develop, commercialize, and diffuse new technology innovations—has either lost or needs better mechanisms for commercializing technologies to maintain U.S. competitiveness. To address this issue, experts discussed how the federal government could focus on technology development and commercialization by providing support across multiple stages of innovation and support for the development of tools to enhance innovation. Providing Longer-Term Assistance to Support Technology Development Experts discussed a need to improve technology development and commercialization by providing support across multiple stages of innovation. Experts described how sustained federal research investments have led to key scientific discoveries, including, for example, NIST and IARPA’s decade-long support for quantum computing research and NSF’s investment in synthetic biology. However, while experts said federal agencies’ ability to support new discoveries is a strength, they explained that the United States is losing the ability to commercialize technologies that are invented here. For example, according to one expert, while the technology might soon be available to build small (100 qubit) quantum computers, the United States does not have the necessary enterprise in place to manufacture those systems. Experts stated that it may take decades or more from the time research is funded until it is commercialized. During this intervening period, significant investment is needed to support the innovation cycle in terms of research in the design, building, and testing of new product prototypes and production processes. Experts described an increasing reliance, over time, on venture capital funding to support investments in the innovation cycle. They said that while this is generally working well in some areas such as software and biotechnology, venture capital investors have become less willing to support other technologies that require higher levels of capital investment, longer-term returns, and greater risk. For example, one expert stated that while the U.S. venture capital system spends $70 billion annually on technology commercialization activities, in 2015, the expert estimated that 5 percent of venture capital funding went to hard technologies. Multiple reports in recent years have documented the challenges associated with how the innovation cycle is supported in the United States and its implications for the domestic commercialization and production of new technologies. For example, in a 2012 report, the National Research Council stated that discoveries and inventions originating from research conducted at U.S. universities, corporations, and national laboratories no longer naturally led to products that are commercialized and manufactured within the United States. According to this report, manufacturing is important in developing new products because in many high-technology industries, design cannot easily be separated from manufacturing, and a lack of sustained investment in research and infrastructure threatens to damage the U.S. innovation ecosystem, economy, and security. To address this issue, experts discussed a need to provide longer-term federal financial assistance to better support technology development across multiple stages of innovation. 
Experts stated that federal agencies often support research on short-term funding cycles (e.g., 3 years or less) that may not be conducive to the long-term support sometimes needed to effectively de-risk potentially transformational technologies. A 2017 National Academies report cited short-term funding as one factor that has resulted in U.S. science losing its flexibility and nimbleness, elements that feed new discovery. Additionally, experts said that federal agencies' support may not extend to the later stages of technology development, but providing longer-term support for research is an important part of the federal government's role in advancing transformational technologies. For example, one expert said that long-term federal support facilitates creating a research infrastructure that can support a technology's development. Experts cited several examples of how federal agencies' programs provide different models for supporting technology development across multiple stages of innovation.

Advanced Technology Program. Experts cited NIST's Advanced Technology Program—which COMPETES 2007 repealed—as a success in terms of its efforts to support transformational research. Experts cited several aspects of the program in discussing its success, including its support for (1) research that accelerated the development of high-risk technologies with the potential for broad-based economic benefits to the nation; (2) information sharing across different sectors; and (3) active project management and workshops that taught awardees how to pitch their technology to venture capital investors, according to one expert. One expert noted that the program collaborated with NIH to develop diagnostic approaches that advanced the genomic revolution.

ARPA-E. Experts described ARPA-E—which was modeled after DARPA—as an important challenge-based federal effort to advance technologies in areas aligned with DOE's mission. Aspects of the ARPA-E model one expert cited as important to the program's ability to support transformational technological advances included, among others, support for higher-risk research and the autonomy that program directors have in seeking expert input and selecting research projects.

Manufacturing USA Institutes. One expert described the Manufacturing USA institutes as an important federal effort to support emerging technologies across multiple stages of innovation. Another expert explained that in order to continue to capture the economic benefits of the innovation system, the United States needs to embed the knowledge for technology production locally within the country. The first expert said the Manufacturing USA institutes help increase the connectivity among different actors involved with specific technology areas and improve their ability to leverage advances in those areas.

Experts also discussed how other countries' long-term funding for research efforts may help them support technology development. For example, one expert discussed Germany's Fraunhofer Institutes, where the government makes research investments over time frames of 5 or even 20 years and rewards successful projects with funding increases each year. In addition, one expert noted that other countries such as the Netherlands and Singapore also provide long-term research funding, allowing them to develop the broader research infrastructure necessary to support technology development.
In the area of quantum computing, one expert stated that the Netherlands' investment has contributed to one of the largest quantum computing-focused efforts in the world. According to one expert, if U.S. researchers do not conduct the research necessary over the long term to prove their research ideas, other countries will have the opportunity to pick up where U.S. researchers leave off and commercialize technologies based on this research.

Supporting Development of Tools to Enhance Innovation

Experts stated that tool development is critical to transformational technological advances and discussed a need for federal government support for tool development to maintain U.S. competitiveness. A tool is something—such as equipment used for a specific purpose, a modified biological system, or a computer program—that is used to perform a task or that is needed to practice a profession. According to one expert, tools are crucial supporting technologies that are necessary for the product development process. According to recent reports, research in tool development can lead to the introduction of new products, materials, or the ability to produce materials at the commercial level.

[Sidebar illustration: a bioprinted coronary artery.] 3D bioprinting is a tool that scientists are developing in the field of regenerative medicine. 3D bioprinting uses 3D printing with biological materials to create skin, bones, arteries, and a variety of other tissues and organs. For example, the Department of Defense has conducted research into using 3D bioprinting to repair skin damaged by burns—injuries that account for 10 to 30 percent of battlefield casualties. To repair burned skin, researchers have created scans of burns that a computer then uses to have a 3D printer reconstruct the burned skin. 3D bioprinting has also been used to create small blood vessel networks that contain living cells that have joined with the blood vessel networks in a mouse, allowing blood to circulate through them. Such printed blood vessels could be used to replace a damaged heart muscle. In the future, such organs could be grown using 3D bioprinting and the cells of the person who needs the organ, and they could be used in place of transplanted organs. 3D bioprinted tissues could also be used to test the safety of new drugs. 3D bioprinting is in the early stages of development.

Experts explained that the United States is at risk of losing its ability to develop tools, and they identified challenges to tool development, including the following:

Unclear needs and long time frames. According to experts, industry may be less likely to invest in tool development when tools do not support existing products, but, rather, are a part of solving technology challenges that are not clearly defined. In this context, experts explained that tool development can take a relatively long time, which may not be compatible with industry's short innovation time frames.

Potentially high or unrecoverable costs. Developing tools is expensive, according to experts, and when creating a new tool, companies have to consider whether they will be able to recover their costs. One expert described a circumstance in which a modified laser was needed to support research on a quantum system. The expert explained that a laser manufacturing company would need to change its production line in order to make the modified laser, and it would be very expensive for the company to adjust its production line to make only the modified laser.
Experts emphasized the important role federal agencies can play in helping overcome these challenges to tool development. For example, experts described the importance of federal support for developing measurement tools to accelerate and improve the learning cycles around designing, building, and testing technologies and products. Experts specifically cited NIST’s role in the development of measurement tools. For example, through the NIST-on-a-Chip program NIST is developing ultra-compact, inexpensive tools that will measure quantities such as time, distance, current and voltage, and temperature and pressure and that will allow measurement technologies to be deployed without requiring traditional measurement services. In line with NIST’s goals, the private sector will manufacture and distribute these technologies. Experts also noted the important role federal agencies play in providing access to tools, such as technology testbed facilities to support de-risking technologies through prototyping and other development activities. Strengthening the Science and Technology Workforce Experts identified strengthening the science and technology workforce as a consideration for maintaining U.S. competitiveness through transformational technological advances. According to experts, there is a need for federal agencies to work with academia and industry to improve connections between the training academia provides and what industry needs, such as interdisciplinary training. Experts further discussed the recruitment of researchers and the retention of research talent and a technically trained workforce; according to experts, attracting researchers has historically been a U.S. strength, but this ability may be at risk. Improving Connections between Academic Training and Industry Needs Experts identified the need to improve connections between academic institutions and industry so that the training academia provides corresponds to industry’s needs, particularly for interdisciplinary research fields. Without strengthening these connections, according to experts, academia may not deliver the interdisciplinary training needed for some research areas. Experts identified the systems engineering training needed to build a quantum computer as one such area of interdisciplinary training. For example, one expert said engineers are usually unfamiliar with the quantum mechanics used in a quantum computer and this is challenging since knowledge of both disciplines—quantum mechanics and engineering—is necessary to develop the technology. Also, not many quantum computing researchers are trained in the fields of computer science or engineering, according to stakeholders and agency officials we interviewed. A few experts said that because universities are not training the researchers needed in some interdisciplinary areas, there are not enough researchers in those areas available for industry to hire. Experts, other stakeholders and agency officials we interviewed, as well as some recent reports, identified several factors that may contribute to a disconnect between academic training and industry needs. For example, experts explained that universities appear to operate on the assumption that industry, not universities, must teach students the practical skills needed to be productive members of an engineering team. 
Additionally, according to a 2012 report by the National Research Council, job markets and careers for doctoral scientists and engineers have shifted since 1990 so that more than 50 percent of new doctorates work outside of academia, but there are few incentives to motivate graduate programs to align doctoral education with evolving employment activities. According to one expert, graduate education is largely supported by federally funded research awards to universities, which tend to support basic research, not applied research or development. This expert further stated that as a result, graduate students are not taught later-stage applied work relevant to industry because that has not been what federal research has historically funded. According to a different 2012 National Research Council report, cultural barriers often separate industry from academia and are reinforced by organizational incentives—universities have traditionally emphasized the need to publish research, not commercialize it. Further, one expert, a stakeholder, and an agency official we interviewed said that universities generally were not hiring faculty who focus on quantum computing as part of their computer science and engineering departments. The expert attributed this to limited funding available to support those research programs. According to this expert, the financial assistance federal research programs provide can send an important signal to universities that can lead to evolving academic programs and hiring in interdisciplinary fields. A 2016 MIT report made similar observations and said that many universities remain siloed along departmental lines and need resources and structures that allow for team teaching—two people from different research areas co-teaching a course—or research in which students from different disciplines could be paired to answer a research question. However, in synthetic biology, one expert noted that some universities have started entirely new Departments of Bioengineering because aspects of synthetic biology contribute to the development of an independent, distinctive, and complementary type of engineering. This has resulted in the development of a new curriculum that incorporates synthetic biology into the training and development of bioengineers, according to this expert.

Recruitment and Retention of Talent

Experts discussed the importance of recruiting researchers and retaining talent and a technically trained workforce. Experts stated that attracting researchers to come and stay in the United States has historically been a national strength. The Congressional Budget Office has reported that foreign-born workers contribute disproportionately to innovation. Further, according to this report, foreign-born researchers account for a disproportionate number of the scientific researchers who yield many of the big discoveries and conceptual breakthroughs that drive science. However, according to a few experts and a National Research Council report, the United States is increasingly competing with other countries to recruit and retain talented researchers. Countries such as Canada, China, and Singapore are attracting talented researchers to their universities and research institutes by offering high salaries and the opportunity to run well-funded programs, according to a National Research Council report. For example, according to a few experts, China started the Thousand Talents Program in 2008 to get talented researchers to return to China.
The Thousand Talents Program’s goal is to bring top talent trained overseas to China on a full- or part-time basis. One expert gave the example of a university president resigning from a U.S. university because he believed the possibilities for research were greater in Asia. According to one expert, the nation’s ability to recruit and retain researchers may be at risk because the United States is not working to retain and incentivize talent. According to that expert, this puts the nation at risk of missing out on the next global transformational technological advance. According to some experts, one challenge to retaining talent in the United States is that limited job opportunities are available to young researchers trained in certain areas. It is important to create conditions for young researchers to find employment in research and development, according to one expert, so that they can contribute to these areas. Creating the right incentive structure for people to produce transformational technologies in the United States is important, according to another expert, because when technologies are produced in the United States, the skills needed to produce them become embedded in that community. We have previously reported that too much location of skilled manufacturing jobs abroad can, in general, put the United States at a disadvantage in terms of its ability to design new products, according to participants in a 2013 forum on nanomanufacturing. Similarly, in a 2012 report, the National Research Council stated that manufacturing is integral to new product development, and production lines are linked to an iterative innovation chain that includes research and development, product refinement, and full-scale production. In many high-technology industries, design cannot be easily separated from manufacturing, and talent availability is the most important factor for deciding where to place a production facility. In some cases, according to this 2012 report, companies are choosing to produce abroad because of concerns related to the capacity of the U. S. supply chain, technical skills of U.S. workers, and the investment climate for high-volume manufacturing. Also according to this report, as a result of these factors, the United States is finding it increasingly difficult to capture the economic value generated by public and private investments in research and development. Conclusions Federal support for research in areas such as quantum computing and synthetic biology can help promote U.S. competitiveness in the global economy. For example, advances in quantum computing have the potential to lead to transformational advances in national security technologies or technology areas that rely heavily on simulation, such as pharmaceuticals and materials science for advanced manufacturing. Research in synthetic biology could help achieve significant advances in health care, energy, and other sectors. When agencies collaborate on their research efforts, they can produce more public value than when they act alone. Moreover, collaboration through mechanisms such as interagency groups can help address complex issues, such as those remaining to be resolved in quantum computing and synthetic biology. Collaboration can also mitigate challenges associated with fragmentation of efforts across multiple agencies, as well as potential overlap and duplication. 
NSTC and federal agencies have taken steps, building on earlier efforts, to coordinate their activities in the areas of quantum computing and synthetic biology. Specifically, both the new QIS Subcommittee and the new synthetic biology working group have taken initial steps to implement certain leading practices that can enhance and sustain collaborative efforts. For example, both have taken steps toward agreeing on roles and responsibilities. These steps could help address problems identified in previous interagency coordination efforts. However, both the subcommittee and working group are recently established and have had limited time to fully implement the leading practices that we describe in this report. As the subcommittee and working group move forward, by taking steps to fully implement these leading practices for collaboration, member agencies could better marshal their collective efforts to support research in quantum computing and synthetic biology and help maintain U.S. competitiveness through transformational technological advances.

Recommendations for Executive Action

We are making a total of five recommendations, including one to OSTP, one to Commerce, one to DOE, and two to NSF.

As the QIS Subcommittee moves forward, the Office of Science and Technology Policy co-chair, in coordination with other co-chairs and participating agency officials, should take steps to fully implement leading practices that enhance and sustain collaboration. (Recommendation 1)

As the QIS Subcommittee moves forward, the Department of Commerce co-chair, in coordination with other co-chairs and participating agency officials, should take steps to fully implement leading practices that enhance and sustain collaboration. (Recommendation 2)

As the QIS Subcommittee moves forward, the Department of Energy co-chair, in coordination with other co-chairs and participating agency officials, should take steps to fully implement leading practices that enhance and sustain collaboration. (Recommendation 3)

As the QIS Subcommittee moves forward, the National Science Foundation co-chair, in coordination with other co-chairs and participating agency officials, should take steps to fully implement leading practices that enhance and sustain collaboration. (Recommendation 4)

As the Interagency Working Group on Synthetic Biology moves forward, the Director of the National Science Foundation, in coordination with participating agency officials, should take steps to fully implement leading practices that enhance and sustain collaboration. (Recommendation 5)

Agency Comments, Third-Party Views, and Our Evaluation

We provided a draft of this product to Commerce, DOD, EPA, DOE, DHS, HHS, NASA, NSF, ODNI, OSTP, and USDA for comment. Commerce, DOE, NSF, and OSTP generally agreed with the recommendations directed to them. Commerce, DOE, and NSF provided written comments that are reproduced in appendixes IV, V, and VI, respectively. In expressing concurrence with the recommendations directed to them, these agencies' written comments discussed aspects of the interagency groups' efforts we examined in our report or the agencies' own efforts related to coordination and collaboration. OSTP's General Counsel provided OSTP's comments by email. In its comments, OSTP stated that it sees value in our recommendation and will implement the recommendation as resources allow. However, OSTP expressed concern about the impact that resource limitations could have on its ability to implement the recommendation.
We recognize that OSTP faces certain resource limitations. However, we believe that implementing our recommendation would allow leveraging of limited resources across the agencies participating in a collaborative effort. In an email from an official with the Office of the Chief Financial Officer in USDA's Agricultural Research Service, USDA provided general comments on our findings and our recommendation pertaining to the Interagency Working Group on Synthetic Biology. Specifically, USDA concurred that federal support for research and development helps drive technological advances and promotes U.S. competitiveness. USDA also agreed that the leading practices we discuss in our report can enhance and sustain interagency collaboration, and it expressed support for the implementation of these practices in the Interagency Working Group on Synthetic Biology, consistent with our recommendation. In addition, Commerce, DHS, DOE, EPA, HHS, NASA, and OSTP provided technical comments, which we incorporated as appropriate. Officials from DOD and ODNI stated via email that they had no comments on the report. We also provided a draft of this report to a participant who served as moderator in our October 2017 expert meeting on research for transformational technological advances. We requested his views on aspects of the report on which he has expertise and, in particular, the characterization of statements made by experts at our meeting. He provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Commerce, Defense, Energy, Health and Human Services, and Homeland Security; the Administrators of the Environmental Protection Agency and the National Aeronautics and Space Administration; the Directors of National Intelligence, the National Science Foundation, and the Office of Science and Technology Policy; and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII.

Appendix I: Objectives, Scope, and Methodology

The objectives of our review were to (1) describe federal agencies' and nonfederal entities' support for research for transformational technological advances in selected areas, (2) examine federal agencies' coordination on this research, and (3) provide experts' views on considerations for maintaining U.S. competitiveness through transformational technological advances. For the purposes of this report, we selected quantum computing (a sub-area of quantum information science) and synthetic biology (the intersection of biology and engineering that focuses on the modification or creation of novel biological systems) as examples of research for transformational technological advances. We selected these two areas of research because they (1) represent enabling or platform technologies, which could lead to other advances, (2) are supported by a mix of federal agencies and nonfederal entities, and (3) represent areas of congressional interest in which we have not recently conducted work.
We conducted this performance audit from November 2016 to September 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Support for Research

To describe federal agencies' and nonfederal entities' support for research for transformational technological advances in quantum computing or synthetic biology, we reviewed agency documentation, relevant literature, and our prior work related to federal research efforts. We focused on federal and nonfederal efforts in fiscal year 2016 through the second quarter of fiscal year 2018. For example, we reviewed the National Science and Technology Council's 2016 report on advancing quantum information science, which discusses the state of the research area and federal involvement. We also interviewed officials from 10 agencies and departments that have ongoing work in either quantum computing or synthetic biology, or in some instances, work in both research areas. These agencies were the Department of Commerce, Department of Defense, Environmental Protection Agency, Department of Energy, Department of Homeland Security, Department of Health and Human Services, National Aeronautics and Space Administration, National Science Foundation (NSF), Office of the Director of National Intelligence, and U.S. Department of Agriculture. We initially selected federal agencies on the basis of those that had total research and development obligations of $500 million or greater in fiscal year 2016 according to NSF's Survey of Federal Funds for Research and Development. Additionally, we included an agency which we learned, through the course of our work, had significant ongoing work in both research areas. We did not seek to develop comprehensive information on federal agencies' efforts to support research in quantum computing and synthetic biology. As a result, federal agencies could have ongoing efforts in these two areas that we do not discuss in our report.

To examine the funding federal agencies provide for quantum computing and synthetic biology research, we requested data on obligations for quantum computing and synthetic biology research for fiscal years 2016 through 2017, information on the type of research funded, and the names of individual studies or projects. We requested funding data from all agencies within our scope, but some agencies did not provide such data. We assessed the reliability of the data we obtained by checking for obvious errors in accuracy and completeness and by comparing the data with other sources of funding information, such as agency budget documents, where possible. We determined that the data were sufficiently reliable for reporting an approximate, minimum amount of federal financial assistance obligated for quantum computing and synthetic biology research.

To examine the extent to which nonfederal entities have supported research related to synthetic biology and quantum computing, we interviewed stakeholders from 21 nonfederal entities with experience in the areas of quantum computing, synthetic biology, or federal research more broadly. To collect a range of viewpoints, we selected nonfederal entities from industry, academia, nonprofit organizations, and professional associations.
The 21 nonfederal entities we interviewed included:
1. American Chemical Society
2. American Physical Society
3. Arizona State University
4. Georgia Institute of Technology
8. IBM
9. Institute of Electrical and Electronics Engineers
10. Information Technology and Innovation Foundation
12. Massachusetts Institute of Technology (MIT)
13. Materials Research Society
15. National Venture Capital Association
17. Science and Technology Policy Institute
18. University of California
19. University of Colorado

We also defined the people cited in this report in the following manner:
1. Experts: individuals who participated in our expert meeting.
2. Stakeholders: academic researchers, industry officials, and representatives of professional organizations who we interviewed. This group does not include agency officials.
3. Agency officials: federal officials we interviewed.

We identified and selected these stakeholders through a literature review and referrals. We conducted a literature review to learn about the current state of each research area as well as to identify relevant stakeholders in the areas of synthetic biology and quantum computing. We then contacted the stakeholders for interviews and asked them for additional references. We interviewed stakeholders both in person and over the phone. We did not seek to develop comprehensive information on nonfederal efforts to support research in quantum computing and synthetic biology. As a result, we acknowledge that there are nonfederal entities that may have ongoing efforts in these two areas that we do not discuss in our report.

Federal Agencies' Coordination on Research

To examine federal agencies' coordination on quantum computing and synthetic biology research, we identified coordination efforts in fiscal year 2016 through the second quarter of fiscal year 2018 through our review of agency documentation and interviews with federal officials. Additionally, we interviewed officials with the Office of Science and Technology Policy. For ongoing interagency coordination efforts, we compared agencies' efforts with selected leading practices for enhancing and sustaining collaboration. We selected six of the eight practices based on their relevance to the operations of the interagency coordination efforts we identified. In this report, and in our past work, we define collaboration broadly as any joint activity that is intended to produce more public value than could be produced when organizations act alone. Through interviews and a data request, we asked agency officials to provide information on their efforts to coordinate quantum computing and synthetic biology research from fiscal year 2016 through the second quarter of fiscal year 2018. For interagency groups related to quantum computing and synthetic biology, we obtained information through June 2018.

Experts' Views

To provide experts' views on considerations for maintaining U.S. competitiveness through transformational technological advances, we convened a meeting of 19 experts on October 12 and 13, 2017, with the assistance of the National Academies of Sciences, Engineering, and Medicine. The experts included current and former federal officials, as well as subject matter experts from industry, academia, nonprofit organizations, and professional associations.
About half of the experts were subject matter experts in the areas of quantum computing or synthetic biology, while the other half were experts with broader perspectives on the role of federal and nonfederal entities in supporting research for transformational technological advances. We worked with the National Academies staff to select experts with a range of viewpoints. Prior to the meeting, we worked with National Academies staff to help ensure balance and to assess potential conflicts of interest among the experts. For example, we asked all participating experts to provide information on (1) whether their immediate family had any investments or assets that could be affected, in a direct and predictable way, by a decision or action based on the information or opinions they would provide to GAO; (2) whether they or their spouse received any income or hold any organizational positions that could be affected, in a direct and predictable way, by the information or opinions they would provide GAO; and (3) whether there were any other circumstances, not addressed in the two previous questions, that could be reasonably viewed by others as affecting participants’ point of view on the topics to be discussed. We received signed responses from all participating experts. Three of the 19 experts reported potential conflicts. We evaluated their statements and determined that they did not have any inappropriate biases when taken in the context of the overall group of experts taking part in the meeting. As a result of these efforts, we determined that the group of 19 experts, overall, was balanced and had no inappropriate biases. However, the views of these experts cannot be generalized to everyone with expertise on research for transformational technological advances; they represent only the views of the experts who participated in our meeting. We list the experts who participated in our meeting in Appendix II. We divided the 2-day expert meeting into 8 sessions focused on a range of topics, such as the role of federal and nonfederal entities in keeping the United States competitive. Each session featured an opening presentation by two selected experts, followed by open discussion among all meeting participants. At the end of each session, one expert was tasked with highlighting the key themes discussed during that session. We then solicited feedback from the experts to determine whether there were any additional comments they wanted to add to those themes. We recorded and transcribed the meeting to ensure that we accurately captured the experts’ statements. We analyzed the information gathered from the experts by reviewing and conducting a content analysis of the transcript and identifying considerations for maintaining U.S. competitiveness based on categorizing the experts’ comments. For purposes of quantifying expert remarks, we refer to a statement from an individual expert as being from one expert, and unless there is significant disagreement in the transcript, we refer to statements from two or more experts as being from experts. In cases of significant disagreement in the transcript, we refer to statements from two to three experts as being from a few experts, and statements from four to six experts as being from some experts. 
Before publication and consistent with our quality assurance framework, we provided the experts with a draft of our report and asked them to provide their views on whether our overall characterization of the meeting generally reflected the considerations discussed during the meeting. Of the 18 experts who responded to our request for review, 13 experts agreed that our overall characterization generally reflected the key considerations identified during the meeting, one partially agreed, and one differed with our report's presentation of specific issues regarding synthetic biology. We incorporated feedback experts provided on the draft, as appropriate. To corroborate statements made by the experts on particular topics, as appropriate, we identified and analyzed studies and reports by agencies, the National Academies, and others that were recommended to us by experts. In addition, we compared the experts' statements to other information provided by agency officials and stakeholders we interviewed.

Appendix II: Participants in GAO's Meeting on Research for Transformational Technological Advances

[Table listing the meeting participants and their affiliations; affiliations included Ceres Nanosciences, Inc.]

Appendix III: Funding/Investment Gap in the Manufacturing-Innovation Process (Corresponds to fig. 1)

Figure 3 shows the potential gap during the middle stages of innovation, in which innovators may have difficulty finding financial support. The figure includes a static display of the rollover information included in figure 1, which is interactive.

[Figure 3: Funding/Investment Gap in the Manufacturing-Innovation Process (Corresponds to fig. 1)]

Appendix IV: Comments from the Department of Commerce

Appendix V: Comments from the Department of Energy

Appendix VI: Comments from the National Science Foundation

Appendix VII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, the following individuals made contributions to this report: Christopher Murray (Assistant Director), Angela Miles (Analyst-in-Charge), Justin Fisher, Scott Fletcher, Ashley Grant, Charlotte E. Hinkle, Gwen Kirby, Patricia Moye, Cynthia Norris, Emily Pinto, Tind Shepper Ryen, McKenna Storey, and Walter Vance.
Why GAO Did This Study Scientific and technological innovation contributes to U.S. economic competitiveness and prosperity. Federal agencies support transformational technological advances—those that result in new or significantly enhanced technologies—by, for example, funding research (nearly $70 billion in obligations in fiscal year 2017). GAO was asked to examine support for research that could lead to transformational technological advances. This report (1) describes federal agencies' and nonfederal entities' support for such research in selected areas, (2) examines federal agencies' coordination on this research, and (3) describes experts' views on considerations for maintaining U.S. competitiveness through such advances. GAO selected quantum computing and synthetic biology as examples of research areas that could lead to transformational technological advances. GAO reviewed agency documents and interviewed federal officials, subject matter experts, and stakeholders. GAO also worked with the National Academies of Sciences, Engineering, and Medicine to convene a meeting to solicit views from 19 experts selected from government, academia, and industry, among others. What GAO Found Multiple federal and nonfederal entities support research for transformational technological advances in the areas of quantum computing—the manipulation of bits of data using the behavior of individual atoms, molecules, or other quantum systems to potentially outperform supercomputers—and synthetic biology—the combination of biology and engineering to create or modify biological systems. GAO found that at least 6 agencies support quantum computing research; at least 10 agencies support synthetic biology research; and nonfederal entities, such as universities and businesses, support research in both areas. Agency officials said they coordinate on quantum computing and synthetic biology through efforts such as conferences and interagency groups, but GAO found that certain new efforts have not fully implemented selected leading collaboration practices. The quantum computing group, co-chaired by officials from 4 agencies, and the synthetic biology group, led by the National Science Foundation, have taken initial steps to implement some leading practices GAO identified that can enhance and sustain interagency collaboration. For example, both groups agreed to coordinate their research, and participating agencies documented agreement with the quantum computing group's purpose through a charter. However, the groups have not fully implemented other practices, such as agreeing on roles and responsibilities and identifying common outcomes, that could help ensure they effectively marshal agencies' efforts to maintain U.S. competitiveness in quantum computing and synthetic biology. Experts identified considerations for maintaining U.S. competitiveness through transformational technological advances. The considerations broadly address federal and nonfederal entities' roles in supporting such advances and include: developing a strategic approach using consortia or other mechanisms to bring together potential partners; fostering an environment in which information is shared among researchers while also considering the risks of information sharing; focusing on technology development and commercialization, for example, by providing support across multiple stages of technology innovation; and strengthening the science and technology workforce through training, recruiting, and retaining talent. 
What GAO Recommends

GAO recommends that the agencies leading the interagency quantum computing and synthetic biology groups take steps to fully implement leading collaboration practices. The agencies agreed with GAO's recommendations.
Background

The Rehabilitation Act of 1973 (Rehabilitation Act), as amended by WIOA, authorizes a number of grant programs to support employment and independent living for persons with disabilities, including the State Vocational Rehabilitation Services program. This program is the primary federal government effort to help individuals with disabilities prepare for and obtain employment. An individual who is deemed eligible works with state VR agency staff to prepare an individualized plan for employment, which describes the employment goal and the specific services needed to achieve that goal.

Education's Rehabilitation Services Administration (RSA) awards funds to state VR agencies through the program to help individuals with disabilities engage in gainful employment. States must provide a 21.3 percent nonfederal match of these funds. In fiscal year 2016, total program funds for VR—including state match funds—were $3.81 billion. States, territories, and the District of Columbia generally designate a single agency to administer the program, although, depending on state law, states may designate more than one agency. Twenty-three states have two separate agencies, one that exclusively serves blind and visually impaired individuals (known as agencies for the blind) and another that serves individuals who are not blind or visually impaired (known as general agencies). Twenty-seven states, the District of Columbia, and the five territories have a single combined agency that serves both blind and visually impaired individuals and individuals with other types of impairments (known as combined agencies). In total, there are 79 state VR agencies.

Pre-employment Transition Services

In 2014, WIOA amended the Rehabilitation Act to require state VR agencies to provide students with disabilities with pre-employment transition services. According to information Education provided with its regulations, WIOA emphasized the provision of services to students with disabilities to ensure that they have meaningful opportunities to receive training and other supports and services they need to achieve employment outcomes. WIOA requires states to make pre-employment transition services available statewide to all students with disabilities in need of such services, who are eligible or potentially eligible, regardless of whether a student has submitted an application for services from a state VR agency. In this context, students with disabilities include those with an individualized education program (IEP) for special education services through the school system, those receiving an accommodation for their disability, and others. In information provided with the regulations, Education stated that state VR agencies should work closely with school systems and others to identify these students.

WIOA requires each state to reserve at least 15 percent of a state's VR allotment for a fiscal year for pre-employment transition services for students with disabilities. If a state cannot use or match all of its VR funding, it relinquishes funds to the federal government and the state's total award amount is then reduced. However, the state must still reserve 15 percent of what it did not relinquish for the provision of pre-employment transition services. WIOA established required activities under pre-employment transition services that states must make available to students with disabilities. Education has provided states with additional information about each of the activities (see table 1).
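The reserve calculation described above can be illustrated with a minimal sketch in Python; the dollar amounts and the function name are hypothetical and are not drawn from Education's grant data.

    RESERVE_RATE = 0.15  # WIOA floor: at least 15 percent of the VR allotment

    def minimum_reserve(allotment, relinquished=0.0):
        # Apply the 15 percent floor to the portion of the allotment the
        # state did not relinquish, per the rule described above.
        retained = allotment - relinquished
        return RESERVE_RATE * retained

    # Hypothetical example: a state allotted $100 million that relinquishes
    # $20 million must still reserve 15 percent of the remaining $80 million.
    print(minimum_reserve(100_000_000, 20_000_000))  # 12000000.0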
If a state has funding remaining after making the required pre-employment transition services available, WIOA lists nine other "authorized" activities that the state may implement. For example, in providing the authorized activities, states may, among other things, provide training to local VR and educational service providers; coordinate transition services with local educational agencies; and disseminate information about innovative, effective, and efficient approaches to achieve the goals (see appendix II for a full listing of authorized activities). Education's guidance indicates that such authorized activities should improve the transition of students with disabilities from school to postsecondary education or an employment outcome and support the arrangement or provision of the required activities.

WIOA also requires local offices of state VR agencies to conduct coordination responsibilities, which include coordinating with state and local educational agencies to ensure the provision of pre-employment transition services. These can be conducted concurrently with the "required" activities, and states can use the reserved funds for them. Examples of coordination responsibilities that local offices of state VR agencies must undertake are attending meetings, when invited, about IEPs; and working with the local public workforce system and employers to develop work opportunities for students with disabilities.

In support of this coordination and in recognition that VR and educational agencies both offer transition services to students, WIOA requires that VR agencies establish or update their interagency agreements with state educational agencies. Interagency agreements between the state VR and educational agencies are intended to describe the steps each agency will take to implement pre-employment transition services and determine the roles and responsibilities of each agency, including financial responsibilities and procedures for identifying students in need of pre-employment transition services.

Federal Guidance, Assistance, and Monitoring

Following the passage of WIOA, Education, through its Rehabilitation Services Administration (RSA), issued regulations and guidance to implement pre-employment transition services requirements (see fig. 1). Education also provided technical assistance to state VR agencies through webinars, conference calls, and presentations at conferences. For example, Education presented information to state officials in a series of webinars about the new programmatic and financial processes and procedures related to pre-employment transition services just after the final regulations were issued in 2016. In addition, Education funded technical assistance centers to help state VR agencies and their partners answer questions and provide training about WIOA. Two of these centers are the Workforce Innovation Technical Assistance Center (WINTAC) and the National Technical Assistance Center on Transition (NTACT). Each center focuses its efforts on a specific set of issues: WINTAC on helping state VR agencies implement WIOA requirements, including pre-employment transition services; and NTACT on helping state VR and educational agencies improve outcomes for students receiving transition services. RSA is to conduct periodic monitoring visits to assess state VR agencies' implementation of the VR program, including pre-employment transition services.
RSA is to monitor states for compliance with the administrative, financial, and performance requirements of the program, as well as identify technical assistance needs at individual state VR agencies. According to Education officials, RSA plans to follow a 5-year monitoring cycle that began in fiscal year 2017 and will generally include monitoring visits to 10 states per year through fiscal year 2021. In fiscal year 2017, Education visited 14 VR agencies in 10 states, and in fiscal year 2018, Education plans to visit 15 VR agencies in 12 states.

Most States Reported Expanding Their Transition Services to Students and Developing Their Administrative Capacity to Provide These Services

Most State Vocational Rehabilitation Agencies Reported Expanding Their Services for Students with Disabilities

Most state VR agencies that responded to our survey reported expanding services for students with disabilities since WIOA's enactment in July 2014 by either serving more students through pre-employment transition services or by initiating new or additional services. Most state VR agencies that responded to our survey reported that they provided the five required activities to more students with disabilities since WIOA's enactment (see fig. 2). State VR agencies indicated in their survey responses that they had previously provided and continue to provide transition services to students who apply and are eligible for the VR program, and many of the activities were not entirely new to state VR agencies. Most agencies that responded to our survey reported providing each of the required activities to students with disabilities before the enactment of WIOA, while fewer reported initiating these services since enactment (see fig. 3).

Of the five required activities, instruction in self-advocacy saw the biggest expansion during this time. In information provided with the regulations, Education described instruction in self-advocacy as, for example, classroom lessons in which students learn about their rights, responsibilities, and how to request accommodations or services and supports needed during transition. In written comments on our survey, 10 state VR agencies reported partnering with other organizations, such as universities or centers for independent living, to provide instruction in self-advocacy. One agency reported on our survey that it offers peer mentoring as an additional component of self-advocacy services, and another reported providing self-advocacy and mentoring for deaf-blind students by deaf-blind adults. In October 2016, based on views of an expert panel that we convened on autism spectrum disorders and transitioning youth, we reported that it is critically important that all transitioning youth, regardless of their level of disability, be given the opportunity to state their own preferences to the extent of their capabilities to reach their maximum independence.

State VR agencies reported developing additional programming as a result of WIOA's enactment, including expanding programs for more students, adding new opportunities and experiences, and creating new partnerships. Officials from all four of the state VR agencies we interviewed said they had programs in place prior to WIOA that offered activities similar to pre-employment transition services, but they have since expanded these services or created additional programs for students with disabilities.
For example, an official we interviewed from the Idaho Division of Vocational Rehabilitation said the agency had previously worked to enroll students in the VR program prior to graduation, but has since begun developing new programming and instruction aimed at serving larger groups and providing other services, such as a paid work experience. An official from Maryland’s Division of Rehabilitation Services said many of the services they previously offered were during school hours, and students had limited access to these services if they wanted to stay in class. The agency has since added services after school and during the summer, such as opportunities for students to meet with employers, according to Maryland officials. Officials from the Illinois Department of Human Services, Division of Rehabilitation Services, said that while the agency had previously provided work-based learning experiences, it has since expanded the number of spots available for students in an existing program and created a new work-based learning program that is a collaboration between school districts, a community rehabilitation partner, and businesses. Providing new services with specific requirements to an expanded population has been a significant change, according to officials in one of the state VR agencies we interviewed and in all three of our discussion groups. For example, officials from Maryland’s Division of Rehabilitation Services said that, while they provided all five required activities before WIOA, they now provide the activities to a younger population and make the activities available statewide. State VR officials in all three of our discussion groups said that providing pre-employment transition services allows them to provide these services to more students with disabilities or at an earlier age, which will likely have positive effects on students’ transition from school to work. For example, officials in one discussion group noted that the provision of pre-employment transition services is increasing awareness, enhancing services, and increasing the likelihood that VR program outcomes will improve. In another discussion group, officials said their agencies had already seen benefits from pre- employment transition services and the services have raised students’ expectations for the types of jobs they might obtain. While 32 of the state VR agencies responding to our survey reported that they had identified all potentially eligible students, another 37 reported that they were currently in the process of identifying these students. State VR officials in all three of our discussion groups and who we interviewed in two of four state VR agencies said they have had challenges finding the population eligible for services. In written comments on our survey, one agency reported that while statewide information on students was not readily available, officials worked with the state educational agency to identify potentially eligible students, including more than 137,000 students with an IEP and an estimated 13,000 additional students that do not have an IEP. We previously reported on the difficulties state VR officials faced in obtaining data they could use to identify other youth with disabilities. Compared to combined and general agencies, more agencies for the blind reported in our survey that they did not provide the five required activities to more students with disabilities, and officials in some of these agencies said they can serve a much smaller population. 
For example, 57 percent (12 of 21) of agencies for the blind reported providing job exploration counseling to more students, compared to 83 percent (25 of 30) of combined agencies and 91 percent (20 of 22) of general agencies since WIOA enactment. Similarly, 67 percent (14 of 21) of agencies for the blind reported providing work-based learning experiences to more students, compared to 83 percent (25 of 30) for combined agencies and 86 percent (19 of 22) for general agencies. Officials in some of these agencies for the blind and from the National Council of State Agencies for the Blind (NCSAB) told us in interviews that agencies for the blind have far fewer potentially eligible students they could serve compared to other types of agencies. For example, officials we interviewed with Idaho's Commission for the Blind and Visually Impaired said that Idaho has only 40 students being provided pre-employment transition services. In contrast, the Idaho Division of Vocational Rehabilitation reportedly provided at least one pre-employment transition service to approximately 700 students in a one-year period.

The ability of agencies for the blind to serve more students may also be restricted because they are not able to provide pre-employment transition services to younger students in some cases, according to officials with NCSAB and Idaho's Commission for the Blind and Visually Impaired. NCSAB officials told us that state VR agencies have traditionally provided VR services to youth who are blind or visually impaired at younger ages compared to general agencies that serve youth with other types of disabilities. The ages at which students may be provided pre-employment transition services varied by agency, based on responses to our survey, but the most common age range reported across all types of agencies was 14 to 21 years old. According to Education officials, as a result of WIOA, two agencies in the same state must agree on a common age range during which students can be provided pre-employment transition services. Most agencies in states with two VR agencies responding to our survey (35 of 44) reported agreeing on an age range for receiving pre-employment transition services. NCSAB officials said that in some cases agencies for the blind have had to raise the minimum age at which they would begin providing services to students. Officials with Idaho's Commission for the Blind and Visually Impaired, for example, said they would prefer to begin services at younger ages because their agency has the resources to do so. However, officials with Idaho's Division of Vocational Rehabilitation said they do not have the resources to provide pre-employment transition services to the relatively large number of students with disabilities at a younger age.

Most State Vocational Rehabilitation Agencies Reported Building Administrative Capacity

State VR agencies reported taking a range of actions to build their administrative capacity to implement pre-employment transition services. These actions included building staff capacity and expanding contracts with service providers.

Building staff capacity. Most state VR agencies reported building staff capacity to facilitate and carry out the requirements of pre-employment transition services by:

Establishing a new specialist position. More than half (45 of 74) of VR agencies reported in our survey establishing at least one new transition specialist position specifically for pre-employment transition services.
For example, the Idaho Division of Vocational Rehabilitation reported establishing this position, and officials told us that they hired a specialist who was previously the transition coordinator for the state's educational agency. In written comments on our survey, a respondent from another state commented that their agency has hired 20 pre-employment transition services specialists to provide the five required activities. Officials we interviewed from Maryland's Division of Rehabilitation Services said they added six salaried positions dedicated to providing pre-employment transition services. Another agency responding to our survey reported dedicating a supervisor and 15 percent of their counselors exclusively to this purpose.

Training staff. All 74 state VR agencies reported providing training on pre-employment transition services to their staff. For example, in written comments, one agency reported developing training tools for its counselors, such as answers to frequently asked questions, posting guidance on its intranet, and having WINTAC provide training.

Expanding contracts and agreements with service providers. In addition to being provided by state VR agency staff, pre-employment transition services can be offered through a variety of methods and service providers, and many state VR agencies reported entering into new or additional contracts with service providers or expanding contracts with existing providers. Pre-employment transition services can be provided directly by state VR agency staff or through agreements with third parties, such as community rehabilitation programs, independent living agencies, public colleges and universities, and school districts. In our survey, 62 of 74 agencies reported entering into new or additional contracts with third-party providers to provide pre-employment transition services. Officials we interviewed from three of four state VR agencies said they either established or expanded existing contracts and agreements. For example, officials from the Illinois Department of Human Services, Division of Rehabilitation Services, told us that after the enactment of WIOA, they expanded arrangements with independent living centers and initiated a new program that provides students with work experiences. Agencies reported several examples of approaches using third parties: establishing contracts with community rehabilitation programs; partnering with independent living agencies to work with youth; entering into provider agreements with local workforce centers to assist with providing job preparation and a paid work experience; developing programs with public colleges and universities focused on financial literacy and self-advocacy; and contracting with individual school districts to deliver services in the school environment.

States Reported Challenges Using Reserved Funds, Updating Interagency Agreements, Among Others, and Reported Needing More Assistance from Education

Fewer Than Half of States Reported Using All Reserved Funds, and Some Reported a Need for More Information on Allowable Costs

Twenty-one of 56 states (50 states, 5 territories, and the District of Columbia) reported using the full amount of grant funds they reserved for pre-employment transition services for students with disabilities for fiscal year 2016, according to the most recent full year of data available from Education (see fig. 4).
In aggregate, states reportedly expended approximately $357 million out of the approximately $465 million reserved (about $108 million less than the target) for fiscal year 2016. For fiscal year 2015, states reportedly expended approximately $324 million on pre-employment transition services out of the approximately $453 million reserved for that purpose (about $130 million less than the target). Results from our 2017 survey of state VR agencies revealed similar trends: Fewer than half the 74 agencies reported that they used at least 15 percent of their VR grant allotment each year. Thirty-two of the 74 agencies responding to our survey reported using the minimum required 15 percent of federal VR grant funds reserved for the provision of pre-employment transition services for fiscal years 2016 and 2017. For fiscal year 2015, 25 agencies reported using the required 15 percent minimum reserved funds.

Officials we interviewed in two of four state VR agencies and officials in all three discussion groups explained that some of the services they generally provided to participants in the VR program are not allowable for the funds reserved for pre-employment transition services. These expenditures included transportation, tuition, and others associated with individualized services. For example, officials in Maryland's Division of Rehabilitation Services told us that transportation costs for students to get to the place where the services are provided are not covered. VR agency officials in two of our discussion groups told us that assistive technology, such as hearing aids, could not be paid for with the 15 percent of funds reserved for pre-employment transition services. In another group, participants said that some expenditures, such as tuition or for the services of a job coach to help students with the most significant disabilities, could not be paid with reserved funds. In information provided with the regulations, Education stated that it does not have the statutory authority to allow these expenditures to be paid for with the funds reserved for pre-employment transition services and these services must be paid with other VR funds.

When it promulgated its final regulations, Education noted that state VR agencies would experience challenges in using their funds because many of the services provided to students with disabilities prior to WIOA's enactment would not qualify as pre-employment transition services. Education reviewed past expenditures for a subset of students and estimated that 82 percent of state VR agencies' reported purchases for those students would not meet the statutory definition of pre-employment transition services under WIOA. Education concluded that states would have to reach a larger number of students with disabilities in order to meet the spending requirement and that state VR agencies would need to develop and implement aggressive strategies to expend these funds in these initial years of implementation.

According to WINTAC officials, state VR agency officials are commonly unclear about what kinds of activities they can provide using the funds reserved for pre-employment transition services. For instance, they said that states must make required activities (e.g., work-based learning experiences and self-advocacy) available to all students with disabilities before providing authorized activities (e.g., model projects, partnerships), in accordance with WIOA.
However, state officials have commonly interpreted that to mean that all students must actually receive the required activities before the agency can begin providing other activities, according to WINTAC officials. WINTAC officials explained that states may have been conducting authorized or coordination activities without knowing these activities could be paid for with the reserved funds. None of the state VR agency officials we interviewed said they had yet moved beyond providing required activities to providing authorized activities. Officials from two of the agencies we interviewed told us they were in the process of planning authorized activities. For example, officials with the Idaho Division of Vocational Rehabilitation said they were completing an assessment of their needs, which would help them plan authorized activities. Officials we interviewed from the other two agencies—the Idaho Commission for the Blind and Visually Impaired and the Illinois Division of Rehabilitation Services—said they did not have the resources to provide authorized activities or were unsure about how to properly transition from required to authorized activities under the current guidance.

Education communicated with states on broad requirements but provided little detailed information directly to states on the allowable use of funds reserved for pre-employment transition services. Education provided information when it promulgated final regulations, in grant award notifications, on its website, and in presentations at conferences. In each of these formats, Education described activities on which states could not spend funds, but provided little detailed information on what expenditures are allowed.

Regulations: Education's final regulations restate many provisions in WIOA, including the prohibition on using any of the reserved funds for administrative costs. In responding to comments it received on its proposed regulations, Education provided examples of services that commenters requested would be considered pre-employment transition services, such as postsecondary education, on-the-job supports, job coaching, travel expenses, and uniforms. In information provided with the regulations, Education explained that it had no statutory authority to expand or limit the pre-employment transition services listed in WIOA. Education stated that a state VR agency can allocate costs associated with staff time spent providing pre-employment transition services, including an employee's salary and fringe benefits, to the funds reserved for pre-employment transition services. However, Education did not provide additional information on what specific types of expenditures states were permitted to spend funds on in providing pre-employment transition services as required by WIOA.

Grant award notification: The notification that accompanies each state's VR grant award lists the three sets of activities for which the reserved money can be used: required, authorized, and coordination; it lists each of the activities as they are listed in WIOA. It also discusses the prohibition on using any of the reserved funds for administrative costs. It does not list or describe what specific expenditures the reserved funds can be used for to undertake each of the listed activities.

Education's website: A list of frequently asked questions on Education's website outlines the requirements of WIOA and explains that the reserved funds must only be used to provide pre-employment transition services as listed in WIOA.
Similar to the regulations, the website explains that the total costs of an employee's salary and fringe benefits may be allocated to the reserved funds if that employee is providing only pre-employment transition services to students with disabilities but does not include additional detail for any other expenditures.

Presentation materials: In one set of presentation materials, Education provided an example of a potential allowable expenditure for one of the required activities, work-based learning. It did not include information on allowable expenditures for the other four required activities, or any of the nine authorized activities. In another set of presentation materials, Education provided examples of services for each of the five required pre-employment transition services activities. According to Education officials, these examples would be allowable expenditures.

Education, however, provided the most detailed information through WINTAC. WINTAC's website provided answers to some specific questions on the use of funds reserved for pre-employment transition services. In one set of frequently asked questions, WINTAC included a list of 28 questions with detailed answers, including what specific expenditures may be charged to the reserved funds. For example, based upon guidance issued by RSA, the website explains that reserved funds may be used to pay for auxiliary aids and services, such as interpreters, if they are directly related to one of the five required pre-employment transition services activities. However, the reserved funds may not be used to pay for the costs of foreign language interpreters because they are not an auxiliary aid or service that is required due to the individual's disability. WINTAC's website also included answers provided by Education on 13 other frequently asked questions. This information included, for example, that reserved funds cannot be used to pay for the cost of an assessment to determine whether a student meets the definition of a student with a disability, but can be used to pay for items required by an employer for work-based learning activities.

Some state VR agencies we surveyed and those that participated in one of our three discussion groups said they would like more detailed information directly from Education. Seven survey respondents reported that they would like Education to provide answers to their specific questions. In one discussion group, participants noted that when states approach WINTAC with a new question, the technical assistance center sometimes needs to obtain the answer from Education. This process can be inefficient at times. In addition, the answer may not be broadly shared with all the states, limiting its benefit, whereas information issued directly from Education could help communicate the answer more efficiently and broadly. One survey respondent reported, for example, that guidance varied by source—training, Education's RSA staff, or technical assistance center websites—and said that Education should provide all state VR agencies with the same information at the same time.

According to standards for federal internal control, management should communicate externally through reporting lines so that external parties can help the entity achieve its objectives and address related risks. Management should also periodically evaluate its methods of communication so it has the appropriate tools to communicate quality information throughout and outside of the entity on a timely basis.
Education officials said that during fiscal years 2015 and 2016, states were unclear about allowable expenditures using reserved funds, and that they plan to clarify guidance as they learn about the issues from states during their monitoring. According to Education officials, they respond to issues that need clarification and provide answers to questions as part of formal monitoring visits or through other communications with state agencies. Education officials said they expect to complete a round of monitoring visits to all states by the end of fiscal year 2021. However, an Education official said they have no timeframe for providing further information on allowable costs to states. With better information on timeframes for when this information will be provided, states would be able to better plan their use of the remaining funds reserved for pre-employment transition services.

State VR and Educational Agencies Have Begun Collaborating through Joint Training and Guidance, But Fewer Than Half Have Updated Their Interagency Agreements

Most state VR agencies (61 of 74) that responded to our survey reported providing training on pre-employment transition services along with their state's educational agency since WIOA's enactment in 2014. Joint training may help coordination between state VR and educational agencies, as state VR officials participating in our discussion groups said that some educators were not familiar with pre-employment transition services. Similarly, an official we interviewed from Idaho's state educational agency said it was common in the past for teachers and VR counselors not to know one another. Joint trainings provided to VR staff and teachers have improved these relationships, and teachers can invite VR counselors to students' IEP meetings, the official said.

Joint training includes staff presentations at conferences and participation in other training sessions. For example, officials we interviewed from the Idaho Division of Vocational Rehabilitation said their transition coordinator has given presentations to education directors around the state about changes resulting from WIOA and how the inclusion of pre-employment transition services can affect special education for the school districts. In written comments on our survey, one agency reported that it co-sponsors an annual conference with VR, special education, developmental services, and other public and private entities. During this conference, they plan how to improve services for students with disabilities.

About one-third of state VR agencies (23 of 74) reported issuing joint guidance with their state's educational agency, a recommended practice according to WINTAC. The other two-thirds of survey respondents reported that joint guidance was either in progress (27 of 74) or that they had not issued such guidance (23 of 74). Joint guidance can include written policies and procedures that are created by and provided to state VR and educational agency staff. For example, in written comments on our survey, one agency reported developing written policies and guidance for transition counselors that the state educational agency endorses and provides to special education staff. In Maryland, VR and special education officials told us that they issued guidance through jointly created materials on pre-employment transition services.
Less than half the state VR agencies that responded to our survey (34 of 74) reported updating their interagency agreement with their state's educational agency, which is intended to facilitate collaboration and coordination on delivery of pre-employment transition services. The majority of agencies reported that their agreement is either in progress (37 of 74) or not yet updated (3 of 74). These required agreements outline how VR agencies and schools will plan and coordinate service provision, provide for each agency's responsibilities, including financial responsibilities, and provide for student outreach procedures, among other things.

Discussion group participants and CSAVR representatives emphasized the value of completing their interagency agreements with the state educational agency. In one group, officials whose agencies had completed their agreements said they are essential for state VR agencies to provide services in schools. Participants in another discussion group explained that once they have a state-level agreement in place, they can discuss what services school districts need for students and then determine how to provide those services. According to state educational agency officials we interviewed, Individuals with Disabilities Education Act (IDEA) requirements are similar to requirements for pre-employment transition services, and they need to coordinate with VR officials at both the state and local levels to agree on each agency's assigned tasks and expectations. These officials said state VR and educational agencies should coordinate funding to make services available where they are needed and to complement each other's transition efforts. Illinois's agreement, for example, specifies that the state educational agency is responsible for providing outreach, guidance, and coordination to local educational agencies regarding the provision of pre-employment transition services. According to the agreement, Illinois's VR agency is responsible for providing pre-employment transition services, both directly and through cooperative agreements with local educational agencies, and for providing written information to the state educational agency regarding services available to students with disabilities. Officials we interviewed with CSAVR said state VR agencies that have made progress in developing their interagency agreements with state educational agencies tend to be more successful in implementing pre-employment transition services.

According to Education officials, Education provides guidance and technical assistance on interagency agreements to states as part of Education's monitoring or when asked by states. Education officials said they provide technical assistance during periodic monitoring visits, which are currently limited to about 10 states per year from fiscal years 2017-2021; by helping state VR agencies develop policies and procedures; and by making sure pre-employment transition services are coordinated with the state educational agency and through interagency agreements. According to Education officials, there is no statutory provision authorizing Education to identify states that have not updated their interagency agreement. Education officials said they do not collect information from state VR agencies on the status of these agreements except when they conduct monitoring visits in specific states.
In addition, Education officials said that when monitoring, they may meet with state educational agency partners to help them understand the new components of pre-employment transition services in an agreement, or they may refer the state agencies to WINTAC or NTACT resources. Providing assistance during monitoring may be helpful for some states. However, given that less than half of state VR agencies we surveyed reported updating and finalizing their agreements and Education officials say they will take another three years to complete this round of monitoring, additional action by Education may be needed to raise awareness among the remaining states about the importance of these agreements to help states coordinate services to students with disabilities. Additional action could include, for example, conducting earlier state outreach or monitoring to assess state progress on finalizing the interagency agreements and offering technical assistance when appropriate.

However, Education officials said there is no requirement that state educational agencies provide pre-employment transition services to meet their obligations to IDEA-eligible students with disabilities under part B of the Individuals with Disabilities Education Act. As a result, WINTAC and participants in our discussion groups explained that it can be difficult to get state educational agencies to work with state VR agencies to update interagency agreements. Education officials said that WIOA requires state VR agencies to update these interagency agreements to include pre-employment transition services but they do not track their completion because states are not required to report when the agreements are finalized. Moreover, Education officials said they have heard from states that some reasons that interagency agreements are not specifically updated are that the agreements are written broadly enough so that they can remain in effect when there are additional changes to the law, the details of actual practices are rarely reflected in the high level of an interagency agreement, and that modifications to agreements are time-consuming and would not result in changes to the interagency coordination practice.

Without an updated agreement between the state VR and educational agencies, efforts to collaborate on pre-employment transition services may be hindered. Officials with the National Technical Assistance Center on Transition (NTACT) told us that some of the state agencies for which they provided in-depth technical assistance were not working closely together. Officials from two of the three state educational agencies we interviewed said they viewed pre-employment transition services as primarily the responsibility of the VR agency. State VR officials in all three of our discussion groups said they have experienced coordination challenges, including difficulty determining each agency's responsibilities for providing pre-employment transition services, obtaining data needed to identify and provide services to students, and determining which agency will pay for which services, among other challenges. Interagency agreements can help to address these types of issues. Federal internal controls recommend that management communicate with and obtain information to identify, analyze, and respond to risks related to achieving defined objectives, such as those that can arise from new laws and regulations.
Moreover, we found in prior work that it is important to establish ways to operate across agency boundaries, with measures such as developing common terminology and fostering open lines of communication. A lack of collaboration between state VR and educational agencies increases the risk that some students will not successfully transition from school to post-school activities. In addition, our prior work has identified lack of collaboration among and between federal agencies and state and local governments as a challenge to effective grant implementation. Interagency agreements are intended to serve as a mechanism related to collaboration practices, which include defining a common outcome, establishing joint strategies, and agreeing on roles and responsibilities of each agency. By taking additional steps, such as discussing the benefits of finalizing interagency agreements, and reminding states of existing technical assistance resources pertaining to updating and finalizing interagency agreements, Education would help raise awareness about the importance of the agreements and be better positioned to help states efficiently and effectively coordinate services to students with disabilities.

States Reported That Best Practices Would Be Useful to Them in Implementing Services and Could Help Them Address Challenges

Most state VR agencies (63 of 74) that responded to our survey reported that additional assistance with identifying best practices would be useful to their agencies. Similarly, state VR officials in all three of our discussion groups spoke to the need for Education to develop and disseminate best practices to help states, for example, comply with program requirements. WIOA requires Education to highlight best state practices on pre-employment transition services. Best practices may also help states address the challenges they reported facing in implementing and administering pre-employment transition services for students with disabilities, such as (1) coordinating with state educational agencies, (2) using VR resources more efficiently and effectively to help states balance providing pre-employment transition services with the full VR program, (3) collecting data on services provided, and (4) updating data tracking systems.

Coordinating service delivery with state educational agencies. Over half (41 of 74) of state VR agencies reported in our survey that additional assistance on coordinating with state educational agencies would be useful for them. Similarly, officials from all three state educational agencies we interviewed said they would like additional assistance on interagency collaboration. Officials with NTACT told us that some of the state agencies for which they provided in-depth technical assistance were not working closely together. Officials from two of the three state educational agencies we interviewed said they viewed pre-employment transition services as primarily the responsibility of the VR agency. State VR officials in all three of our discussion groups said they have experienced coordination challenges, including difficulty determining each agency's responsibilities for providing pre-employment transition services, obtaining data needed to identify and provide services to students, and determining which agency will pay for which services, among other challenges. An official we interviewed from the Idaho Department of Education said it would be helpful to have more clearly defined roles, obligations, and means of sharing data between the state-level agencies.
In written responses to our survey, one respondent said having examples of highly successful collaborations between a state educational agency and state VR agencies would be helpful. According to Education’s guidance, a student’s transition from school to post-school activities is a shared responsibility and coordination and collaboration between the state VR and educational agencies is essential. However, according to information Education provided with the regulations, while some have sought clarification and additional guidance in this area, Education determined that decisions on agencies’ responsibilities must be made at the state level to allow states maximum flexibility allowed under the law. In the absence of more specific guidelines for how state agencies should collaborate, best practices from other states could provide helpful examples. Balancing pre-employment transition services with VR services. Several state VR agencies in both our written survey responses and in discussion groups noted that by increasing services mandated for pre-employment transition services for students, they have had to reduce VR services to adults, which has made it difficult to balance the two programs. In issuing its final regulations, Education acknowledged that reserving funds would decrease amounts available for the full VR program, resulting in a transfer of benefits from individuals historically served by VR to students with disabilities in need of transition services. According to state VR directors with the National Council of State Agencies for the Blind (NCSAB), agencies for the blind have had to restrict VR services while also not being able to use all of the funds reserved for pre-employment transition services for students with disabilities because VR services cannot be paid with reserved funds. Most state VR agencies that completed our survey (50 of 74) reported that balancing pre-employment transition services with other vocational rehabilitation services was moderately difficult, very difficult, or extremely difficult during federal fiscal year 2017. Collecting data. Data collection was one of the top challenges identified by state VR agencies in our survey, with 48 of 74 reporting that collecting data on the provision of pre-employment transition services was moderately difficult, very difficult, or extremely difficult during fiscal year 2017. Prior to WIOA, agencies collected and reported data only on individuals who had applied and enrolled in the VR program. For pre-employment transition services, agencies now collect data on who provided and received each of the five required activities, including for individuals who have not submitted a VR application. State VR officials in two of our three discussion groups said that they have experienced challenges collecting sensitive information (such as social security numbers) for minors and collecting data on individuals for group services. Officials in one of the three discussion groups also said that these problems are particularly significant when trying to collect information on potentially eligible students for whom they do not have open VR cases. These students could include all those with an IEP and those that receive accommodations in school based on their disability, among others. Updating data tracking systems. Updating data systems was also one of the top challenges reported in our survey, and was cited as an additional administrative burden by state VR officials in our discussion groups. 
Specifically, 53 of 74 state VR agencies reported that it was moderately difficult, very difficult, or extremely difficult to update tracking systems to collect and report financial and service data on pre-employment transition services during fiscal year 2017. According to a state VR agency official we interviewed, updating that state's tracking system is difficult because data collected on pre-employment transition services—such as the type of service provider and how the service was provided—do not fit well into a case management system designed for the full VR program. Updating these tracking systems also created an additional administrative burden for VR agency staff, according to officials from all three discussion groups and three of the four state agencies that we interviewed. Officials in two of our three discussion groups said that they have one or more full-time staff members doing only administrative tasks or that they have had to hire additional staff to handle data tracking.

Education officials said that they plan to document and share best practices with states; however, they said the agency does not have a final written plan for managing these efforts because plans are still under discussion in light of inquiries received. Education officials said they are collecting information on state VR agencies' practices through monitoring and they are sharing this information with WINTAC—information that could be useful for sharing best practices across states—but a comprehensive summary of states' efforts will not be available until after Education officials conduct monitoring visits of all states by the end of fiscal year 2021. In addition, in a 2015 technical assistance circular, Education recommended that state VR agencies consult with other federal, state, and local agencies to identify best practices for providing pre-employment transition services to students and youth with a variety of disabilities. Education officials also said that they are looking for opportunities, such as webinars and conferences, to share information with states. However, Education does not have set timeframes and has not detailed the specific steps and activities for fully leveraging knowledge to address common challenges, or for finalizing and disseminating best practices. By doing so, Education would be better positioned to provide best practices information to state VR agencies to better serve students with disabilities who are transitioning from high school.

Conclusions

Pre-employment transition services are designed to help students with disabilities begin to identify career interests and move from high school to post-secondary education or employment. Using federal funding, state VR agencies reported that they have generally enhanced their services and staff capacity and begun to coordinate with state educational agencies. As a result, state VR agencies generally reported serving an increased number of students. However, most states reported they have not used all the funds reserved for pre-employment transition services or updated interagency agreements between state VR and educational agencies. Education has developed multiple forms of guidance and made presentations, either directly or through its technical assistance centers. Education officials said they plan to issue additional guidance as needed.
However, without clear timeframes for the issuance of this guidance, states do not know when information will become available to help them make decisions on allowable expenditures for pre-employment transition services. As a result, opportunities may be missed to identify and serve all students who might be eligible, and unserved students could continue to face difficulties preparing for a future of meaningful post-secondary education or employment. In addition, agreements between state VR and educational agencies can help facilitate the effective coordination of and financial responsibility for services. Finally, WIOA requires Education to highlight best state practices for implementing pre-employment transition services. Developing a written plan with specific timeframes would help Education provide states with information on best practices, such as balancing service delivery between pre-employment transition services and other VR services and collecting data that other states may have successfully addressed.

Recommendations for Executive Action

We are making the following three recommendations to Education:

The Secretary of Education should establish timeframes for providing states with additional information on allowable expenditures of funds reserved for pre-employment transition services. (Recommendation 1)

The Secretary of Education should take additional steps to provide states assistance on updating and finalizing their interagency agreements with state educational agencies to include pre-employment transition services. These steps could include, for example, accelerating their efforts to discuss the benefits of finalizing interagency agreements, and reminding states of existing technical assistance resources pertaining to updating and finalizing interagency agreements. (Recommendation 2)

The Secretary of Education should develop a written plan with specific timeframes and activities for identifying and disseminating best practices that address, as appropriate, implementation challenges for pre-employment transition services, such as those identified in this report. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of this report to Education for review and comment. Education's written comments are reproduced in appendix III. Education also provided technical comments, which we incorporated into our report where appropriate. Education concurred with recommendation 1 and disagreed with recommendations 2 and 3 in the draft report. With regard to recommendation 1, Education stated that it agreed and will establish projected timeframes for providing states with additional information on allowable expenditures for the provision of pre-employment transition services. Education also stated that it intends to provide states with additional information in at least two forums before the end of calendar year 2018 and to review and analyze previous guidance provided to states on allowable expenditures. With regard to the draft report's recommendation 2, which called for Education to identify states that have not updated and finalized their interagency agreements to include pre-employment transition services, Education stated that it disagreed, in large part, because there is no statutory provision authorizing the agency to identify such states.
However, Education is taking some steps as part of its ongoing monitoring of the VR program to provide assistance to states that have not updated their interagency agreements, which is consistent with the intention of our recommendation, but more could be done. Education stated that it will continue to offer and provide technical assistance if it becomes known through the onsite monitoring of the VR program or through other means that states have not updated their interagency agreements between VR agencies and state educational agencies. It also noted that the Rehabilitation Services Administration (RSA) and its Office of Special Education Programs will provide information related to sources of technical assistance, as appropriate, to VR agencies and state educational agencies. While these steps may be helpful, given the number of states that have not updated and finalized their agreements and the length of time Education officials say they will take to complete this round of monitoring where Education asks state VR agencies about these agreements, additional action by Education may be needed to help states more efficiently and effectively coordinate services to students with disabilities. Education also wrote that while the Rehabilitation Act requires an interagency agreement, the Individuals with Disabilities Education Act does not contain a parallel requirement for state and local educational agencies with respect to the provision of pre-employment transition services or the incorporation of such discussion into the interagency agreement. In light of these differing requirements, as we state in our report, stakeholders with whom we spoke indicated it can be difficult to get state educational agencies to work with state VR agencies to update interagency agreements. Therefore, it is all the more important for Education to take additional action to engage with VR agencies regarding interagency agreements and to work closely with VR agencies as Education becomes aware of states that have not updated their agreements. Education suggested a modified recommendation that removed reference to Education identifying states that have not updated and finalized their agreements. We modified the recommendation and the report to address Education’s concerns about its authority to identify states. By taking additional steps, such as discussing the benefits of finalizing interagency agreements, and reminding states of existing technical assistance resources pertaining to updating and finalizing interagency agreements, Education would help raise awareness about the importance of the interagency agreements and be better positioned to help states efficiently and effectively coordinate services to students with disabilities. With regard to recommendation 3, Education stated that it disagreed because it is premature to develop a timeline for the dissemination of best practices. Education stated that the identification of “best” practices, meaning those that are clearly supported by a body of evidence derived from valid and reliable research findings, is still emerging as states implement the requirements. Education suggested a modified recommendation that included planning for the dissemination of best practices identified by states as they become available. 
Education stated in its comments that as RSA identifies best practices through its monitoring and technical assistance activities, it will, in collaboration with its Office of Special Education Programs, consider when and how best to disseminate this information to state VR and educational agencies. With regard to including specific timeframes and activities in a written plan, by detailing the specific steps Education is taking and plans to take along with the amount of time it expects them to take, Education would be better positioned to complete those steps in a timely manner and meet the statutory requirement that Education highlight best state practices and support state agencies. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Secretary of Education. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Elizabeth H. Curda at (202) 512-7215 or curdae@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology The objectives of this report are to examine (1) the steps states have reported taking to implement pre-employment transition services, and (2) the implementation challenges, if any, states reported facing, and how the Department of Education (Education) has addressed them. To address these objectives, we reviewed federal laws and regulations, and Education’s guidance and technical assistance documents, including circulars, policy directives, and transition guides. We also reviewed expenditure data reported by state vocational rehabilitation (VR) agencies to Education for fiscal years 2015 and 2016, the most recent full years of data available. To assess the reliability of the data, we interviewed Education officials about their collection of the data and their opinion of the data’s quality, completeness, and accuracy. We also electronically tested the data for any obvious errors. We determined that the data were reliable for the purposes of our review. We interviewed representatives from the Council of State Administrators of Vocational Rehabilitation (CSAVR) and the National Council of State Agencies for the Blind. In addition, we interviewed officials from Education’s Office of Special Education and Rehabilitation Services, Rehabilitation Services Administration, Office of Special Education Programs, and the Workforce Innovation Technical Assistance Center and National Technical Assistance Center on Transition—two technical assistance centers funded by Education. Survey of State Vocational Rehabilitation (VR) Agencies To address both of the objectives, we conducted a survey of all 79 state VR agencies from October through December 2017. Seventy-four of the 79 agencies (94 percent) responded. The survey questionnaire included open-ended and closed-ended questions about agencies’ efforts to train staff, update interagency agreements, expand services to students with disabilities, and other issues. We took steps to minimize the potential errors that may be introduced by the practical difficulties of conducting any survey. 
Because we selected the entire population of VR agencies for our survey, our estimates are not subject to sampling error. We conducted pretests of the draft questionnaire with three agencies in the population and made revisions to reduce the possibility of measurement error from differences in how questions were interpreted and the sources of information available to respondents. We reviewed state officials’ submitted survey responses and conducted follow-up, as necessary, to determine that their responses were complete, reasonable, and sufficiently reliable for the purposes of this report. A second independent analyst checked the accuracy of all computer analyses we performed to minimize the likelihood of errors in data processing. We made multiple follow-up attempts during the survey with agencies that had not yet responded. The five agencies that did not respond had smaller values, on average, than responding agencies on three characteristics related to size. The totals for each of these three characteristics across the five nonresponding agencies comprised less than 1 percent of the population totals, suggesting a low risk of material nonresponse error in our results. Interviews and Discussion Groups with State VR Agencies For more in-depth information on both of the objectives, we conducted interviews and held discussion groups. We conducted interviews with officials in Idaho, Illinois, and Maryland. For each state, we interviewed state VR officials and state educational agency officials. We selected these states for variety using the following criteria: size of the special education population (large, medium, and small); state agency organization, for example, whether the VR agency was organized under the state’s educational or other department; and whether the state had a second agency for serving individuals who are blind or visually impaired. We convened three discussion groups with state VR agency directors or their designated officials, with a total of 39 participants from 29 separate agencies (10 to 12 agencies represented per discussion group). These discussion groups took place during a conference of state VR directors in November 2017 in Greenville, South Carolina. To select participants, we worked with the conference organizer, CSAVR, to send invitations for our discussion groups to all conference attendees. We additionally included a question in our survey asking respondents whether they would like to participate in discussion groups at the conference, and contacted those who responded affirmatively via phone and email. We moderated each discussion to keep participants focused on the specified issues within discussion timeframes. Criteria Applied To assess Education’s efforts to address state VR agencies’ challenges in providing pre-employment transition services, we applied standards for internal control in the federal government. Specifically, we applied principle 15 related to communicating with external parties. In addition, regarding Education’s assistance to state VR agencies’ efforts to update interagency agreements with state educational agencies, we also applied key considerations for implementing interagency collaborative mechanisms that we have previously identified. We conducted this performance audit from February 2017 to September 2018 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: WIOA Authorized Activities for Pre-employment Transition Services Appendix III: Comments from the United States Department of Education Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Elizabeth H. Curda, (202) 512-7215 or curdae@gao.gov. Staff Acknowledgments In addition to the contact named above, Sara Schibanoff Kelly (Assistant Director), Paul Schearf (Analyst-In-Charge), Matthew Rabe, and Paul Wright made key contributions to this report. Also contributing to this report were James Bennett, Kristy Kennedy, Sheila R. McCoy, Thomas James, Jessica Orr, Sam Portnow, Carl Ramirez, Monica Savoy, Kate Van Gelder, Adam Wendel, and James Whitcomb. Related GAO Products Workforce Innovation and Opportunity Act: States and Local Areas Report Progress in Meeting Youth Program Requirements. GAO-18-475. Washington, D.C.: June 15, 2018. Supplemental Security Income: SSA Could Strengthen Its Efforts to Encourage Employment for Transition-Age Youth. GAO-17-485. Washington, D.C.: May 17, 2017. Youth with Autism: Federal Agencies Should Take Additional Action to Support Transition-Age Youth. GAO-17-352. Washington, D.C.: May 4, 2017. Youth with Autism: Roundtable Views of Services Needed During the Transition into Adulthood. GAO-17-109. Washington, D.C.: October 18, 2016. Students with Disabilities: Better Federal Coordination Could Lessen Challenges in the Transition from High School. GAO-12-594. Washington, D.C.: July 12, 2012.
Why GAO Did This Study WIOA requires states to reserve at least 15 percent of their total State Vocational Rehabilitation Services program funds to provide pre-employment transition services to help students with disabilities transition from school to work. GAO was asked to review how states were implementing these services. This report examines (1) steps states reported taking to implement pre-employment transition services, and (2) implementation challenges states reported and how Education has addressed them. GAO reviewed documents and funding data from Education, and federal laws and regulations; surveyed all 79 state VR agencies (74 responded); held discussion groups with representatives of 29 state VR agencies; and interviewed officials from Education and three states (Idaho, Illinois, and Maryland) GAO selected for variety in size and type of agencies, among other factors. What GAO Found Of the 74 state vocational rehabilitation (VR) agencies that responded to GAO's survey, most reported expanding services to help students with disabilities transition from school to work as required under the Workforce Innovation and Opportunity Act (WIOA), enacted in July 2014. Most state agencies reported serving more students and providing work-based learning experiences and other activities, referred to as pre-employment transition services (see figure). State VR agencies reported two key challenges with implementing pre-employment transition services for students as required by WIOA. Spending reserved funds : States reported spending about $357 million out of the $465 million reserved for these services in fiscal year 2016. Education officials said that states had difficulty determining what expenditures were allowable, and some state officials said they would like more detailed information from Education. Education officials said they plan to clarify guidance but have no timeframe for providing further information, which would help states to better plan their use of reserved funds. Finalizing interagency agreements : Fewer than half the state VR agencies that responded to GAO's survey (34 of 74) reported updating their interagency agreement with their state's educational agency. Interagency agreements can help promote collaboration by, for example, establishing roles and responsibilities of each agency. Although Education offers technical assistance on interagency agreements, without increased efforts to raise awareness about the importance of these agreements and provide assistance to states where needed, Education may miss opportunities to help state VR and educational agencies efficiently and effectively coordinate services. In addition, WIOA requires Education to highlight best state practices, and most VR agencies responding to GAO's survey (63 of 74) reported this would be useful. Education does not have a written plan or timeframe for identifying and disseminating best practices. As a result, Education may miss opportunities to help more students with disabilities successfully transition from school to work. What GAO Recommends GAO is recommending that Education (1) establish timeframes for providing additional information on allowable expenditures, (2) take additional steps to assist states that have not updated and finalized their interagency agreements, and (3) develop a written plan with specific timeframes and activities for identifying and disseminating best practices. Education agreed with the first recommendation and disagreed with the other two. 
GAO revised the second recommendation and maintains that specific information is needed for the third, as discussed in the report.
Background Disability Compensation Claims Process VA pays monthly disability compensation to veterans with service-connected disabilities (i.e., injuries or diseases incurred or aggravated while on active military duty) according to the severity of the disability. VBA’s Compensation Service sets policy and oversees the delivery of disability compensation. VBA’s Office of Performance Analysis and Integrity analyzes performance information related to claims. VBA’s Office of Field Operations provides operational oversight to district and regional offices. The 57 regional offices are grouped into five district offices, which manage the regional offices in their areas. VBA staff in the Veterans Service Centers of the regional offices process disability compensation claims. These claims processors include Veterans Service Representatives who gather evidence needed to determine entitlement and review the amount of the award and authorize payment, if any, and Rating Veterans Service Representatives who decide entitlement and the rating percentage. Veterans may claim more than one medical condition, and VBA assigns a rating percentage for each claimed medical condition, as well as for the claim overall. As shown in figure 1, after a veteran submits a claim to VBA, a Veterans Service Representative reviews the claim and helps the veteran gather the relevant evidence needed to evaluate the claim. Such evidence includes the veteran’s military service records, medical examinations, and treatment records from Veterans Health Administration medical facilities and private medical service providers. Also, if necessary to provide support to substantiate the claim, VA will provide a medical examination for the veteran. Once VBA has gathered the supporting evidence, a Rating Veterans Service Representative—who typically has more experience at VBA than a Veterans Service Representative—evaluates the claim and determines whether the veteran is eligible for benefits and, if so, assigns a percentage rating. A Veterans Service Representative then determines the amount of the award, if any, and drafts a decision notice. A senior Veterans Service Representative then authorizes the award and releases the decision notice to the veteran following a review of both for accuracy. National Work Queue In May 2016, VBA completed implementation of the National Work Queue—an electronic workload management initiative that prioritizes and distributes claims across regional offices. Previously, a veteran’s claim was generally processed from start to finish (i.e., awarding of benefits or notification of denial) by the veteran’s local regional office of jurisdiction, and the regional office’s workload generally depended on how many claims were filed by veterans within its area of jurisdiction. Now, a claim can be processed by multiple regional offices, and claims are distributed based on regional office capacity (see fig. 2). National Trends in Disability Compensation Claims Processing VBA establishes national targets and tracks performance for disability compensation claims processing. Since fiscal year 2014, national claims processing timeliness has improved substantially, and accuracy scores have decreased slightly, as shown in table 1. VBA’s 12-month issue-based accuracy target for fiscal year 2017 was 96 percent and its target for fiscal year 2018 was the same. From fiscal year 2014 to 2017, VBA’s national accuracy estimate decreased from about 96 percent to about 94 percent.
In addition, VBA’s target for backlog claims—defined by VBA as those pending for more than 125 days—for fiscal year 2017 was no more than 15 percent of claims inventory and its target for fiscal year 2018 was no more than 21 percent of claims. In fiscal year 2017, VBA’s reported percentage of backlog claims was 23 percent, with a reduction from 240,443 to 70,965 total reported backlog claims from fiscal years 2014 to 2017. Regional Office Performance Measures for Disability Compensation Claims Processing VBA’s Office of Performance Analysis and Integrity collects a variety of data on timeliness and accuracy, including on VBA’s claims backlog, so that VBA can monitor regional office performance. To improve timeliness and accuracy, and reduce the claims backlog, VBA sets performance standards for the directors of regional offices. In fiscal year 2018, regional office performance was assessed using two primary metrics—timeliness (Time-in-Queue) and accuracy (12-month issue-based accuracy). Since 1999, VBA has assessed the accuracy of disability compensation claims decisions at the national and regional office level using its Systematic Technical Accuracy Review (STAR). With this tool, VBA reviews a stratified random sample of completed claims, and certified reviewers use a checklist to assess specific aspects of each claim. Veterans Service Organizations and Congressional Caseworkers According to VA, as of October 2017, 31 congressionally chartered VSOs were recognized by VA under federal statute to help veterans navigate the claims process. VSOs commonly are private nonprofit groups that advocate without fees on behalf of veterans. VSOs employ individuals, called veterans service officers, whose offices often are located at a VBA regional office. Through a power of attorney, VSOs can represent veterans before VA, and assist them and their families with disability compensation claims, among other things. VSO staff are trained to help veterans understand and apply for any VA benefits to which they may be entitled, including disability compensation. In addition to helping veterans submit claims to VBA, VSOs are allowed to communicate with VBA on behalf of the veteran throughout the life of the claim, and are given up to 48 hours to review the claim decision before it is finalized (after the Rating phase in figure 1 above). VSOs can have access to VBA’s electronic claims management system to view claims status and submit claims documents. According to a Congressional Research Service report, as of March 2016, 919 congressional caseworkers were working for constituents on a variety of issue areas, including veterans’ disability compensation claims. Also according to the report, congressional caseworkers cannot legally represent veterans, but with a privacy release form from the veteran, VBA may respond to a congressional inquiry. According to VA officials, congressional caseworkers can then obtain certain claim-related information from VA, such as the status of the veteran’s claim. VA’s guidance on “special controlled correspondence” governs VBA’s communication with congressional caseworkers, including required time frames for responding to congressional inquiries. Congressional caseworkers generally work out of Congressional Members’ state and district offices. 
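For illustration only, the backlog measure described in this Background section (claims pending for more than 125 days, expressed as a share of the claims inventory) can be written as a short calculation. The sketch below is not VBA's actual computation; the claim records and field name are hypothetical, and only the 125-day threshold is taken from this report.

# Hedged sketch: the backlog rate as defined in this report, i.e., the share of
# the pending claims inventory that has been open for more than 125 days.
def backlog_rate(days_pending):
    """days_pending: list of ages (in days) of all pending claims in the inventory."""
    if not days_pending:
        return 0.0
    backlog = sum(1 for days in days_pending if days > 125)
    return backlog / len(days_pending)

# Example: five pending claims, two of which exceed the 125-day threshold.
print(backlog_rate([30, 90, 126, 200, 45]))  # 0.4, i.e., 40 percent of the inventory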
VBA Manages Workload and Performance through Established Processes, but Guidance for Claims with Errors Has Gaps VBA Allocates Claims Workload across Offices Based on Their Capacity, but Guidance for Processing Claims with Errors Has Gaps The National Work Queue, which VBA uses to distribute disability compensation claims, was designed to even out the differences in claims workload across regional offices by having multiple offices complete parts of a claim and allocating claims based on each office’s capacity. For example, as shown in figure 3, in fiscal year 2017, about 88 percent of all disability compensation claims were processed by more than one office, and over 75 percent were processed by three or more offices. This distribution method is intended to keep all offices working at their capacity, regardless of the volume of claims filed by veterans in each region. While VBA officials stated that they had initially planned to continue to have a majority of claims processed at veterans’ local regional offices, after implementation of the National Work Queue they determined that the system operates more effectively if veteran location is a lower priority factor for claims distribution. Thus, very few claims are processed entirely at a veteran’s local regional office, unless the veteran has a documented hardship that may necessitate expediting the claim or face-to-face interaction. VBA officials added that the National Work Queue formula distributes claims based on VBA priorities. For example, VBA prioritizes claims for veterans with documented hardships (e.g., terminal illness, financial hardship). In addition, the National Work Queue formula takes into account the length of time since the claim was received and prioritizes backlog claims—defined by VBA as claims that have been open for more than 125 days. Once the National Work Queue allocates claims to a regional office, the office has some discretion in managing the distribution of claims to its staff and managing the claims review process. For example, while VBA determines how the claims workload is allocated across offices, regional office managers decide which claims within the office’s queue to work first, how to program the office’s queue for distributing claims to individual claims processors’ electronic work queues, and whether any changes to this distribution are needed throughout the day. Regional office managers at each of the four offices we visited reported using VBA’s timeliness goals and daily data on claims processing timeliness to prioritize claims. Managers at the offices we visited also described additional strategies to manage their work queue, including: At two of the four offices we visited, managers said that they provide a list of claims to claims processors to prioritize, such as those that are older or have been in the office’s work queue for multiple days. Managers at one office said that they manually alter individual claims processors’ electronic work queues so that older claims are processed first. Managers at one office stated that because they instruct claims processors to focus on meeting timeliness targets for the office, all claims are worked within a few days; thus, they encourage their staff to focus on meeting the office timeliness goals rather than requiring them to work the claims in their queue in a specified order.
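The distribution approach described above can be illustrated with a simplified sketch. This is not VBA's National Work Queue formula, which this report does not publish; the priority ordering (documented hardship first, then the oldest claims, including backlog claims over 125 days) and the capacity-based assignment are assumptions drawn only from the description above, and all names and data structures are hypothetical.

# Hedged sketch of priority- and capacity-based claim distribution, loosely modeled
# on the National Work Queue description in this report. Hypothetical structures only.
def distribute(claims, office_capacity):
    """Assign the highest-priority claims first to the offices with remaining capacity."""
    # Priority: documented hardship first, then the oldest pending claims.
    ordered = sorted(claims, key=lambda c: (not c["hardship"], -c["days_pending"]))
    remaining = dict(office_capacity)
    assignments = {}
    for claim in ordered:
        office = max(remaining, key=remaining.get)  # office with the most spare capacity
        if remaining[office] <= 0:
            break  # no capacity left anywhere
        assignments[claim["id"]] = office
        remaining[office] -= 1
    return assignments

claims = [
    {"id": "C1", "days_pending": 140, "hardship": False},  # backlog claim (over 125 days)
    {"id": "C2", "days_pending": 10, "hardship": True},    # documented hardship
    {"id": "C3", "days_pending": 60, "hardship": False},
]
print(distribute(claims, {"Office A": 2, "Office B": 1}))
# {'C2': 'Office A', 'C1': 'Office A', 'C3': 'Office B'}

In practice the actual formula weighs additional factors; the point of the sketch is only that priority claims are ordered first and that offices receive work in proportion to their available capacity.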
VBA officials acknowledged that regional office managers may have different strategies for managing workload, but noted that all offices are expected to respond to VA national priorities—such as decreasing the claims backlog—while also meeting their individual office performance goals. While VBA officials noted that having discretion in workload management can be beneficial, such discretion can also lead to inconsistent handling of the claims workload. In particular, we found gaps in guidance for managing deferrals—actions taken by claims processors in VBA’s electronic claims management system when they identify claims errors that occurred earlier in the claims process. The deferral process began with the National Work Queue since claims were, for the first time, routinely being processed by multiple regional offices. Through deferrals, when claims processors identify errors in a claim, they can use the National Work Queue to return the claim for correction to the office that made the error. According to VBA data, in fiscal year 2017, VBA claims processors deferred claims in 450,305 instances, which represented almost 4 percent of the total disability claims processing work completed. While VBA officials said that claims processors who find errors are generally expected to defer a claim, managers and claims processors at the regional offices we visited had different perspectives regarding when Veterans Service Representatives should do this. At all four of the regional offices we visited, managers and claims processors said that they generally would not defer a claim if the error could be corrected and the claim moved forward. At one regional office, managers and claims processors said that they would log a deferral in the electronic claims management system, so the error would be tracked and the previous claims processor could be notified and trained, but that they would also correct the error themselves to move the claim forward. VBA provides some guidance to Rating Veterans Service Representatives regarding the circumstances in which they should defer claims, but does not have corresponding guidance for Veterans Service Representatives. However, according to our analysis of VBA data from fiscal year 2017, more than 75 percent of deferrals are logged during the Initial Development, Supplemental Development, Award, or Authorization phases—when Veterans Service Representatives are typically processing claims. Existing guidance for Veterans Service Representatives on deferrals in the National Work Queue Playbook and other documents focuses on the process for deferring a claim in the electronic claims management system, rather than on situations that merit a deferral. Specifically, VBA does not provide guidance on when Veterans Service Representatives should defer a claim or consider other options, such as correcting the error and moving the claim forward, with or without a deferral. VBA officials stated that the policy regarding when to defer claims is not prescriptive—and they do not plan to provide additional guidance—because they want to allow regional offices the discretion to decide what action is best for the veteran. However, federal internal control standards state that agencies should design control activities to achieve objectives and respond to risks. For example, a control activity that is performed routinely and consistently generally is more precise than one performed sporadically. 
As such, deferrals may not serve as an effective control without being used consistently across VBA’s regional offices. VBA’s lack of guidance on when to defer claims may lead to delays for veterans and missed opportunities to train individuals who make errors. In some cases, differences in regional office practices for when to defer claims may lead to situations in which claims that could move forward are instead sent back to the previous office, causing unnecessary delays for veterans. In addition, we heard from managers or claims processors at three offices we visited that claims may not always be deferred for legitimate reasons and that the ability to defer claims may create incentives for employees to defer a claim based on an insignificant error if they want to avoid working on a complex claim. In other cases, more significant errors may end up being fixed at a regional office without providing feedback to the office that made the mistake. While the practice of fixing the error rather than deferring the claim may keep the claim moving for the veteran, it also means that claims processors who make errors may repeat the same mistakes in the future. VBA Sets Regional Office Performance Goals and Individual Expectations and Has Developed Processes for Managing Performance VBA sets regional office goals and individual claims processor expectations that align with national efforts to increase timeliness and accuracy of claims decisions. VBA holds regional offices accountable for meeting performance goals through the Director’s Performance Plan. For disability compensation claims in fiscal year 2018, VBA assessed regional office performance using the Time-in-Queue and 12-month issue-based accuracy measures. VBA has developed processes and tools for communicating performance information to regional offices and for identifying common errors. For example, VBA sets timeliness goals for regional offices and generates daily claims processing timeliness data for each office. At the regional offices we visited, we observed that VBA displays these data on monitors so that managers and employees can see how their office is performing on a daily basis. In addition, VBA has created performance reporting tools that allow regional office managers, claims processors, and various VBA workgroups to download regional office performance information and analyze office performance issues at their discretion. At the regional offices we visited, quality review teams analyze claims processing errors made by their employees, such as those identified in STAR reviews and through the deferral process. Based on common mistakes they identify, quality review staff at all four offices we visited said that they incorporate topics related to the errors into training sessions, or provide direct coaching to individual employees. VBA also conducted an In-Process Review pilot from November 2017 through May 2018 at selected regional offices. The pilot involved a quality review for two phases of the claims process. The purpose of the pilot was for employees to learn from and correct mistakes in a non-punitive setting while the claim was being processed. VBA officials reported that VBA discontinued the pilot in May 2018—prior to its scheduled completion date—because the pilot was not demonstrating the anticipated benefit of reducing the number of errors at pilot offices that resulted in deferrals. 
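The error analysis described above, in which quality review teams tally deferrals and STAR errors to target training, can be sketched as a simple aggregation. The records and field names below are hypothetical and do not represent VBA's electronic claims management system; only the idea of grouping deferrals by claims phase and by the office that made the error comes from this report.

# Hedged sketch: summarizing deferral records by claims phase and by the office
# that made the error, as a quality review team might do to target training.
from collections import Counter

deferrals = [  # hypothetical deferral log entries
    {"phase": "Initial Development", "office_in_error": "Office A"},
    {"phase": "Award", "office_in_error": "Office B"},
    {"phase": "Authorization", "office_in_error": "Office A"},
    {"phase": "Rating", "office_in_error": "Office C"},
]

by_phase = Counter(d["phase"] for d in deferrals)
by_office = Counter(d["office_in_error"] for d in deferrals)
print(by_phase.most_common())   # which phases generate the most deferrals
print(by_office.most_common())  # which offices' errors prompt the most deferrals

# Deferrals as a share of all completed work items (the report cites 450,305
# deferral instances, almost 4 percent of work completed in fiscal year 2017).
total_work_items = 12_000  # hypothetical denominator
print(len(deferrals) / total_work_items)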
VBA also develops practices at the national level for managing individual employee performance and, in some cases, provides regional office managers with discretion for implementing those practices. In support of the regional office performance standards, VBA sets individual employee performance standards in the following five areas: (1) quality of work; (2) timeliness of corrective actions and responsiveness to workload assignments; (3) production (i.e., the number of transactions, or tasks, completed within the assessment period); (4) completion of training; and (5) organizational support. The production standards include a goal for the number of credits, or points, that employees are expected to earn during each pay period for their work activities. According to VBA officials, regional office managers are held accountable for providing feedback to employees on a regular basis and addressing performance deficiencies appropriately and in a timely manner. In addition, according to VBA officials, VA’s policy allows regional office managers “broad discretion” in determining when a performance deficiency exists. Employee performance incentive programs, which provide monetary awards to top performers in each regional office, are also managed at the national level. However, within regional offices, some managers told us that they also occasionally provide small incentives or celebrations to show appreciation for staffs’ contributions. VBA’s Timeliness and Accuracy Measures Do Not Adequately Reflect Regional Office Performance for Disability Compensation Claims Processing Regional Office Timeliness Measure Does Not Capture Performance over a Period of Time VBA uses Time-in-Queue—the average number of business days that claims have been pending at a regional office—to measure overall regional office timeliness for processing disability compensation claims. Time-in-Queue is measured separately for each phase of the claims process—Initial Development, Supplemental Development, Rating, Award, and Authorization—and VBA has established timeliness goals for each of these phases. VBA holds regional offices accountable for meeting timeliness goals through the Director’s Performance Plan, which rates offices as successful if they meet Time-in-Queue standards for each phase of the claims process in 10 out of 12 months. For this purpose, the measure is a snapshot on the last day of each month that shows how long, on average, claims have been pending at each office; however, it does not capture regional office performance over a period of time. Consequently, Time-in-Queue can provide a skewed picture for a period of time, depending on the work that is assigned to the office toward the end of the month and the speed with which claims are processed during that limited time period. Moreover, according to VBA officials, the agency used Time-in-Queue scores and additional factors—such as space considerations and training capacity—to determine the amount of new resources to allocate to its regional offices in May 2017, and the agency will continue to consider such performance information when allocating resources in the future. However, federal internal control standards state that agencies should use quality information to achieve objectives. For example, an agency should obtain data from reliable sources in a timely manner and based on identified requirements, and reliable sources are those that provide data that are reasonably free from error and bias and faithfully represent what they purport to represent. 
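The snapshot limitation described above can be made concrete with a brief, hedged sketch. It is not VBA's calculation: it uses calendar days rather than business days, ignores the separate claim phases, and relies on hypothetical claim records. It contrasts an average computed only over claims still pending on the last day of the month with an average computed over all claims the office completed during the month.

# Hedged sketch: a month-end snapshot of pending claim age versus a measure
# computed over all claims completed during the month. Illustrative only.
from statistics import mean

claims = [  # hypothetical claims handled by one office in a given month
    {"id": "C1", "days_in_office": 40, "completed_this_month": True},
    {"id": "C2", "days_in_office": 5,  "completed_this_month": True},
    {"id": "C3", "days_in_office": 12, "completed_this_month": False},  # still pending
]

# Snapshot on the last day of the month: only claims still pending are counted.
snapshot_avg = mean(c["days_in_office"] for c in claims if not c["completed_this_month"])

# Period-based alternative: average over every claim that left the office's queue this month.
period_avg = mean(c["days_in_office"] for c in claims if c["completed_this_month"])

print(snapshot_avg, period_avg)  # 12 vs. 22.5

In this toy example the office looks fast on the month-end snapshot (12 days) even though the work it actually completed that month averaged 22.5 days, which is the kind of distortion a period-based measure is intended to avoid.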
In addition, our prior work has shown that practices for improving the usefulness of performance data include using new methods of measurement to address data limitations, such as Time-in-Queue only capturing performance as a snapshot on 1 day. VBA officials acknowledged that the Time-in-Queue performance measure does not reflect the complete timeliness of a regional office. These officials said that the agency is exploring adding a Time-to-Exit-Queue measure that could capture regional office timeliness over a period of time. For example, Time-to-Exit-Queue could measure the timeliness of all claims processing work completed throughout the month instead of work pending on the last day of the month. However, VBA has not yet completed the development of the Time-to-Exit-Queue performance measure. VBA has also not determined whether or when it will replace or supplement Time-in-Queue with a new primary metric—Time-to-Exit-Queue or something else—to measure regional office timeliness. Until VBA implements a new measure to more fully assess regional offices’ timeliness, the agency will not have a complete picture of regional office performance over time, which could impair decision-making related to regional office performance, such as decisions about targeting resources to high- or low-performing offices. A Regional Office’s Accuracy Score Does Not Always Reflect the Work Completed in That Office VBA uses the STAR 12-month issue-based accuracy score to measure regional office accuracy in processing disability compensation claims, but this score could provide a misleading picture of an office’s performance. VBA’s accuracy measure attributes the accuracy of sampled claims to the regional office that finishes the claim even though, under the National Work Queue, that office may not have done all of the work on the claim. In fiscal year 2017, about 88 percent of all disability compensation claims were processed by more than one office, and about 43 percent were processed by five or more offices, as shown earlier in figure 3. As a result, the scores attributed to each office may not reflect the true accuracy of the office’s work. In addition, any errors made by other offices earlier in the claims process would not be reflected in those offices’ accuracy scores. Therefore, the current regional office accuracy measure does not reflect the accuracy of each office’s work and may skew the score negatively or positively. According to VBA officials, the agency uses issue-based accuracy scores, among other things, to determine how to allocate resources to regional offices. However, federal internal control standards state that agencies should use quality information to achieve objectives. For example, an agency obtains data from reliable sources in a timely manner based on identified requirements, and reliable sources provide data that are reasonably free from error and bias and faithfully represent what they purport to represent. In addition, our prior work has shown that practices for improving the usefulness of performance data include using new methods of measurement to address data limitations. VBA officials said that they recognize the limitations of the agency’s regional office accuracy measure, but VBA officials also said it is reasonable to hold the office that completes the claim accountable because Veterans Service Representatives are responsible for checking for errors in the claims process before completing the claim during the Authorization phase.
However, according to VBA officials, some areas on VBA’s accuracy checklist—such as whether the claimed conditions were correctly granted or denied, and whether the correct percentage evaluation was assigned—are beyond the scope of the Veterans Service Representatives’ review or qualifications. These tasks are completed by Rating Veterans Service Representatives. In fiscal year 2017, these two areas—whether the claimed conditions were correctly granted or denied, and whether the correct percentage was assigned—accounted for an estimated 28 percent of all errors nationwide. In addition, these two areas ranged from an estimated low of about 13 percent (5 of 40) of all errors attributed to one regional office to an estimated high of about 55 percent (16 of 29) of all errors attributed to another regional office. In addition, while VBA officials said that it is reasonable to hold the office that completes the claim accountable for errors, officials also said that when STAR errors are identified, only the regional offices that actually made the errors are told about them in order to improve staff performance. This suggests that VBA does not view the Veterans Service Representative who completes the claim as fully responsible for all errors in the claims process. According to VBA officials, the agency has been exploring the development of a new accuracy measure that would enable it to assign error scores to the offices that actually made the errors. For example, VBA is considering using the STAR reviews to produce a claims phase-based score that would attribute the accuracy of individual phases of the claims process to the offices completing those phases. However, according to VBA officials, sampling by each phase of the claims process would be more complicated than the current system of sampling by regional office and would require additional staff. In addition, the agency is also exploring leveraging its existing Individual Quality Reviews—currently used to assess the accuracy of individual staff’s work—to create individual regional office accuracy scores. VBA officials added, however, that there are challenges with converting these individual accuracy scores to office scores, such as calculating scores by claims phase instead of by employee position since an employee may conduct work in various phases. VBA has not determined which alternate measure, if any, to use, and does not have a timeline for addressing the challenges it has identified with the alternate measures being considered, or for implementing a new accuracy measure. Until VBA implements a new measure to assess regional offices’ accuracy, it will not have an accurate picture of individual regional offices’ performance, which could impair decision-making, such as targeting resources to high- or low-performing offices. Stakeholders Were Generally Satisfied with Communication at Selected Regional Offices, but VBA’s Communication Policies Are Applied Inconsistently Selected Veterans Service Organizations Were Generally Satisfied with Access to Regional Office Staff, but VBA’s Communication Policy and Practice Are Not Aligned Despite being generally satisfied with regional office communication, VSOs we spoke with also expressed some frustrations. VSOs we spoke with at all four offices reported generally being able to contact someone to answer their questions. Moreover, VBA staff we spoke with reported being flexible in communicating with VSOs in the manner in which they preferred.
In addition, Compensation Service and Benefits Assistance Service site visit reports found that VSOs are generally satisfied with regional office communication. However, VSOs at all four offices we visited expressed some frustrations with communication, although the specific concerns varied somewhat by office. Examples of communication issues included: Diminished contact. VSOs noted that the National Work Queue reduced personal relationships and collaboration between VSOs and regional office staff since claims are no longer fully processed at the local regional office, and therefore VSOs can no longer simply walk across the office to discuss a claim. Delayed responses. VSOs said there sometimes are delays in receiving responses from regional offices, with staff taking different lengths of time to respond to an inquiry, or not responding at all. Sometimes, once VSOs receive a response, the claim is no longer being processed at the regional office they contacted, so the response is no longer useful. Decreased notice of activity. VSOs said that with the advent of electronic claims processing, they no longer receive paper copies of disability ratings and other documents that VBA sends to the veteran. VSOs have access to such information in VBA’s electronic claims management system, but the system does not notify them when VBA has sent documents to the veteran, such as requests for information and evidence. VSOs said it is time-consuming for them to proactively monitor a large number of veterans’ electronic claims files for new documents. VSOs may communicate with a regional office throughout the life of a claim for various purposes and, according to VBA officials, regional offices generally have discretion in establishing local policies for handling VSO questions or inquiries. One exception to this local discretion is during the 48-hour review period when VSOs can review a completed disability rating before it is finalized. A November 2016 VBA policy states that during the 48-hour review period, VSOs may contact a regional office’s Change Management Agent. The policy also states that VSOs should not contact the Change Management Agent for claim status updates, evidence submission, or any other type of inquiry unrelated to a rating decision discrepancy. According to VBA officials, the policy to contact the Change Management Agent during the 48-hour review period was intended to streamline the inquiry process for VSOs, provide consistent responses to them, and minimize disruptions for claims processors. The previous policy required VSOs to first contact the Rating Veterans Service Representative before the Change Management Agent during the 48-hour review period. VSOs at three offices we visited reported contacting the Change Management Agent for inquiries during the 48-hour review period, but also reported contacting the Change Management Agent at other points during the claims process. VSOs at all four offices we visited also reported contacting other staff, such as claims processors or their supervisors at their local regional offices, during the 48-hour review period, unrelated to the Change Management Agent’s availability or a particular type of claim, which VBA officials stated were reasons for which VSOs might contact an alternate VBA official.
Federal standards for internal control state that an agency should externally communicate the necessary quality information to achieve an entity’s objectives, for example, communicating with external parties using established reporting lines, and periodically evaluating its communication methods. VBA officials told us that the November 2016 policy was intended to address communication during the 48-hour review period, and that regional office discretion for communication with VSOs outside of this period was still in place, including contacting Change Management Agents if regional offices determined this was best. However, regional offices and VSOs do not consistently implement this policy. Moreover, the policy states that VSOs are not to contact Change Management Agents for claim status updates, evidence submission, or any other question unrelated to a rating decision discrepancy. These types of inquiries generally occur outside of the 48-hour review period, so this portion of the policy conflicts with VBA officials’ description of regional office discretion for communication with VSOs throughout the life of a claim. Although VSO communication with Change Management Agents did not always appear to match VBA’s policy for communication during or outside of the 48-hour review period, VSOs we spoke with seemed to value regional offices’ flexibility in communicating with them. However, it is possible that the policy’s lack of clarity or inconsistent application could contribute to communication frustrations for VSOs, and that changes to either the policy or its enforcement could better serve VSOs and regional office staff. Evaluating its regional office communication policy with VSOs and ensuring that the policy is clear, that it aligns with regional offices’ practices, and that it effectively meets VSOs’ communication needs, could help VBA ensure that it is providing timely and consistent responses to VSOs on behalf of the veterans they represent, while minimizing disruptions to regional office staff. Such alignment could be achieved either by adjusting the communication policy or better enforcing the existing policy. Selected Congressional Caseworkers Were Satisfied with Communication with Regional Offices, but VBA’s Communication Was Not Always Timely or Accurate Congressional caseworkers we spoke with at all four offices we visited were satisfied with regional office communication regarding disability compensation claims, though some regional office responses were not timely or accurate, according to VA Inspector General reports. VBA has congressional liaisons at each of its regional offices to answer inquiries from congressional caseworkers. Caseworkers generally contact the VBA liaison at their local regional office when they inquire about claims— whether the claims are being processed at the local regional office or another regional office. Caseworkers may also contact the VBA liaison at the office where the claim is being processed once they find out from VBA where that is. According to regional office officials at the four offices we visited, most congressional inquiries received at the regional offices are by email or phone, although some are by regular mail; the congressional inquiries are most often regarding the status of a veteran’s claim. 
While caseworkers we spoke with were satisfied with their communication with regional offices, VA’s Office of Inspector General found that in some instances, VBA regional offices had not provided timely or accurate responses to special controlled correspondence, which includes congressional inquiries. According to VBA guidance on special controlled correspondence in fiscal year 2017, VBA liaisons are to respond to caseworkers’ inquiries within 5 business days with a full or interim response, for example. During its inspections of regional offices during fiscal year 2017, the Office of Inspector General found that some offices had not provided interim responses within 5 business days and, in a few cases, had provided inaccurate responses. At some offices, the Office of Inspector General made recommendations for improving regional offices’ responses to inquiries and, according to its reports, regional offices planned and implemented changes, such as providing additional training to staff and improving oversight of correspondence. Caseworkers we spoke with at three offices we visited identified ways that regional offices could improve communication with them or ways that VBA could provide them with additional information or support. For example, while caseworkers generally contact their local regional office with inquiries, caseworkers at two offices said that a regularly updated contact list of VBA liaisons at all VBA regional offices could be helpful so that they can immediately contact another regional office if they learn that a claim is being processed there, or if their local VBA liaison is unable to provide sufficient specifics on a claim. Some of these caseworkers suggested that the list could either be posted to a non-public website or sent to VBA regional offices to distribute to local caseworkers. According to VBA officials, the agency does maintain a list of regional office VBA liaisons, and updates it quarterly. The list is provided upon request, both electronically and in hard copy, and caseworkers frequently request the list, according to VBA officials. However, the caseworkers we spoke with at all four offices we visited were not aware of this list. In September 2017, VBA developed an online toolkit for congressional caseworkers to better assist them in serving their veteran constituencies. The toolkit webpage provides a central location for caseworkers to quickly locate information regarding available VA benefits and services. For example, the toolkit provides a link to a description of the disability compensation program and how to apply for benefits. VBA officials reported that in September 2017, they provided information on the toolkit to VA’s Office of Congressional and Legislative Affairs for distribution to congressional staff. However, caseworkers and VBA liaisons at all four offices we visited were not aware of this online toolkit, and caseworkers we spoke with at two offices we visited said that it could have been useful to them if they had been aware of it or if it had additional elements, such as regional office expectations for caseworker inquiries. According to VBA officials, they have not received any feedback on the toolkit beyond that initially provided by another VA office. 
This could be, in part, because VBA does not have an outreach mechanism to actively obtain perspectives from congressional caseworkers on their communication with regional offices or their information or support needs, or to determine whether the findings from the Office of Inspector General are typical across regional offices. The Office of Inspector General stopped performing its reviews of regional offices—including evaluations of communication with congressional caseworkers—in fiscal year 2017 to focus its efforts on VBA-wide audits, so this information is no longer available to VBA. Federal standards for internal control state that an agency should externally communicate the necessary quality information to achieve an entity’s objectives, for example, selecting the appropriate methods to communicate externally, and periodically evaluating its methods of communication so that the agency has the appropriate tools to communicate quality information outside the agency. VBA officials reported an open-door policy in which caseworkers can share concerns and requests as needed, and said that a formal outreach mechanism is not necessary. Although caseworkers can approach regional office staff with ideas for improvement, this informal mechanism is not a consistent process and does not facilitate candid feedback, nor does it include documentation of potential improvements and actions taken. By creating an outreach mechanism to solicit periodic feedback from congressional caseworkers, VBA could streamline the inquiry process and enable them to provide more accurate and timely information to veterans. Conclusions VBA’s National Work Queue has been in place for more than 2 years and provides opportunities for a higher level of service to veterans. However, with claims moving among regional offices, the individual performance of regional offices remains critical to VBA’s success. For example, regional offices’ inconsistent use of deferrals when claims processors identify errors could unnecessarily delay the decision on a veteran’s claim or prevent staff from receiving needed training. In addition, VBA has developed several practices to assess performance at regional offices, but some of this information could be of limited use if the agency continues using its existing measures. Specifically, VBA’s two primary performance measures for regional offices do not allow the agency to adequately measure claims timeliness and accuracy. Finally, communication with VSOs and congressional caseworkers could be improved by clarifying the VSO communication policy and aligning it with practice and VSO needs, and conducting caseworker outreach in order to provide more consistent and timely information to VSOs and caseworkers. Without these improvements, VSOs and caseworkers may not be able to serve veterans in as timely a manner as possible. Recommendations for Executive Action We are making the following five recommendations to VBA: The Under Secretary for Benefits should clarify how Veterans Service Representatives should handle claims when they identify an error, including when to defer a claim and when to correct the error on their own. (Recommendation 1) The Under Secretary for Benefits should develop and implement a new regional office performance measure that allows it to better assess each regional office’s timeliness over a period of time. 
(Recommendation 2) The Under Secretary for Benefits should develop and implement a new regional office performance measure that allows it to better measure the accuracy of each regional office’s work. (Recommendation 3) The Under Secretary for Benefits should evaluate its policy for regional office communication with VSOs to ensure that it is clear, that it aligns with practice, and that it meets the communication needs of VSOs. (Recommendation 4) The Under Secretary for Benefits should develop and implement a mechanism to obtain periodic feedback from congressional caseworkers on their communication with regional offices regarding claims and needed information or support. (Recommendation 5) Agency Comments and Our Evaluation We provided a draft of this report to the Department of Veterans Affairs for review and comment. VA provided written comments, which are reproduced in appendix I. VA concurred with all of our recommendations and described VBA’s plans for taking action to address them. Regarding Recommendation 1, VA stated that VBA is working to clarify guidance to regional offices for handling claims when errors are identified. Regarding Recommendations 2 and 3, VA stated that VBA is working to develop and implement new performance measures for regional office timeliness and accuracy. Regarding Recommendation 4, VA stated that VBA will review and enhance its policy for communication with VSOs. Regarding Recommendation 5, VA stated that VBA will review existing practices on support for congressional caseworkers, and develop and implement mechanisms to strengthen this support. VA also reported that regional office managers have been directed to meet at least quarterly with congressional caseworkers to gather feedback and resolve issues. If VBA can demonstrate that it is consistently using such feedback mechanisms across regional offices to identify and address caseworker concerns, this will meet the intent of our recommendation. We are sending copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or curdae@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Veterans Affairs Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Nyree Ryder Tee (Assistant Director), Rebecca Kuhlmann Taylor (Analyst-in-Charge), Justin Gordinas, and Martin E. Scire made significant contributions to the report. Also contributing to the report were James E. Bennett, Alex Galuten, Benjamin T. Licht, Liam O’Laughlin, David Perkins, Almeta Spencer, Walter K. Vance, and Kathleen van Gelder.
Why GAO Did This Study Each year, VBA processes more than 1 million disability compensation claims and provides about $65 billion in benefits to veterans. The Jeff Miller and Richard Blumenthal Veterans Health Care and Benefits Improvement Act of 2016 includes a provision for GAO to review VBA's regional offices to help VBA achieve more consistent performance in processing disability compensation claims. This report examines (1) how VBA manages workload and performance for the disability compensation claims process, (2) how well VBA's timeliness and accuracy measures capture its regional offices' performance in processing these claims, and (3) how well selected regional offices communicate with VSOs and congressional caseworkers about these claims. GAO reviewed VBA policies and procedures; visited four regional offices selected to represent a range of performance scores and claims processing volume in fiscal year 2017; and interviewed VBA headquarters officials and management and staff from the selected regional offices. GAO also interviewed VSOs and congressional caseworkers—selected for House, Senate, and bipartisan representation—to learn more about their communication with VBA. What GAO Found In 2016, the Veterans Benefits Administration (VBA) centralized distribution of the disability compensation claims workload through the National Work Queue, which prioritizes and distributes claims to regional offices based on their capacity; however, there are gaps in VBA's guidance for processing claims with errors. Under the National Work Queue, multiple regional offices can work on a single claim instead of the claim remaining at one office for the duration of processing (see figure). GAO found gaps in guidance about whether a claims processor should fix an error made by another regional office, or return the claim to that office to be corrected. The former could result in missed opportunities to train staff who made the error, while the latter could result in processing delays. VBA primarily uses timeliness and accuracy measures to assess its regional offices' performance in processing disability compensation claims, but these measures do not adequately capture performance. The timeliness measure can be skewed because it is a snapshot of how long claims have been pending at an office on the last day of the month, and does not capture performance over a period of time. The accuracy measure is attributed to the office that finishes the claim, even though 88 percent of claims completed in fiscal year 2017 were processed at more than one office. VBA officials acknowledged that these measures are limited and said the agency is exploring alternatives, but VBA has no specific plan or time frame for determining and implementing new measures. Without measures to more accurately assess regional office performance, VBA may be limited in its ability to make efficient and effective decisions. Veterans service organizations (VSO) and staff working for Members of Congress (congressional caseworkers) interviewed by GAO were generally satisfied with regional office communication regarding disability compensation claims. However, VBA's policy on whom VSOs should contact during different points in the process did not always align with what occurs at the offices we visited or with VSO needs. This could result in VSOs not receiving consistent and timely responses from VBA. Evaluating this policy could help VBA assist VSOs in better serving veterans. 
In addition, congressional caseworkers GAO interviewed identified ways that communication could be improved or that additional support could be provided, such as a list of contacts at all regional offices for claim inquiries. VBA officials GAO interviewed described an open-door policy through which they may receive feedback from caseworkers, but the agency does not formally solicit periodic feedback from them. Without such feedback, the agency may miss opportunities to identify and address caseworker communication needs that could help them better serve veterans. What GAO Recommends GAO is making five recommendations to VBA to clarify guidance for correcting errors, develop and implement measures to better assess timeliness and accuracy at regional offices, and evaluate communication with VSOs and caseworkers. The Department of Veterans Affairs concurred with GAO's recommendations.
GAO-18-596T
Background
Burn pits—shallow excavations or surface features with berms used to conduct open-air burning—were often chosen as a method of waste disposal during recent contingency operations in the CENTCOM area of responsibility, which extends from the Middle East to Central Asia and includes Iraq and Afghanistan. In 2010, we reported that there were 251 active burn pits in Afghanistan and 22 in Iraq. However, in 2016, we reported that the use of burn pits in the CENTCOM area of responsibility had declined since that time. As of June 2016, DOD officials told us that there were no military-operated burn pits in Afghanistan and only one in Iraq. According to DOD officials, the decline in the number of burn pits from 2010 to 2016 could be attributed to such factors as (1) the use of contractors for waste disposal and (2) the increased use of waste management alternatives such as landfills and incinerators. However, DOD officials acknowledged that burn pits were being used to dispose of waste in other locations that were not military-operated. Specifically, these officials noted instances in which local contractors had been hired to haul away waste and subsequently disposed of the waste in a burn pit located in close proximity to the installation. In such instances, officials stated that they requested that the contractors relocate the burn pit. According to a DOD official, as of May 2018 there are two active burn pits in the CENTCOM area of responsibility.
Although burn pits help base commanders manage waste, they also produce smoke and emissions that military and other health professionals believe may result in acute and chronic health effects for those exposed. We previously reported that some veterans returning from the Iraq and Afghanistan conflicts have reported pulmonary and respiratory ailments, among other health concerns, that they attributed to burn pit emissions. Numerous veterans have also filed lawsuits against a DOD contractor alleging that the contractor mismanaged burn pit operations at several installations in both Iraq and Afghanistan, resulting in exposure to harmful smoke that caused these adverse health effects. We have also previously reported on the difficulty of establishing a correlation between occupational and environmental exposures and health issues. For example, in 2012 we reported that establishing causation between an exposure and an adverse health condition can be difficult for several reasons, including that for many environmental exposures, there is a latency period—the time period between initial exposure to a contaminant and the date on which an adverse health condition is diagnosed. When there is a long latency period between an environmental exposure and an adverse health condition, choosing among multiple possible causes of exposure may be difficult. In addition, in 2015 we reported that the Army had recently published a study that evaluated associations between deployment to Iraq and Kuwait and the development of respiratory conditions post-deployment. However, the study was unable to identify a causal link between exposure to burn pits and respiratory conditions.
DOD Had Not Fully Assessed the Health Risks of Burn Pits
In our 2016 report, we found that the effects from exposing individuals to burn pit emissions were not well understood, and DOD had not fully assessed these health risks.
Under DOD Instruction 6055.01, DOD Safety and Occupational Health (SOH) Program, it is DOD policy to apply risk-management strategies to eliminate occupational injury or illness and loss of mission capability or resources. DOD Instruction 6055.01 also instructs all DOD components to establish procedures to ensure that risk- acceptance decisions were documented, archived, and reevaluated on a recurring basis. Furthermore, DOD Instruction 6055.05, Occupational and Environmental Health (OEH), requires that hazards be identified and risk evaluated as early as possible, including the consideration of exposure patterns, duration, and rates. Notwithstanding this guidance, which applies to burn pit emissions among other health hazards, DOD had not fully assessed the health risks of use of burn pits according to DOD officials. According to DOD officials, DOD’s ability to assess these risks was limited by a lack of adequate information on (1) the levels of exposure to burn pit emissions and (2) the health impacts these exposures had on individuals. With respect to information on exposure levels, DOD had not collected data from emissions or monitored exposures from burn pits as required by its own guidance. DOD Instruction 4715.19 requires that plans for the use of open-air burn pits include ensuring the area was monitored by qualified force health protection personnel for unacceptable exposures, and CENTCOM Regulation 200-2, CENTCOM Contingency Environmental Standards, requires steps to be taken to sample or monitor burn pit emissions. However, DOD officials stated that there were no processes in place to specifically monitor burn pit emissions for the purposes of correlating potential exposures. They attributed this to a lack of singular exposure to the burn pit emissions, or emissions from any other individual item; instead, monitoring was done for the totality of air pollutants from all sources at the point of population exposure. As we reported in September 2016, given the potential use of burn pits near installations and their potential use in future contingency operations, establishing processes to monitor burn pit emissions for unacceptable exposures would better position DOD and combatant commanders to collect data that could help assess exposure to risks. In the absence of the collection of data to examine the effects of burn pit exposure on servicemembers, the Department of Veterans Affairs in 2014 created the airborne hazards and open-air burn pit registry, which allows eligible individuals to self-report exposures to airborne hazards (such as smoke from burn pits, oil-well fires, or pollution during deployment), as well as other exposures and health concerns. The registry helps to monitor health conditions affecting veterans and servicemembers, and to collect data that would assist in improving programs to help those with deployment exposure concerns. With respect to the information on the health effects from exposure to burn pit emissions, DOD officials stated that there were short-term effects from being exposed to toxins from the burning of waste, such as eye irritation and burning, coughing and throat irritation, breathing difficulties, and skin itching and rashes. However, the officials also stated that DOD did not have enough data to confirm whether direct exposure to burn pits caused long-term health issues. 
Although DOD and the Department of Veterans Affairs had commissioned studies to enhance their understanding of airborne hazards, including burn pit emissions, the then-current lack of data on emissions specific to burn pits limited DOD's ability to fully assess potential health impacts on servicemembers and other base personnel, such as contractors. For example, in a 2011 study that was contracted by the Department of Veterans Affairs, the Institute of Medicine stated that it was unable to determine whether long-term health effects are likely to result from burn pit exposure due to inadequate evidence of an association. While the study did not establish a link to long-term health effects because of the lack of data, it did not rule out such a relationship either. Rather, it outlined a methodology for collecting the data necessary to determine the effects of the exposure. Specifically, the 2011 study outlined the feasibility and design issues for an epidemiologic study—that is, a study of the distribution and determinants of diseases and injuries in human populations—of veterans exposed to burn pit emissions. Further, the 2011 study reported that there were a variety of methods for collecting exposure information, but the most desirable was to measure exposures quantitatively at the individual level. Individual exposure measurements could be obtained through personal monitoring data or biomonitoring. However, if individual monitoring data were not available, and they rarely are, individual exposure data might also be estimated from modeling of exposures, self-reported surveys, interviews, job exposure matrixes, and environmental monitoring. Further, to determine the incidence of chronic disease, the study stated that servicemembers must be tracked from their time of deployment over many years. While the Institute of Medicine outlined a methodology for conducting an epidemiologic study, DOD had not taken steps to conduct this type of research study, specifically one focused on direct, individual exposure to burn pit emissions and the possible long-term health effects of such exposure. Instead, some officials commented that there were no long-term health effects linked to burn pit exposures because the 2011 study did not acknowledge any. Conversely, Veterans Affairs officials stated that a study aimed at establishing health effect linkages could be enabled by the data in the department's airborne hazards and open-air burn pit registry, which collects self-reported information on servicemembers' deployment location and exposure. In response to a mandate contained in section 201 of Public Law 112-260, the Department of Veterans Affairs entered into an agreement with the National Academies of Sciences, Engineering, and Medicine to convene a committee to provide recommendations on collecting, maintaining, and monitoring information through the registry. The committee assessed the effectiveness of the Department of Veterans Affairs' information gathering efforts and provided recommendations for addressing the future medical needs of the affected groups. The study was conducted in two phases. Phase 1 was a review of the data collection methods and outcomes, as well as an analysis of the self-reported veteran experience data gathered in the registry. Phase 2 focused on assessing the effectiveness of the actions taken by the Department of Veterans Affairs and DOD and provided recommendations for improving the methods used.
The committee released its final report in February 2017. As we reported in September 2016, considering the results of this review as well as the methodology of the 2011 Institute of Medicine study as part of an examination of the relationship between direct, individual exposure to burn pit emissions and long-term health effects could better position DOD to fully assess those health risks. In our September 2016 report we recommended that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to: take steps to ensure CENTCOM and other geographic combatant commands, as appropriate, establish processes to consistently monitor burn pit emissions for unacceptable exposures; and in coordination with the Secretary of Veterans Affairs, specifically examine the relationship between direct, individual, burn pit exposure and potential long-term health-related issues. As part of that examination, consider the results of the National Academies of Sciences, Engineering, and Medicine’s report on the Department of Veteran Affairs registry and the methodology outlined in the 2011 Institute of Medicine study that suggests the need to evaluate the health status of service members from their time of deployment over many years to determine their incidence of chronic disease, with particular attention to the collection of data at the individual level, including the means by which that data is obtained. DOD concurred with the first recommendation, stating that the department will ensure that geographic combatant commands establish and employ processes to consistently monitor burn pit emissions for unacceptable exposures at the point of exposure and if necessary at individual sources. In a May 2018 status update regarding this recommendation, DOD stated that it will be updating applicable department policy and procedures, its tactics techniques and procedures manual, and guidance for sampling and analysis plans to improve monitoring of burn pit emissions and other airborne hazard emissions. Specifically, DOD stated it will update DOD Instruction 6490.03, Deployment Health; that the update will provide revised procedures on deployment health activities required before, during, and after deployments, including Occupational and Environmental Health Site Assessments; and that it estimates this will be completed by the 4th quarter of fiscal year 2018. In addition, the department stated it will update its Occupational and Environmental Health Site Assessments tactics, techniques, and procedures manual and update guidance for sampling and analysis plans and that the updates will provide revised tactics, techniques, and procedures that will improve the quality of health risk assessment. The department expects this to be completed by the 1st quarter of fiscal year 2019. GAO believes that upon completion of these actions, DOD will have met the intent of this recommendation. 
With respect to our recommendation to sponsor research, in coordination with the Secretary of Veterans Affairs, to specifically examine the relationship between burn pit exposure and potential health-related issues, DOD partially concurred. DOD stated that a considerable volume of research studies had already been completed, were ongoing, or were planned in collaboration with the Department of Veterans Affairs and other research entities to improve understanding of the relationship between burn pit and other ambient exposures and potential long-term health outcomes, and that the studies, where applicable, consider and incorporate the methodology outlined in the 2011 Institute of Medicine study. In a May 2018 status update regarding this recommendation, the department stated that DOD and the Department of Veterans Affairs continue to collaborate with each other and other entities on research activities that address burn pit and other airborne exposures, and potential long-term health outcomes. Specifically, the department cited a DOD/Veterans Affairs Airborne Hazards Symposium held in May 2017; an update to the Veterans Affairs/DOD Deployment Health Working Group "Airborne Hazards Joint Action Plan" to be completed by the 3rd quarter of fiscal year 2018; and the completion of research to examine airborne hazard exposures and potential health-related issues. GAO believes that to the extent that continued studies consider and incorporate the methodology outlined in the 2011 Institute of Medicine study, where appropriate, DOD will have met the intent of this recommendation.
Chairman Dunn, Ranking Member Brownley, and Members of the Subcommittee, this concludes my statement for the record.
GAO Contact and Staff Acknowledgments
If you or your staff have any questions about this statement, please contact Cary Russell, Director, Defense Capabilities and Management, at 202-512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this statement include Guy LoFaro (Assistant Director), Lorraine Ettaro, Shahrzad Nikoo, Jennifer Spence, and Matthew Young.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Burn pits help base commanders manage waste generated by U.S. forces overseas, but they also produce harmful emissions that military and other health professionals believe may result in chronic health effects for those exposed. This statement provides information on the extent to which DOD has assessed any health risks of burn pit use. This statement is based on a GAO report issued in September 2016 (GAO-16-781). The report was conducted in response to section 313 of the Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act for Fiscal Year 2015. Specifically, GAO assessed the methodology DOD used in conducting a review of the compliance of the military departments and combatant commands with DOD instructions governing the use of burn pits in contingency operations and the adequacy of a DOD report for the defense committees. GAO also obtained updates from DOD on actions taken to assess health risks from burn pits since September 2016. What GAO Found GAO reported in September 2016 that the effects from exposing individuals to burn pit emissions were not well understood, and the Department of Defense (DOD) had not fully assessed the health risks associated with the use of burn pits. Burn pits—shallow excavations or surface features with berms used to conduct open-air burning—were often chosen as a method of waste disposal during recent contingency operations in the U.S. Central Command (CENTCOM) area of responsibility, which extends from the Middle East to Central Asia and includes Iraq and Afghanistan. According to DOD Instruction 6055.01, DOD Safety and Occupational Health (SOH) Program , DOD should apply risk-management strategies to eliminate occupational injury or illness and loss of mission capability or resources. The instruction also requires all DOD components to establish procedures to ensure that risk-acceptance decisions were documented, archived, and reevaluated on a recurring basis. Furthermore, DOD Instruction 6055.05, Occupational and Environmental Health (OEH), requires that hazards be identified and risk evaluated as early as possible, including the consideration of exposure patterns, duration, and rates. While DOD has guidance that applies to burn pit emissions among other health hazards, DOD had not fully assessed the health risks of use of burn pits, according to DOD officials. According to DOD officials, DOD's ability to assess these risks was limited by a lack of adequate information on (1) the levels of exposure to burn pit emissions and (2) the health impacts these exposures had on individuals. With respect to information on exposure levels, DOD had not collected data from emissions or monitored exposures from burn pits as required by its own guidance. Given the potential use of burn pits near installations and during future contingency operations, establishing processes to monitor burn pit emissions for unacceptable exposures would better position DOD and combatant commanders to collect data that could help assess exposure to risks. GAO recommended that the Secretary of Defense (1) take steps to ensure CENTCOM and other geographic combatant commands, as appropriate, establish processes to consistently monitor burn pit emissions for unacceptable exposures; and (2) in coordination with the Secretary of Veterans Affairs, specifically examine the relationship between direct, individual, burn pit exposure and potential long-term health-related issues. 
DOD concurred with the first recommendation and partially concurred with the second. In a May 2018 status update regarding these recommendations, DOD outlined a series of steps it had implemented as well as steps that it intends to implement. The department believes these efforts will further enhance its ability to monitor burn pit emissions and examine the relationship between direct, individual, burn pit exposure and potential long-term health-related issues. GAO believes the steps DOD is taking are appropriate.
What GAO Recommends
GAO made two recommendations focused on improving monitoring of burn pit emissions and examining any associated health effects related to burn pit exposure. DOD concurred with one recommendation and partially concurred with the other. GAO continues to believe the recommendations are valid.
GAO-18-694T
Background Oversight of nursing homes is a shared federal-state responsibility, with CMS central and regional offices overseeing activities completed by state survey agencies. Specifically, CMS central office (1) oversees the federal quality standards nursing homes must meet to participate in the Medicare and Medicaid programs and (2) establishes the responsibilities of CMS’s regional offices and state survey agencies to ensure federal quality standards for nursing homes are met. CMS regional offices oversee state activities and report results back to CMS central office. Specifically, regional offices are required to conduct annual federal monitoring surveys to assess the adequacy of surveys conducted by state survey agencies. CMS regional offices also evaluate state surveyors’ performance on factors such as the frequency and quality of state surveys. Finally, in each state, under agreement with CMS, a state survey agency assesses whether nursing homes meet CMS’s standards by conducting regular surveys and investigations of complaints regarding resident care or safety, as needed. CMS collects data on nursing home quality through annual standard surveys and complaint investigations, as well as other sources, such as staffing data and clinical quality measures. Standard surveys. By law, every nursing home receiving Medicare or Medicaid payment must undergo a standard survey during which teams of state surveyors conduct a comprehensive on-site evaluation of compliance with federal quality standards. Nursing homes with consistently poor performance can be selected for the Special Focus Facility (SFF) program, which requires more intensive oversight, including more frequent surveys. Complaint investigations. Nursing homes also are surveyed on an as-needed basis with investigations of consumer complaints. These complaints can be filed with state survey agencies by residents, families, ombudsmen, or others acting on a resident’s behalf. During an investigation, state surveyors evaluate the nursing home’s compliance with a specific federal quality standard. Staffing data. Nurse staffing levels are considered a key component of nursing home quality and are often measured in total nurse hours per resident day. Higher nurse staffing levels are typically linked with higher quality nursing home care. Clinical quality measures. Nursing homes are required to provide data on certain clinical quality measures—such as the incidence of pressure ulcers—for all residents to CMS. CMS currently tracks data for 18 clinical quality measures. CMS publicly reports a summary of each nursing home’s quality data on its Nursing Home Compare website using a five-star quality rating. The Five-Star Quality Rating System assigns each nursing home an overall rating and three component ratings—surveys (standard and complaint), staffing, and quality measures—based on the extent to which the nursing home meets CMS’s quality standards and other measures. In a 2016 report, we found that CMS did not have a systematic process for prioritizing recommended changes to improve its Nursing Home Compare website and that several factors limited the ability of CMS’s Five-Star Quality Rating System to help consumers understand nursing home quality and choose a home. We recommended that CMS establish a process to evaluate and prioritize website improvements and add explanatory information about the Five-Star System to Nursing Home Compare. 
HHS agreed and in 2018 completed actions on these recommendations, but it has not yet acted on the other recommendations, including providing national comparison information, that we maintain are important to help consumers understand nursing home quality and make distinctions between nursing homes.
Nursing Home Quality Data Show Mixed Results, Although Data Issues Complicate Ability to Assess Quality Trends
Data on Nursing Home Quality Showed Mixed Results
In our October 2015 report examining trend data that give insight into nursing home quality, we found that four key data sets showed mixed results, and data issues complicated the ability to assess quality trends. Nationally, one of the four data sets—consumer complaints—suggested consumers' concerns over nursing home quality increased from 2005 to 2014. However, the other three data sets—deficiencies, staffing levels, and clinical quality measures—indicated potential improvement in nursing home quality (see Table 1). Specifically, we found consumer complaints—which can originate from residents, families, ombudsmen, or others acting on a resident's behalf—increased 21 percent from 2005 to 2014. In contrast, nurse staffing levels increased 9 percent from 2009 to 2014, and selected quality measure scores showed decreases in the number of reported quality problems, such as falls resulting in major injury, from 2011 to 2014. In addition, we identified 416 homes in 36 states that had consistently poor performance across the four data sets we examined. Of the 416 homes, 71 (17 percent) were included in the Special Focus Facility (SFF) program at some point between 2005 and 2014.
Data Issues Complicated CMS's Ability to Assess Quality Trends
In our October 2015 report, we found CMS's ability to use available data to assess nursing home quality trends was complicated by various issues with these data, which made it difficult to determine whether observed trends reflected actual changes in quality, data issues, or both. CMS has taken some actions to address these data complications; however, more work is needed.
Consumer complaints: The average number of consumer complaints reported per nursing home increased in the 10 years of data we examined, although it is unclear to what extent this can be attributed to a change in quality or to state variation in the recording of complaints. Some state survey agency officials explained that changes in how they recorded complaints in CMS's complaint tracking system could in part account for the jump in reported complaints. In addition, officials at one state survey agency explained that the increase in complaints could also reflect state-level efforts to provide consumers with more user-friendly options for filing complaints. Similarly, in April 2011, we found differences in how states record and track complaints.
Deficiencies cited on standard surveys: The decline in the number of serious deficiencies—deficiencies that at a minimum caused harm to the resident—in the data we examined may have indicated an improvement in quality, although it may also be attributed to inconsistencies in measurement. For example, the use of multiple survey types, such as both traditional paper-based surveys and electronic surveys, to conduct the standard survey that every nursing home receiving Medicare or Medicaid payment must undergo complicates the ability to compare the results of these surveys nationally.
In our October 2015 report, we recommended CMS implement the same survey methodology across all states; HHS agreed with this recommendation and in November 2017 completed its national implementation of this electronic survey methodology.
Nurse staffing: CMS data showed the average total nurse hours per resident day increased from 2009 through 2014, although CMS did not have assurance these data were accurate. Many of the regional office and state survey agency officials we spoke with expressed concern over the self-reported nature of these data, noting that it may be easy to misrepresent nurse staff hours. At the time of our 2015 report, CMS was in the process of implementing a system to collect staffing information based on payroll and other verifiable data and has now completed that implementation, as required by law. We recommended in 2015 that CMS establish and implement a clear plan for ongoing auditing of its staffing data and other quality data. HHS agreed with this recommendation, and in July 2018 CMS provided us with documentation that it was conducting regular audits of this new nurse staffing data. According to CMS, facilities experienced challenges submitting complete and accurate data in the early stages; however, as of April 2018 the agency has begun relying on the payroll data to calculate the staffing measures that it posts in Nursing Home Compare and uses in the Five-Star Quality Rating System.
Selected quality measures: Nursing homes generally improved their performance on the eight selected quality measures we reviewed, although it is unclear to what extent this can be attributed to a change in quality or to possible inaccuracies in self-reported data. Like the nurse staffing data used by CMS, data on nursing homes' performance on these measures were self-reported, and until 2014 CMS conducted little to no auditing of these data to ensure their accuracy. In our 2015 report, we found CMS had begun taking steps to help mitigate the problem with self-reported data by starting to audit the data in 2015; however, the agency did not have clear plans to continue the audits beyond 2016. As such, in our recommendation we indicated the need for ongoing auditing of data used to calculate clinical quality measures. As of August 2018, CMS has not provided us a plan for ongoing auditing of its clinical quality measures, and we continue to believe that CMS should establish and carry out such a plan.
Collectively, these data issues have broader implications related to nursing home quality trends, including potential effects on the quality benchmarks CMS sets and consumers' decisions about which nursing home to select. Furthermore, data used by CMS to assess quality measures are also used when determining Medicare payments to nursing homes, so data issues—and CMS's internal controls related to the data—could affect the accuracy of payments. Moreover, the use of quality data for payment purposes will expand in fiscal year 2019, when a nursing home value-based purchasing program will be implemented that will increase or reduce Medicare payments to nursing homes based on certain quality measures.
CMS Had Modified Oversight Activities by 2015, But Had Not Monitored Potential Effect on Nursing Home Quality Oversight
Our 2015 report found that CMS had made numerous modifications to its nursing home oversight activities in recent years, but had not monitored the potential effect of these modifications on nursing home quality oversight.
Some of these modifications expanded or added new oversight activities—for example, CMS expanded the number of tools available to state surveyors when investigating medication-related adverse events, increased the amount of nursing home quality data available to the public, and created new trainings for surveyors on unnecessary medication usage. However, other modifications reduced existing oversight activities. In 2015, we highlighted modifications that reduced two existing oversight activities—the federal monitoring survey program and the SFF program. Federal monitoring surveys: CMS reduced the scope of the federal monitoring surveys regional offices use to evaluate state surveyors’ skills in assessing nursing home quality. CMS requires regional offices to complete federal monitoring surveys in at least 5 percent of nursing homes surveyed by the state each year. Starting in 2013, CMS required fewer federal monitoring surveys to be standard surveys and allowed more monitoring surveys to be the narrower scoped and less-resource intensive revisits and complaint investigations. Special Focus Facilities: CMS reduced the number of nursing homes participating in the SFF program. In 2013, CMS began to reduce the number of homes in the program by instructing states to terminate homes that had been in the program for 18 months without improvement from participating in Medicare and Medicaid. As we have previously reported, between 2013 and 2014, the number of nursing homes in the SFF program dropped by more than half—from 152 to 62. In 2014, CMS began the process of re-building the number of facilities in the SFF program; however, according to CMS officials, the process would be slow, and as of August 2018 there were 85 SFFs. In 2015, CMS said some of the reductions to oversight activities were in response to an increase in oversight responsibilities and limited number of staff and financial resources. Specifically, CMS officials said increasing oversight responsibilities and a limited number of staff and financial resources at the central, regional, and state levels required the agency to evaluate its activities and reduce the scope of some activities. In the October 2015 report, we recommended CMS monitor oversight modifications to better assess their effects; HHS agreed with the recommendation and told us they are beginning to take steps to address this issue. We maintain the importance of monitoring to help CMS better understand how its oversight modifications affect nursing home quality and to improve its oversight given limited resources. Chairman Harper, Ranking Member DeGette, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contact and Staff Acknowledgments For further information about this statement, please contact John E. Dicken at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. In addition to the contact named above, key contributors to this statement were Karin Wallestad (Assistant Director), Sam Amrhein, Summar Corley, Pam Dooley, Will Simerl, and Jennifer Whitworth. Appendix I: Related GAO Reports Nursing Homes: Consumers Could Benefit from Improvements to the Nursing Home Compare Website and Five-Star Quality Rating System. GAO-17-61. Washington, D.C.: November 18, 2016. Skilled Nursing Facilities: CMS Should Improve Accessibility and Reliability of Expenditure Data. 
GAO-16-700. Washington, D.C.: September 7, 2016.
Nursing Home Quality: CMS Should Continue to Improve Data and Oversight. GAO-16-33. Washington, D.C.: October 30, 2015.
Health Care Transparency: Actions Needed to Improve Cost and Quality Information for Consumers. GAO-15-11. Washington, D.C.: October 20, 2014.
Nursing Homes: More Reliable Data and Consistent Guidance Would Improve CMS Oversight of State Complaint Investigations. GAO-11-280. Washington, D.C.: April 7, 2011.
Nursing Homes: Complexity of Private Investment Purchases Demonstrates Need for CMS to Improve the Usability and Completeness of Ownership Data. GAO-10-710. Washington, D.C.: September 30, 2010.
Poorly Performing Nursing Homes: Special Focus Facilities Are Often Improving, but CMS's Program Could Be Strengthened. GAO-10-197. Washington, D.C.: March 19, 2010.
Nursing Homes: Addressing the Factors Underlying Understatement of Serious Care Problems Requires Sustained CMS and State Commitment. GAO-10-70. Washington, D.C.: November 24, 2009.
Nursing Homes: Opportunities Exist to Facilitate the Use of the Temporary Management Sanction. GAO-10-37R. Washington, D.C.: November 20, 2009.
Nursing Homes: CMS's Special Focus Facility Methodology Should Better Target the Most Poorly Performing Homes, Which Tended to Be Chain Affiliated and For-Profit. GAO-09-689. Washington, D.C.: August 28, 2009.
Medicare and Medicaid Participating Facilities: CMS Needs to Reexamine Its Approach for Funding State Oversight of Health Care Facilities. GAO-09-64. Washington, D.C.: February 13, 2009.
Nursing Homes: Federal Monitoring Surveys Demonstrate Continued Understatement of Serious Care Problems and CMS Oversight Weaknesses. GAO-08-517. Washington, D.C.: May 9, 2008.
Nursing Homes: Efforts to Strengthen Federal Enforcement Have Not Deterred Some Homes from Repeatedly Harming Residents. GAO-07-241. Washington, D.C.: March 26, 2007.
Nursing Homes: Complaint Investigation Processes Often Inadequate to Protect Residents. GAO/HEHS-99-80. Washington, D.C.: March 22, 1999.
California Nursing Homes: Care Problems Persist Despite Federal and State Oversight. GAO/HEHS-98-202. Washington, D.C.: July 27, 1998.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Approximately 15,600 nursing homes participating in the Medicare and Medicaid programs provide care to 1.4 million residents—a population of elderly and disabled individuals. To help ensure nursing home residents receive quality care, CMS defines quality standards that homes must meet to participate in the Medicare and Medicaid programs. To monitor compliance with these standards, CMS enters into agreements with state survey agencies to conduct on-site surveys of the state's homes and also collects other data on nursing home quality. Although CMS and others have reported some potential improvements in nursing home quality, questions have been raised about nursing home quality and weaknesses in CMS oversight. This statement summarizes GAO's October 2015 report, GAO-16-33 . Specifically, it describes (1) trends in nursing home quality through 2014, and (2) changes CMS had made to its oversight activities as of October 2015. It also includes the status of GAO's recommendations associated with these findings. GAO recently obtained information from CMS officials about steps they have taken to implement the 2015 GAO recommendations. What GAO Found GAO's October 2015 report found mixed results in nursing home quality based on its analysis of trends reflected in key sources of quality data that the Centers for Medicare & Medicaid Services (CMS) collects. An increase in reported consumer complaints suggested that consumers' concerns about nursing home quality increased. In contrast, trends in care deficiencies, nurse staffing levels, and clinical quality measures indicated potential improvement in nursing home quality. GAO also found that data issues complicated CMS's ability to assess nursing home quality trends. For example: CMS allowed states to use different survey methodologies to measure deficiencies in nursing home care, which complicates the ability to make comparisons nationwide. GAO recommended that CMS implement a standardized survey methodology across states, and in November 2017 CMS completed national implementation. CMS did not regularly audit selected quality data including nurse staffing and clinical data (for example, on residents with pressure ulcers) to ensure their accuracy. GAO recommended CMS implement a plan for ongoing auditing of quality data. The agency concurred with this recommendation and has been conducting regular audits of nurse staffing data but does not have a plan to audit other quality data on a continuing basis. GAO continues to believe that regular audits are needed to ensure the accuracy and comparability of nursing home quality data. GAO's October 2015 report found that CMS had made numerous modifications to its nursing home oversight activities. However, CMS had not monitored how the modifications might affect its ability to assess nursing home quality. GAO found that some modifications expanded or added new activities—such as creating new training for state surveyors on unnecessary medication usage—while others reduced existing activities. For example, CMS reduced the number of nursing homes participating in the Special Focus Facility program—which provides additional oversight of certain homes with a history of poor performance—by over half from 2013 to 2014. CMS officials told GAO that some of the reductions to oversight activities were in response to an increase in oversight responsibilities and a limited number of staff and financial resources. 
To help ensure modifications do not adversely affect CMS's ability to assess nursing home quality, GAO recommended that CMS monitor modifications of essential oversight activities to better understand the effects on nursing home quality oversight. CMS concurred with this recommendation and told us it has begun to take steps to address it. Such monitoring is important for CMS to better understand how its oversight modifications affect nursing home quality and to improve its oversight given limited resources.
GAO-19-120T
Background
DOD has reported that more than a decade of conflict, budget uncertainty, and reductions in force structure have degraded military readiness; in response, the department has made rebuilding the readiness of the military forces a priority. The 2018 National Defense Strategy emphasizes that restoring and retaining readiness across the entire spectrum of conflict is critical to success in the emerging security environment. Nevertheless, DOD reported that readiness of the total military force remains low and has remained so since 2013. Our work has shown that Air Force readiness, in particular, has steadily declined due to a persistent demand for forces, a decline in equipment availability and experienced maintenance personnel, the effect of high deployment rates on units' ability to conduct needed training, and a smaller inventory of aircraft.
DOD has made department-wide progress in developing a plan to rebuild readiness of the military force. In August 2018, we reported that the Office of the Secretary of Defense has developed a Readiness Recovery Framework that the Department is using to guide the military services' efforts and plans to regularly assess, validate, and monitor readiness recovery. According to officials, the Office of the Secretary of Defense and the military services are currently revising readiness goals and accompanying recovery strategies, metrics, and milestones to align with the 2018 National Defense Strategy and Defense Planning Guidance. However, additional work remains to ensure that the actions DOD is taking will ultimately achieve overall readiness goals.
DOD's readiness rebuilding efforts are occurring in a challenging context that requires the department to make difficult decisions regarding how best to address continuing operational demands while preparing for future challenges. An important aspect of this, across all of the military services, is determining an appropriate balance between maintaining and upgrading legacy weapon system platforms currently in operational use and procuring platforms able to overcome rapidly advancing future threats. Air Force leaders have stated that striking such a balance is exceptionally difficult. While each of the military services, including the Air Force, must grapple with these choices, senior leaders have called for immediate readiness rebuilding with a particular focus on aviation. In a memorandum on September 17, 2018, the Secretary of Defense noted that DOD faces shortfalls in aviation squadrons across the force, with the aviation inventory and supporting infrastructure suffering from systemic underperformance and unrealized capacity. In order to focus on meeting DOD's most critical priorities first, the Secretary of Defense emphasized the need to rebuild readiness. As such, the Secretary directed the Air Force to achieve a minimum of 80 percent mission capable rates for fiscal year 2019 for the F-35, F-22, and F-16, while simultaneously reducing these platforms' operating and maintenance costs every year starting in fiscal year 2019.
Air Force Faces Several Interrelated Management and Readiness Challenges
Our prior work has identified management and readiness challenges in four interrelated areas—personnel, equipment, training, and organization and utilization—and we have made recommendations to help the Air Force rebuild the readiness of its existing fleet.
Personnel: Pilot and Aircraft Maintainer Shortfalls Have Impeded Readiness Recovery
The Air Force has reported that manpower shortfalls, particularly among skilled pilots and maintainers, are a primary challenge to rebuilding readiness. As we have previously reported, developing fighter pilots requires a significant investment of time and funding. According to Air Force officials, a fighter pilot requires approximately 5 years of training to be qualified to lead flights, at a cost of between about $3 million and $11 million depending on the specific type of aircraft. In April 2018, we reported that according to Air Force pilot staffing level and authorizations data for fiscal years 2006 through 2017, the Air Force had fewer fighter pilots than authorizations for 11 of those 12 years (see fig. 1). This gap grew from 192 fighter pilots (5 percent of authorizations) in fiscal year 2006 to 1,005 (27 percent) in fiscal year 2017. According to briefing documents prepared by the Air Force, this gap was concentrated among fighter pilots with fewer than 8 years of experience. The Air Force forecasted that the fighter pilot gap will persist over time, even as the Air Force takes steps to train more fighter pilots and improve retention.
Air Force officials identified multiple factors that led to low numbers of fighter pilots. For example, the military services trained fewer fighter pilots than targeted over the last decade. In fiscal years 2007 through 2016, the Air Force trained 12 percent fewer new fighter pilots than the targeted amount. In our April 2018 report, we found that the military services had not reevaluated squadron requirements to reflect increased fighter pilot workload and the emergence of unmanned aerial systems. Fighter pilots and squadron leaders from each of the military services we interviewed at the time consistently told us that the fighter pilot occupation has significantly changed in recent years due to changes in fighter aircraft tactics and technology, additional training requirements, and the removal of administrative support positions from squadrons. Without updating squadron requirements to reflect this growing administrative burden on fighter pilots, the currently identified differences between fighter pilot numbers and authorizations may be understated. By contrast, without updating future fighter pilot requirements to take into account changing roles and missions—in particular the increasing role of unmanned aerial systems in combat operations—forecasted fighter pilot gaps may be overstated. In short, we concluded that reevaluating fighter pilot requirements is a key first step to help the military services, including the Air Force, clearly determine the magnitude of the gaps and target strategies to meet their personnel needs. In our April 2018 report, we recommended that the Air Force reevaluate fighter pilot squadron requirements to ensure it has the pilots necessary for all missions. DOD concurred with this recommendation.
The Air Force is also trying to manage a shortage of aircraft maintainer personnel—both uniformed personnel and depot civilians. In September 2018, we found that the Air Force reported losing experienced maintainers, either to retirement or to other programs such as the F-35 Lightning II (F-35).
For example, we reported that the Air Force’s C-17, which is a long-range, heavy logistics transport aircraft, requires depot modifications to keep it viable, but there was a shortage of depot maintainer personnel due to attrition, inability to retain skilled workers, and hiring freezes. The Air Force has several initiatives underway, including hiring additional maintainer personnel and temporarily transitioning active-duty maintenance units from some legacy aircraft. As of August 2018, the Air Force had requested an increased end strength of 8,000 personnel to fill critical personnel needs in maintenance and pilots. Officials stated that progress was being made in increasing end strength and hiring additional personnel, which should address these challenges. However, according to Air Force officials, it may take several years before newly hired maintainer personnel will have the training and experience they need to improve aircraft availability rates. We have work underway to examine the Air Force’s management of its aircraft maintainer workforce and DOD depot skill gaps and plan to report on these issues over the next 6 months. Equipment: Aircraft Availability Has Been Limited by Aging Aircraft, Costly Maintenance, and Diminished Supply Support Air Force aircraft availability has been limited by challenges associated with aging aircraft, maintenance, and supply support. According to the Air Force, the average age of the fleet is 28 years. The average ages of the B-52 strategic bomber and the KC-135 tanker each exceed 50 years, and the Air Force expects to continue to use these aircraft for decades. The Air Force spends billions of dollars each year to sustain its fixed-wing aircraft fleet—comprised of both legacy and new aircraft—which needs expensive logistics support, including maintenance and repair, to meet its availability goals. We reported in September 2018 that from fiscal year 2011 through 2016, the Air Force generally did not meet aircraft availability goals while it continued to accrue increased maintenance costs. Figure 2 summarizes the sustainment challenges we reported that face selected Air Force aircraft. Sustainment challenges are not just an issue for older aircraft, but represent an enduring challenge for the Air Force. The F-35—which is intended to replace a variety of legacy fighter aircraft in the Air Force and more broadly represents the future of tactical aviation for DOD—has projected sustainment costs of over $1 trillion over a 60-year life cycle. In October 2017, we reported that DOD’s projected operating and support costs estimate for the F-35 rose by 24 percent from fiscal year 2012 to fiscal year 2016 and are not fully transparent to the military services. In October 2017, we also reported that the F-35 fleet faced sustainment challenges that pose risks to its ability to meet current and future warfighter readiness requirements. The Air Force planned to procure more than 1,700 F-35 aircraft and, as the largest participant in the F-35 program, its readiness could be disproportionately affected by the sustainment challenges facing this program. In particular, DOD’s capabilities to repair F-35 parts at military depots were 6 years behind schedule, which resulted in average part repair times of 172 days—twice that of the program’s objective. These repair backlogs have contributed to significant F-35 spare parts shortages—from January to August 7, 2017, F-35 aircraft were unable to fly 22 percent of the time because of parts shortages. 
As a result, the Air Force had generally not met its aircraft availability goals for its fielded F-35 aircraft (see fig. 3 for Air Force personnel performing maintenance on the F-35). Our work has shown that these challenges are largely the result of sustainment plans that do not fully include key requirements or timely and sufficient funding. In our October 2017 report, we recommended, among other things, that DOD revise sustainment plans to ensure that they include the key requirements and decision points needed to fully implement the F-35 sustainment strategy and align funding plans to meet those requirements. DOD concurred with this recommendation, and DOD officials report that they are focusing actions and resources toward achieving key production, development, and sustainment objectives by 2025. In addition, the conference report accompanying a bill for fiscal year 2019 defense appropriations directed a higher appropriation amount for the Air Force's aircraft procurement than DOD requested in its budget. This appropriation may create more demand on the already strained sustainment enterprise, for which DOD has not always provided timely funding (for example, funding for spare parts).
Training: Units Are Challenged To Achieve Full Spectrum Readiness
The Air Force has identified the need to ensure a full-spectrum capable force that can successfully perform missions addressing a broad range of current and emerging threats; however, the Air Force has had difficulty training for full spectrum readiness. For more than a decade, the Air Force focused its training on supporting operations in the Middle East, including Iraq and Afghanistan. Commanders established training requirements that they deemed necessary to prepare aircrews to conduct missions in these locations—such as close air support to ground forces—limiting training for other missions. In September 2016, based on our analysis of data on the completion of annual training, we found that combat fighter squadrons were generally able to complete mission training requirements for ongoing contingency operations, but were unable to meet annual training requirements across the full range of missions. Wing and squadron commanders we interviewed at the time cited several common limitations related to the challenges discussed in this testimony that affected the ability of their squadrons to complete training across the full range of missions, including the maintenance unit's ability to provide adequate numbers of aircraft for training, adversary air tasking, and manpower shortfalls in the squadrons.
We also reported in September 2016 that F-22 and F-35 squadrons faced training range limitations. F-22 squadron commanders told us that the airspace available limits their ability to train for their more complex missions, including offensive counter air and defensive counter air missions. Additionally, the commanders we interviewed at the time for squadrons flying F-22 and F-35 aircraft told us that limits in training range capabilities, such as threat replicators and targets, affected the training completed at smaller regional training ranges, as well as at larger training ranges such as the Utah Test and Training Range and the Nevada Test and Training Range. According to these officials, the training ranges lacked many of the more advanced threat replication systems that can challenge F-35 and F-22 capabilities and provide effective training across their full range of missions.
The 2018 National Defense Strategy cites, as the department’s principal priority, the need to prepare for threats from advanced adversaries due to the magnitude of the threat they pose. Further, the Air Force reports that it will confront an increasingly complex security environment in the coming years that will demand a wider range of skill sets and different capabilities than are currently being employed. For example, aircrews may be called upon to conduct missions that require freedom of maneuver in highly-contested air spaces. However, in our September 2016 report, we found that the Air Force has used the same underlying assumptions to establish its annual training requirements from 2012 through 2016, which may not reflect current and emerging training needs. Specifically, the total annual live-fly training sorties by aircraft, the criteria for designating aircrews as experienced or inexperienced, and the mix between live and simulator training remained the same from 2012 through 2016. We concluded that without fully reassessing the assumptions underlying its training requirements, the Air Force could not be certain that its annual training plans are aligned with its stated goals to ensure a full-spectrum capable force that can successfully achieve missions across a broad range of current and emerging threats. We recommended that the Air Force reassess its annual training requirements and make any appropriate adjustments to its future training plans to ensure that its forces can accomplish a full range of missions. The Air Force has a number of efforts under way to study or address some of the factors that limit the ability of fighter squadrons to meet annual training requirements. Organization and Utilization: Air Force Management of Its Forces Can Diminish Existing Capability The Air Force’s management of its limited force structure can also exacerbate some of the problems discussed above, as we found for the F-22 fleet. The F-22, widely regarded as the best air superiority fighter aircraft in the world, is an integral part of the U.S. military’s ability to defeat high-end adversaries (See fig. 4 for an image of the F-22). To meet its assigned air superiority responsibility, the Air Force is to provide the combatant commanders with both mission capable aircraft and pilots who are trained to fly those aircraft in the expected threat environments. However, in July 2018, we found that Air Force organization and utilization of its small fleet of F-22s has reduced its ability to provide these two elements, thereby further limiting this important capability. Specifically, we found that the Air Force’s organization of its small F-22 fleet has not maximized the availability of these 186 aircraft. Availability was constrained by maintenance challenges and unit organization. For example, maintaining the stealth coating on the outside of the F-22 aircraft was time consuming and significantly reduced the aircraft’s availability for missions. Maintenance availability challenges were exacerbated by the Air Force’s decision to organize the F-22 fleet into small units of 18 or 21 aircraft per squadron and one or two squadrons per wing. Traditional fighter wings have three squadrons per wing, with 24 aircraft in each squadron, which creates maintenance efficiencies because people, equipment, and parts can be shared, according to Air Force officials. Further, the Air Force organized F-22 squadrons to operate from a single location. 
However, it generally deployed only a part of a squadron, and the remaining part struggled to keep aircraft available for missions at home. Larger, traditional Air Force squadrons and deployable units provide a better balance of equipment and personnel, according to service officials. The Air Force had not reassessed the structure of its F-22 fleet since 2010 and may be forgoing opportunities to improve the availability of its small yet critical F-22 fleet, and better support combatant commander air superiority needs in high threat environments.

Further, we found that the Air Force's utilization of its F-22 fleet limited pilot opportunities to train for air superiority missions in high threat environments. To complete the annual training requirements for air superiority missions, F-22 pilots must train almost the entire year. However, F-22 pilots were not meeting their minimum yearly training requirements for air superiority missions, according to Air Force training reports and service officials. Moreover, using F-22s for exercises and operational missions that do not require the F-22's unique capabilities interrupted pilot training and led to reduced proficiency. For example, F-22 units were often directed to participate in partnership-building exercises. However, during these exercises, F-22 pilots may be restricted from flying the F-22 the way they would fly it in combat—due to security concerns about exposing the F-22's unique capabilities. These restrictions not only limited the value of the exercises, but also could result in pilots developing bad habits, according to Air Force officials. The Air Force also uses F-22s to support alert missions—that is, a mission that requires certain bases to have jets ready at all times to respond to threats from civil or military aviation. The alert mission does not require the advanced capabilities of the F-22, but we reported that because there are no other operational Air Force fighter squadrons based at the F-22 locations in Alaska and Hawaii, the alert mission fell on the F-22 units. Pilots and aircraft assigned to the alert mission could not be used for any other purposes, limiting opportunities for pilots to enhance air superiority skills. Unless the Air Force takes steps to assess and make necessary adjustments to the current organization and use of its F-22s, F-22 units are likely to continue to experience aircraft availability and pilot training rates that are below what they could be. As a result, the Air Force may incur increased risks in future operations in high threat areas. In July 2018, we recommended that the Air Force reassess its F-22 organizational structure and identify ways to increase F-22 pilot training opportunities for high-end missions to reduce risk to future operations. DOD concurred with both recommendations.

Air Force Will Need to Balance Near-term Readiness Recovery with Plans to Grow and Modernize the Force

In September 2018, the Secretary of the Air Force described the need to grow the number of Air Force squadrons from 312 to 386—a 24 percent increase—between fiscal years 2025 and 2030 in order to meet persistent operational demands and address the challenges identified in the National Defense Strategy. However, the details and costs of such growth are as yet unknown, and the Air Force's plans will have to compete with those of other military services looking to increase their force structure and with major defense capabilities that require recapitalization.
For example, over the next three decades, the Navy plans to grow its fleet by nearly 25 percent—at an estimated cost of about $800 billion—and modernizing and maintaining the nation's nuclear arsenal could cost $1.2 trillion over the same timeframe. All of these investments would need to be made amid a deteriorating national fiscal picture. Even if it grows, the Air Force will be dependent on the force of today for decades to come and will need to stay focused on rebuilding its readiness. Many of the Air Force's fourth-generation fighters will be part of the force structure for the next decade or more, and the Air Force plans to retain the F-22 aircraft until 2060. In addition, the Air Force proposed divesting the A-10 to make budgetary room for more modern aircraft. However, as we reported in August 2016, the Air Force did not fully examine the implications of this course of action and could not demonstrate how it would meet the multiple missions being performed by the aging A-10. Therefore, focusing on rebuilding the existing force will be crucial to positioning the Air Force for the future. While these challenges are particularly acute in the Air Force, the Air Force is not alone among the military services. Given persistently low readiness levels across the military, we have called for a comprehensive readiness rebuilding plan for the entire Department of Defense to guide rebuilding efforts, including setting clear goals and identifying resources required to meet those goals for all services, including the Air Force.

In sum, as it plans for the future, the Air Force will need to balance the rebuilding of its existing force with its desire to grow and modernize. We have made a number of recommendations—with which the Air Force has generally concurred but most of which have not yet been implemented—that provide a partial roadmap to address important readiness challenges. Implementing our recommendations to reevaluate fighter pilot squadron requirements, revise F-35 sustainment plans, reassess annual training requirements, and examine how the Air Force organizes and utilizes its F-22 fleet is a necessary step toward meeting current and future needs and can assist the Air Force moving forward. In addition, sustained management attention and continued congressional oversight will be needed to ensure that the Air Force demonstrates progress in addressing its personnel, equipment, training, and organization and utilization challenges.

Chairman Sullivan, Ranking Member Kaine, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time.

GAO Contact and Staff Acknowledgments

If you or your staff have questions about this testimony, please contact John Pendleton, Director, Defense Capabilities and Management, at (202) 512-3489 or pendletonj@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Chris Watson, Assistant Director; Nick Cornelisse, Amie Lesser, Shari Nikoo, Michael Silver, Nicole Volchko, and Lillian Yob.

Appendix I: Implementation Status of Key Prior GAO Recommendations Related to Air Force Readiness

Over the past three years, we issued several reports related to Air Force readiness that are cited in this statement. Table 1 summarizes the status of the 14 key recommendations related to Air Force readiness that we have made since 2016.
The Department of Defense (DOD) has implemented 1 of these recommendations. For each of the reports, the specific recommendations and their implementation status are summarized in tables 2 through 7. Related GAO Products Report numbers with a C or RC suffix are classified. Report numbers with a SU suffix are sensitive but unclassified. Classified and sensitive but unclassified reports are available to personnel with the proper clearances and need to know, upon request. Weapon System Sustainment: Selected Air Force and Navy Aircraft Generally Have Not Met Availability Goals, and DOD and Navy Guidance Need to Be Clarified. GAO-18-678. Washington, D.C.: September 10, 2018. Military Readiness: Air Force Plans to Replace Aging Personnel Recovery Helicopter Fleet. GAO-18-605. Washington, D.C.: August 16, 2018. Military Aviation Mishaps: DOD Needs to Improve Its Approach for Collecting and Analyzing Data to Manage Risks. GAO-18-586R. Washington, D.C.: August 15, 2018. Military Readiness: Update on DOD’s Progress in Developing a Readiness Rebuilding Plan. GAO-18-441RC. Washington, D.C.: August 10, 2018. (SECRET) Force Structure: F-22 Organization and Utilization Changes Could Improve Aircraft Availability and Pilot Training. GAO-18-190. Washington, D.C.: July 19, 2018. Military Personnel: Collecting Additional Data Could Enhance Pilot Retention Efforts. GAO-18-439. Washington, D.C.: June 21, 2018. Air Force Readiness: Changes to Readiness Reports Could Help Stakeholders Take More Informed Actions. GAO-18-65C. Washington, D.C.: June 18, 2018. (SECRET) Force Structure: Changes to F-22 Organization and Utilization Could Improve Aircraft Availability and Pilot Training. GAO-18-120C. Washington, D.C.: April 27, 2018. (SECRET//NOFORN) Military Readiness: Clear Policy and Reliable Data Would Help DOD Better Manage Service Members’ Time Away from Home. GAO-18-253. Washington, D.C.: April 25, 2018. Warfighter Support: DOD Needs to Share F-35 Operational Lessons Across the Military Services. GAO-18-464R. Washington, D.C.: April 25, 2018. Weapon System Sustainment: Selected Air Force and Navy Aircraft Generally Have Not Met Availability Goals, and DOD and Navy Guidance Need Clarification. GAO-18-146SU. Washington, D.C.: April 25, 2018. Military Personnel: DOD Needs to Reevaluate Fighter Pilot Workforce Requirements. GAO-18-113. Washington, D.C.: April 11, 2018. Military Aircraft: F-35 Brings Increased Capabilities, but the Marine Corps Needs to Assess Challenges Associated with Operating in the Pacific. GAO-18-79C. Washington, D.C.: March 28, 2018. (SECRET) F-35 Aircraft Sustainment: DOD Needs to Address Challenges Affecting Readiness and Cost Transparency. GAO-18-75. Washington, D.C.: October 26, 2017. Department of Defense: Actions Needed to Address Five Key Mission Challenges. GAO-17-369. Washington, D.C.: June 13, 2017. Air Force Training: Further Analysis and Planning Needed to Improve Effectiveness. GAO-16-864. Washington, D.C.: September 19, 2016. Military Readiness: DOD’s Readiness Rebuilding Efforts May Be at Risk without a Comprehensive Plan. GAO-16-841. Washington, D.C.: September 7, 2016. Force Structure: Better Information Needed to Support Air Force A-10 and Other Future Divestment Decisions. GAO-16-816. Washington, D.C.: August 24, 2016. Air Force Training: Further Analysis and Planning Needed to Improve Effectiveness. GAO-16-635SU. Washington, D.C.: August 16, 2016. Force Structure: Better Information Needed to Support Air Force A-10 and Other Future Divestment Decisions. GAO-16-525C. 
Washington, D.C.: July 12, 2016. (SECRET//NOFORN) Military Readiness: DOD’s Readiness Rebuilding Efforts May Be at Risk without a Comprehensive Plan. GAO-16-534C. Washington, D.C.: June 30, 2016. (SECRET) Air Force: Service Faces Challenges to Rebuilding Readiness. GAO-16-482RC. Washington, D.C.: May 25, 2016. (SECRET) Force Structure: Performance Measures Needed to Better Implement the Recommendations of the National Commission on the Structure of the Air Force. GAO-16-405. Washington, D.C.: May 6, 2016. F-35 Sustainment: DOD Needs a Plan to Address Risks Related to Its Central Logistics System. GAO-16-439. Washington, D.C.: April 14, 2016. F-35 Sustainment: Need for Affordable Strategy, Greater Attention to Risks, and Improved Cost Estimates. GAO-14-778. Washington, D.C.: September 23, 2014. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study The 2018 National Defense Strategy emphasizes that restoring and retaining readiness across the entire spectrum of conflict is critical to success in the emerging security environment. Air Force readiness has steadily declined primarily due to the persistent demand on a fleet that has aged and decreased in size since the 1990s. The Air Force is working to both rebuild the readiness of its forces and modernize its aging fleet to meet future threats. However, according to the Air Force, its readiness goals will take years to achieve as it continues to be challenged to rebuild readiness amid continued operational demands. This statement provides information on Air Force (1) readiness and management challenges including personnel, equipment, training, and organization and utilization, and (2) plans to grow and modernize its force in the context of readiness recovery across DOD. Also, GAO summarizes recommendations to address these challenges and actions taken by the Air Force. This statement is based on previously published work since 2016 related to Air Force readiness challenges, fighter pilot workforce requirements, weapon sustainment, aviation training, and force structure. What GAO Found GAO's prior work has highlighted that the Air Force faces management and readiness challenges in four interrelated areas: Personnel: The Air Force has reported that pilot and aircraft maintainer shortfalls are a key challenge to rebuilding readiness. GAO found in April 2018 that the Air Force had fewer fighter pilots than authorizations for 11 of 12 years, from fiscal years 2006 through 2017. Even as unmanned aerial systems had become more prevalent and fighter pilot workloads had increased, the Air Force had not reevaluated fighter squadron requirements. GAO recommended that the Air Force reevaluate fighter pilot squadron requirements to ensure it has the pilots necessary for all missions. Equipment: Air Force aircraft availability has been limited by challenges associated with aging aircraft, maintenance, and supply support. GAO reported in September 2018 that, from fiscal year 2011 through 2016, the Air Force generally did not meet availability goals for key aircraft. Further, in October 2017 GAO found F-35 availability was below service expectations and sustainment plans did not include key requirements. GAO recommended that DOD revise F-35 sustainment plans to include requirements and decision points needed to implement the F-35 sustainment strategy. Training: The Air Force has identified the need to ensure its forces can successfully achieve missions to address a broad range of current and emerging threats. However, GAO reported in September 2016 that Air Force combat fighter squadrons did not complete annual training requirements due to aircraft availability and training range limitations, and had used the same underlying assumptions for its annual training requirements from 2012 to 2016. GAO recommended that the Air Force reassess its annual training requirements to ensure its forces can accomplish a full range of missions. Organization and Utilization: Air Force management of its force structure can also exacerbate readiness challenges. GAO found in July 2018 that the Air Force's organization of its small F-22 fleet had not maximized aircraft availability, and that its utilization of F-22s reduced opportunities for pilots to train for missions in high-threat environments. 
GAO found that unless the Air Force assesses the organization and use of its F-22s, F-22 units are likely to continue to experience aircraft availability and pilot training rates that are below what they could be. GAO recommended that the Air Force reassess its F-22 organizational structure to reduce risk to future operations.

Looking to the future, the Air Force will have to balance the rebuilding of its existing force with its desire to grow and modernize. To meet current and future demands, the Air Force has stated that it needs to have more squadrons. However, the costs of such growth are as yet unknown, and the Air Force's plans will have to compete with those of other military services looking to increase their force structure and recapitalize their forces. Even with growth, the Air Force would be dependent on the force of today for decades to come and will need to stay focused on rebuilding the readiness of existing forces. Addressing GAO's recommendations is a necessary step to meet current and future needs and can assist the Air Force moving forward.

What GAO Recommends

GAO has made 14 recommendations in prior unclassified work described in this statement. DOD generally concurred with most of them and has implemented 1. Continued attention to these recommendations can assist and guide the Air Force moving forward as it seeks to rebuild the readiness of its forces.
Select Agent Program Does Not Fully Meet Key Elements of Effective Oversight or Have Joint Strategic Planning Documents to Guide Its Efforts The Select Agent Program does not fully meet key elements of effective oversight. In particular, the program has oversight shortcomings related to each of our five key elements: independence, performing reviews, technical expertise, transparency, and enforcement. In addition, the program does not have joint strategic planning documents to guide its oversight efforts, such as a joint strategic plan and workforce plan. It did, however, begin taking steps to develop a joint strategic plan during the summer of 2017. First, regarding independence, the Select Agent Program is not structurally distinct and separate from all of the laboratories it oversees because the two components of the Select Agent Program are located in CDC and APHIS, both of which also have high-containment laboratories registered with the program. Many experts at our meeting raised concerns that the Select Agent Program cannot be entirely independent in its oversight of CDC and APHIS laboratories because the Select Agent Program is composed of divisions of those agencies. To help reduce conflicts of interest, the program has taken steps such as having APHIS lead inspections of CDC laboratories. However, it has generally done so in response to concerns raised by others. The program itself has not formally assessed all potential risks posed by its current structure and the effectiveness of its mechanisms to address those risks. The Office of Management and Budget’s Circular A-123 requires federal agencies to integrate risk management activities into their program management to help ensure they are effectively managing risks that could affect the achievement of agency objectives. In addition, federal internal control standards state that management should identify, analyze, and respond to risks related to achieving defined objectives. Without (1) regularly assessing the potential risks posed by the program’s current structure and the effectiveness of its mechanisms to address them and (2) taking actions as necessary to ensure any identified risks are addressed, the program may not be aware of or effectively mitigate impairments to its independence that could affect its ability to achieve its objectives. Second, regarding the ability to perform reviews, we found that the Select Agent Program performs several types of reviews to ensure compliance with regulatory and program requirements. However, the program may not target the highest-risk activities in its inspections, in part because it has not formally assessed which activities pose the highest risk to biological safety and security. For example, many experts at our meeting and laboratory representatives we interviewed raised concerns about the amount of time inspectors spend assessing compliance with inventory controls (e.g., by counting and examining vials containing select agents) and reviewing inventory records during the inspection process, which takes time away from inspecting other aspects of biological safety and security. Experts at our meeting said that these activities do little to reduce the risk of theft of select agents (a security concern) because samples could be clandestinely removed from vials and replicated without being detected by the inventory controls currently in place. 
Further, other laboratory representatives told us that activities to assess compliance with certain program requirements, such as time-consuming reviews of records, did little to reduce risk and were unnecessarily burdensome to both researchers and inspectors. These inspection activities are generally intended to address biological security concerns; however, recent high-profile incidents at registered laboratories have concerned biological safety rather than security. To improve the inspection process and identify trends and associations between inspection findings and risk, a 2015 internal review of the CDC component of the Select Agent Program recommended that the CDC and APHIS components of the program work together to analyze inspection and investigation data. According to program officials, they have not yet addressed the recommendation because they do not currently have adequate tools to do so, but the program is transitioning to a new database that will enhance their ability to identify trends and associations and thereby guide improvements to the inspection process. However, the program did not provide a plan for when or how it will carry out these analyses to improve the inspection process. Federal internal control standards state that management should identify, analyze, and respond to risks related to achieving defined objectives. Without developing and implementing a plan to identify which laboratory activities carry the highest biological safety and security risks and to respond to those risks by aligning inspections and other oversight efforts to target those activities, the Select Agent Program will not have assurance that it is effectively balancing the potential safety and security gains from its oversight efforts against the use of program resources and the effect on laboratories' research.

We also found that the Select Agent Program did not fully meet the other three key elements of effective oversight: technical expertise, transparency, and enforcement. For example, although the program has taken steps to hire additional staff and enhance the technical expertise of its staff, workforce and training gaps remain. In addition, although the program has increased transparency about registered laboratories and violations of the select agent regulations to the public and registered laboratories since 2016, the information it shares is limited and there is no consensus about what additional information could be shared, given security concerns. Lastly, although the program has authority to enforce compliance with program requirements, it is still working to address past concerns about the need for greater consistency and clarity in actions it takes in exercising this authority.

In addition to not fully meeting the five key elements of effective oversight, we found that the Select Agent Program does not have joint strategic planning documents to guide its shared oversight efforts across CDC and APHIS. For example, the program does not have a joint mission statement to collectively define what the program seeks to accomplish through its oversight. It also does not yet have a strategic plan. Agencies can use strategic plans to set goals and identify performance measures for gauging progress towards those goals. Strategic plans can also outline how agencies plan to collaborate with each other to help achieve goals and objectives.
The program began taking steps to develop a joint strategic plan during the course of our review and, in August 2017, began soliciting bids from contractors for the plan’s development. The statement of work for the contract stipulates that the contractor shall develop guiding principles for the Select Agent Program along with a mission statement and strategic goals and objectives, among other requirements. However, it does not have any requirements related to development of a joint workforce plan. We have found in the past that agencies’ strategic workforce planning should be clearly linked to the agency’s mission and long-term goals developed during the strategic planning process. Developing a joint workforce plan that assesses workforce and training needs for the program as a whole would help the program to better manage fragmentation by improving how it leverages resources to ensure all workforce and training needs are met. Leveraging resources is especially important given fiscal constraints. In our report, we recommended that CDC and APHIS take several steps to address these findings. First, we made five recommendations to improve independence, including that CDC and APHIS regularly assess the potential risks posed by the program’s structure and the effectiveness of its mechanisms to address those risks, and take actions as necessary to ensure any identified risks are addressed so that impairments to independence do not affect its ability to achieve its objectives. Second, to improve the ability to perform reviews, we recommended that the directors of the Select Agent Program work together to develop and implement a plan to identify which laboratory activities carry the highest biological safety and security risks and to respond to those risks by aligning inspections and other oversight efforts to target those activities. We also made several other recommendations, including recommending that the directors of the Select Agent Program develop a joint workforce plan that assesses workforce and training needs for the program as a whole. Selected Countries and Regulatory Sectors Employ Other Approaches to Promote Effective Oversight Selected countries and regulatory sectors employ approaches to promote effective oversight that sometimes differ from those of the Select Agent Program by, for example, having regulatory bodies that are structurally independent from the entities they oversee or taking a risk-based approach to performing reviews. To illustrate, with regard to independence, Great Britain’s Health and Safety Executive, whose mission is to protect worker and public health and safety and which oversees laboratories that work with pathogens, is an independent government agency. According to officials from the Health and Safety Executive and laboratory representatives, one strength of this approach is that it avoids potential organizational conflicts of interest because none of the laboratories it oversees are part of the same agency. Some other regulatory sectors in the United States, including the Nuclear Regulatory Commission (NRC), are also structurally independent from regulated facilities as a mechanism to ensure independence. Prior to the creation of NRC in 1974, the U.S. Atomic Energy Commission was responsible for both promotion and oversight of the nuclear industry. The Energy Reorganization Act of 1974 established NRC as a separate, independent entity. 
According to a Senate committee report, this was a response to growing criticism that there was a basic conflict between the U.S. Atomic Energy Commission's regulation of the nuclear power industry and its development and promotion of new technology for the industry.

Related to the ability to perform reviews, regulators in Great Britain and Canada apply a risk-based approach by targeting laboratories with a documented history of performance issues or those conducting higher-risk activities. In both Great Britain and Canada, the organizations that oversee laboratories generally focus their oversight on (1) biological safety, and (2) regulation of all potentially hazardous pathogens in laboratories. In contrast, the Select Agent Program originated from security-related concerns and regulates only those pathogens identified on the U.S. select agent list and no other pathogens that may be handled in high-containment laboratories but are not select agents, such as West Nile virus. Other differences we found in approaches include relying on scientists and other laboratory personnel to have requisite technical expertise on the pathogens and activities in their laboratories, sharing incident information on their public websites, and having prosecutorial authority when incidents occur.

In conclusion, CDC and APHIS share a critical role in ensuring that important research on select agents can be conducted in high-containment laboratories in a safe and secure manner. The Select Agent Program has made a number of improvements over the past few years, such as hiring additional staff and improving training to enhance expertise. Nevertheless, the program does not fully meet all key elements of effective oversight, and more is needed to develop joint strategic plans to collectively guide its shared oversight efforts. In our prior work, we have found that existing federal oversight of high-containment laboratories is fragmented and largely self-policing, among other things. Our October 2017 report, in combination with these past findings, continues to raise questions about whether the current government framework and oversight are adequate.

Vice Chairman Griffith, Ranking Member DeGette, and Members of the Subcommittee, this concludes our prepared statement. We would be pleased to respond to any questions that you may have at this time.

GAO Contacts and Staff Acknowledgments

If you or your staff have any questions about this statement, please contact Mary Denigan-Macauley, Ph.D., Acting Director, Health Care, at (202) 512-7114 or deniganmacauleym@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Sushil Sharma, Ph.D., Dr.PH (Assistant Director); Amy Bowser; Caitlin Dardenne, Ph.D.; John Neumann; Cynthia Norris; Timothy M. Persons, Ph.D.; and Lesley Rinner. Staff who made key contributions to the report(s) cited in the statement are identified in the source products.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Safety lapses have occurred at laboratories in the United States that conduct research on select agents—such as Ebola virus or anthrax bacteria—that may cause serious or lethal infection in humans, animals, or plants, raising concerns about whether oversight is effective. This statement summarizes information contained in GAO's October 2017 report, titled High-Containment Laboratories: Coordinated Actions Needed to Enhance the Select Agent Program's Oversight of Hazardous Pathogens ( GAO-18-145 ). What GAO Found The Federal Select Agent Program—jointly managed by the Departments of Health and Human Services (HHS) and Agriculture (USDA)—oversees laboratories' handling of certain hazardous pathogens known as select agents. However, the program does not fully meet all key elements of effective oversight. For example, the program is not structurally independent from all laboratories it oversees and has not assessed risks posed by its current structure or the effectiveness of mechanisms it has to reduce organizational conflicts of interest. Without conducting such assessments and taking actions as needed to address risks, the program may not effectively mitigate impairments to its independence. In addition, some experts and laboratory representatives GAO interviewed raised concerns that the program's reviews may not target the highest-risk activities, in part because it has not formally assessed which activities pose the highest risk. Without assessing the risk of activities it oversees and targeting its resources appropriately, the program cannot ensure it is balancing its resources against their impact. Moreover, the program does not have strategic planning documents, such as a joint strategic plan and workforce plan, to guide its oversight. Although it began taking steps to develop a joint strategic plan, the program is not developing workforce plans as part of this effort. Developing a joint workforce plan that assesses workforce and training needs for the program as a whole would help the program leverage resources to ensure all workforce and training needs are met. Selected countries and regulatory sectors GAO reviewed employ other approaches to promote effective oversight. For example, in Great Britain, an independent government agency focused on health and safety oversees laboratories that work with pathogens. In addition, in both Great Britain and Canada, regulators (1) focus their oversight on biological safety, because safety incidents provided the impetus for laboratory oversight in these countries and (2) regulate all potentially hazardous pathogens and activities in laboratories. What GAO Recommends GAO's recommendations in GAO-18-145 included that the Federal Select Agent Program (1) assess risks posed by its current structure and address risks as needed; (2) assess the risk of activities it oversees and target reviews to the highest-risk activities; and (3) develop a joint workforce plan. HHS and USDA agreed with GAO's recommendations and outlined actions they are taking, or plan to take, to address them, which GAO will continue to monitor.
Background Human trafficking exploits individuals and often involves transnational criminal organizations, violations of labor and immigration codes, and government corruption. Many forms of trafficking—including sex trafficking and labor trafficking—can take place anywhere in the world and occur without crossing country boundaries. As discussed in State’s annual Trafficking in Persons Report, trafficking victims include, for example, Asian and African women and men who migrate to the Persian Gulf region for domestic labor but then suffer both labor trafficking and sexual abuse in the homes of their employers. Some victims are children. For example, Pakistani children as young as 5 years are sold or kidnapped into forced labor to work in brick kilns, some of which are owned by government officials. Other victims are subjected to sexual exploitation. In some cases, women and girls have been bought and sold as sex slaves by members of the Islamic State. In other cases, adult men and women have been forced to engage in commercial sex, and children induced to do the same. Individuals, including men, are exploited in forced labor in a variety of industries. Burmese men, for example, have been forced to labor 20 hours a day, 7 days a week on fishing boats in Thailand. See figure 1 for examples of victims of trafficking in persons. Among other U.S. agencies involved in counter-trafficking in persons, State, DOL, USAID, DOD, and Treasury have various roles and responsibilities related to international counter-trafficking in persons, including some internationally-focused programs and activities that do not involve awards made to implementing partners, as follows: State. State leads the global engagement of the United States, and supports the coordination of efforts across the U.S government in counter-trafficking in persons. State’s Office to Monitor and Combat Trafficking in Persons (TIP Office), established pursuant to the Trafficking Victims Protection Act of 2000, is responsible for bilateral and multilateral diplomacy, targeted foreign assistance, and public engagement on trafficking in persons. The office also prepares and issues an annual Trafficking in Persons Report that assesses the counter-trafficking efforts of governments and assigns them tier rankings. Furthermore, the TIP Office develops annual regional programming strategies, awards projects to implementing partners and oversees the project award process, and provides technical assistance to implementing partners. Other parts of State, including regional bureaus that cover geographic regions and functional bureaus that cover global issues such as human rights, are also responsible for work related to combating trafficking in persons. DOL. Within DOL, the Bureau of International Labor Affairs’ (ILAB) Office of Child Labor, Forced Labor, and Human Trafficking (OCFT) conducts research, publishes reports, and administers projects awarded to implementing partners on international child labor, forced labor, and trafficking in persons. ILAB’s reports include the annual Findings on the Worst Forms of Child Labor report, which assesses the efforts of approximately 140 countries and territories to eliminate the worst forms of child labor in the areas of laws and regulations, institutional mechanisms for coordinating and enforcement, and government policies and programs. 
ILAB also reports on the List of Goods Produced by Child Labor or Forced Labor, which shows goods and their source countries that ILAB has reason to believe are produced by child labor or forced labor in violation of international standards.

USAID. USAID administers projects awarded to implementing partners that address counter-trafficking in persons, including increased investments in conflict and crisis areas, and integrating such projects into broader development projects. USAID field missions manage the majority of these counter-trafficking activities through projects that address trafficking challenges specific to the field mission's region or country. USAID's Center of Excellence on Democracy, Human Rights and Governance (DRG Center) in Washington, D.C., is responsible for oversight of USAID's counter-trafficking policy. The DRG Center coordinates and reports on USAID-wide counter-trafficking in persons efforts; oversees the implementation of USAID's counter-trafficking in persons policy in collaboration with regional bureaus and country missions; works with regional bureaus and country missions to gather counter-trafficking best practices and lessons learned; provides technical assistance and training to field and Washington-based staff on designing, managing, and monitoring and evaluating trafficking in persons projects; and conducts and manages research and learning activities related to combating trafficking in persons to collect data to inform the design of field projects.

DOD. DOD's Combating Trafficking in Persons Program Management Office, under the Under Secretary of Defense for Personnel and Readiness in the Defense Human Resources Activity, develops trafficking awareness and training material for all DOD components. On December 16, 2002, the President signed National Security Presidential Directive 22, which declared the United States had a zero tolerance policy for trafficking in persons. The Combating Trafficking in Persons Program Management Office is responsible for overseeing, developing, and providing the tools necessary for implementing National Security Presidential Directive 22 within DOD. The office has developed several different training programs, designed to provide an overview of trafficking in persons (including signs of trafficking, key policies and procedures, and reporting procedures), as well as awareness materials for distribution to DOD components and defense contractors overseas.

Treasury. Treasury has activities, but not specific programs, that may support wider U.S. efforts to address counter-trafficking in persons, according to Treasury officials. Pursuant to its mandate, components of Treasury's Office of Terrorism and Financial Intelligence (TFI), including the Financial Crimes Enforcement Network (FinCEN), the Office of Terrorist Financing and Financial Crimes (TFFC), and the Office of Foreign Assets Control (OFAC), work on addressing illicit finance activities that support the wider goal of combating global trafficking in persons.

Pursuant to the Trafficking Victims Protection Act of 2000, the President established the President's Interagency Task Force to Monitor and Combat Trafficking in Persons (PITF), which is a cabinet-level entity that consists of agencies across the federal government responsible for coordinating implementation of the Trafficking Victims Protection Act of 2000, among other activities. It is chaired by the Secretary of State; State, DOL, USAID, DOD, and Treasury are all PITF agencies.
In addition, the Trafficking Victims Protection Act, as amended in 2003, established the Senior Policy Operating Group, which consists of senior officials designated as representatives of the PITF agencies.

During Fiscal Year 2017, State, DOL, and USAID Managed 120 Counter-Trafficking in Persons Projects

State, DOL, and USAID managed 120 projects in counter-trafficking in persons carried out by implementing partners during fiscal year 2017, according to information provided by officials with these agencies. These projects, as identified by agency officials, ranged from those focused on counter-trafficking in persons to those in which counter-trafficking in persons was integrated into, but was not the primary goal of, the project. At these agencies, project officers work with the implementing partner on the administration and technical guidance of the project, such as reviewing progress reports. Table 1 shows a summary of these agencies' project information; appendix II provides more detailed information on all 120 projects.

During fiscal year 2017, State managed 79 counter-trafficking projects, from those focused on individual countries to regional and global ones that covered several countries, with a total award amount of approximately $62 million, according to information provided by State officials. State TIP Office managed 75 projects with a total awarded amount of around $57 million. Award amounts per project ranged from approximately $150,000 to $2.55 million. For example, State TIP Office had 11 global projects totaling about $10 million and 6 regional projects in Africa amounting to about $4 million. State TIP Office had two projects in Ghana that received the highest amount of awards, approximately $2.5 million for each project. State TIP Office had four projects in India amounting to around $3 million, and four in Thailand totaling around $2.35 million. In addition to State TIP Office's projects, State's Bureau of Democracy, Human Rights, and Labor (DRL) managed four counter-trafficking projects with a reported total award amount of about $5 million, with two projects in Mauritania making up around 70 percent of DRL's total awarded amount.

DOL's ILAB/OCFT managed six projects in fiscal year 2017 with a total award amount of approximately $31 million, according to DOL officials. These projects ranged from one scheduled to last for 5 years with an awarded amount of about $1 million, to one scheduled to last for about 4 years with an awarded amount of about $14 million. Three of DOL's projects were global projects, while two others focused on two countries each and one project focused on one country.

USAID's projects during fiscal year 2017 consisted of 2 regional projects in Asia, and 33 individual projects in 22 different countries. Some of these USAID-identified projects were integrated projects with a broader development focus that includes USAID programmatic objectives other than counter-trafficking in persons. According to information provided by USAID officials, the award amount for all counter-trafficking in persons projects active in fiscal year 2017, including all integrated projects and standalone projects with a sole focus on combating trafficking in persons, totaled around $296 million; and USAID's committed funding to these projects' activities related to counter-trafficking in persons was about $79 million as of September 2018.
During fiscal year 2017, USAID focused on a few countries where the agency awarded multiple counter-trafficking projects, such as four projects in Nepal and four projects in Burma. According to officials, State, DOL, and USAID generally design projects to align with the “3Ps approach”—prevention, protection, and prosecution— and to consider trends and recommendations identified in agency reports on foreign governments’ counter-trafficking efforts. According to State’s publicly available information, the “3Ps” approach serves as the fundamental counter-trafficking in persons framework used around the world, and the U.S. government follows this approach to 1. prevent trafficking in persons through public awareness, outreach, education, and advocacy campaigns; 2. protect and assist victims by providing shelters as well as health, psychological, legal, and vocational services; and 3. investigate and prosecute trafficking in persons crimes by providing training and technical assistance for law enforcement officials, such as police, prosecutors, and judges. State’s publicly available information on the 3Ps noted that prevention, protection, and prosecution efforts are closely intertwined. Prosecution, for example, can function as a deterrent, potentially preventing the occurrence of human trafficking. Likewise, protection can empower those who have been exploited so that they are not victimized again once they re-enter society. A victim-centered prosecution that enables a survivor to participate in the prosecution is integral to protection efforts. In addition to the “3Ps,” a “4th P”—for partnership—serves as a complementary means to achieve progress across the “3Ps” and enlist all segments of society in the fight against human trafficking, according to State’s publicly available information. Addressing the partnerships element, USAID’s counter-trafficking policy seeks to increase coordination across a broad range of national, regional, and global stakeholders from civil society, government, the private sector, labor unions, media, and faith-based organizations. DOL and USAID Fully Documented Their Monitoring Activities for All Selected Projects, but State Did Not Fully Document Its Activities for 16 of 37 Selected Projects State, DOL, and USAID Use Similar Tools to Monitor Performance of Their Counter-Trafficking in Persons Projects Monitoring is the collecting of data to determine whether a project is being implemented as intended and the tracking of progress through preselected performance indicators during the life of a project. State, DOL, and USAID use a number of similar tools—according to their current policies, guidance, and agency officials—to monitor the performance of their counter-trafficking in persons projects, including monitoring plans, indicators and targets, periodic progress reports, and final progress reports. The agencies also conduct site visits, but their policies vary on whether site visits are required for every project during implementation. Monitoring plan. The monitoring plan—according to monitoring policies of the three agencies—documents, among other things, all of the indicators and targets for the project as well as data collection frequency for each indicator. 
In addition, according to State TIP Office officials, the monitoring plan's indicators and targets for TIP Office-managed counter-trafficking in persons projects are to be organized in a logic model, which is a visual representation that shows the linkages among the project's goals, objectives, activities, outputs, and outcomes (see table 2). The logic model is intended to show relationships between what the project will do and what changes it expects to achieve.

Indicators and Targets. Performance indicators—according to monitoring policies of the three agencies—are used to monitor progress and measure actual results compared to expected results. Targets are to be set for each performance indicator to indicate the expected results over the course of each period of performance. According to agency officials, the monitoring plan documents indicators and targets to be tracked and reported on through periodic progress reports to assess whether the project is likely to achieve the desired results. GAO has also found that a key attribute of effective performance measures is having a measurable target.

Periodic progress reports. The reporting templates for the three agencies show that periodic progress reports—which are submitted at established intervals during the project's implementation—compare actual to planned performance and indicate the progress made in accomplishing the goals and objectives of the project, including reporting on progress toward the monitoring plan's indicator targets.

Final progress report. The final progress report—according to monitoring policies of the agencies or agency officials—is a stand-alone report that provides a summary of the progress and achievements made during the life of the project.

Site Visits. The three agencies' policies vary on whether site visits are required for every project during implementation. For example, State's policy notes that site visits may be conducted to review and evaluate recipient records, accomplishments, organizational procedures, and financial control systems, as well as to conduct interviews and provide technical assistance as necessary. In 2015, the State TIP Office established a goal to conduct at least one site visit during the lifetime of every project. While site visits during a project's implementation are not required under DOL's policy, DOL officials explained that they use site visits when deemed necessary to supplement information from other forms of oversight. USAID's policy requires that a site visit be conducted for every project during implementation to provide activity oversight, inspect implementation progress and deliverables, verify monitoring data, and learn from activity implementation.

In addition to these monitoring tools, State, USAID, and DOL officials told us that they rely on frequent communication with implementing partners as part of their monitoring process. Overall, monitoring is intended to help agencies determine whether the project is meeting its goals, update and adjust interventions and activities as needed, and ensure that funds are used responsibly.
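To make the comparison of actual to planned performance concrete, the minimal sketch below shows one way an indicator record from a monitoring plan could be represented and checked in code. It is illustrative only and is not drawn from any agency system; the target, reporting periods, and values are hypothetical, and the indicator name is borrowed from an example discussed later in this statement.

    from dataclasses import dataclass, field

    @dataclass
    class Indicator:
        """One performance indicator from a hypothetical monitoring plan."""
        name: str
        target: float                 # expected result for the period of performance (hypothetical)
        collection_frequency: str     # e.g., "quarterly"
        actuals: dict = field(default_factory=dict)  # reporting period -> reported value

        def cumulative(self) -> float:
            """Sum the results reported so far across periods."""
            return sum(self.actuals.values())

        def on_track(self) -> bool:
            """Compare actual to planned performance, as periodic progress reports do."""
            return self.cumulative() >= self.target

    # Hypothetical values; the indicator name is drawn from a project discussed below.
    trained = Indicator(
        name="number of criminal justice practitioners trained",
        target=200,
        collection_frequency="quarterly",
        actuals={"FY2017 Q1": 45, "FY2017 Q2": 60},
    )
    print(trained.cumulative(), trained.on_track())  # 105 False

A real monitoring plan would also capture data sources, baselines, and the logic-model linkages described above; the point of the sketch is simply that a target recorded alongside each indicator is what makes the actual-versus-planned comparison possible.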
DOL and USAID Fully Documented Their Monitoring Activities for Selected Projects, while State Did Not

We found, based on our review of 54 selected counter-trafficking in persons projects (37 State, 3 DOL, and 14 USAID), that DOL and USAID had fully documented their performance monitoring activities, while State did not fully document its activities for 16 of the 37 (43 percent) projects we reviewed with project start dates in fiscal years 2011 through 2016. DOL's documented monitoring activities included the monitoring plan for each project as well as fiscal year 2017 semi-annual progress reports, including indicators and targets. USAID's documented monitoring activities included the monitoring plan for each project; fiscal year 2017 progress reports at the reporting frequency specified in the agreements for each project; the final progress report, including indicators and targets, for the three projects that ended as of December 2017; and evidence that at least one site visit was conducted during each project's implementation. Overall, the three agencies reported having conducted at least one site visit during the lifetime of the project for 47 of the 54 (87 percent) selected projects.

As shown in table 3, State did not fully document its monitoring activities (monitoring plan; fiscal year 2017 quarterly progress reports; and final progress report, including indicators and targets, for projects that ended as of December 2017) for 16 of the 37 selected projects we reviewed. Specifically, State did not have nine monitoring plans, five complete progress reports, or targets for each indicator in six of seven final progress reports for projects that ended as of December 2017. (See appendix III for detailed information on each of the 37 projects.)

For the nine projects for which the monitoring plan was not documented, the State TIP Office indicated that it was unable to locate these documents or they were not completed because the projects were finalized when the TIP Office was beginning to institute the monitoring plan requirement. Although TIP Office officials told us that the TIP Office piloted and began to phase in the monitoring plan requirement over the course of 2014 and early 2015, eight of the nine projects without monitoring plans started in September or October 2015. We found that each of the nine projects had a logic model used to report progress in the fiscal year 2017 quarterly progress reports we reviewed, which would have provided TIP Office officials a basis for monitoring project performance at that point. However, federal standards for internal control call for agency management to design monitoring activities so that all transactions are completely and accurately recorded and so that management can evaluate project results. Specifically, internal controls specify that monitoring should be ongoing throughout the life of the project, which is consistent with State's current policy that generally requires completion of the monitoring plan prior to award. Without timely documentation of the monitoring plans at the start of the project, TIP Office officials may not be able to ensure that projects are achieving their goals, as intended, from the beginning of project operations.
For the three projects for which the quarterly progress report for the first quarter of fiscal year 2017 had been partially completed, the State TIP Office indicated that the implementing partners began to use the TIP Office's quarterly reporting template for subsequent reports after TIP Office officials instructed them to do so. For the one project for which the quarterly progress report was not completed for the third quarter of fiscal year 2017 and was only partially completed for the fourth quarter of fiscal year 2017, the project officer provided possible reasons why the documents were not in the project's file, including that the implementing partner lacked the capacity to design a logic model. The project ended December 31, 2017. Federal standards for internal control call for agency management to design monitoring activities, such as performance reporting, so that all transactions are completely and accurately recorded, and project results can be continuously evaluated. As previously discussed, performance progress reports should compare actual to planned performance and indicate the progress made in accomplishing the goals and objectives of the project. Therefore, the TIP Office may lack information needed to assess project performance if it does not have access to complete monitoring documentation.

For the six projects for which targets were not fully documented in the final progress reports, we found that targets were lacking for 110 of the 253 indicators (43 percent) across the six final progress reports. Our prior work on performance measurement identified 10 key attributes of performance measures—such as having a measurable target—that contribute to successfully measuring a project's performance. For example, our prior work has shown that numerical targets or other measurable values facilitate future assessments of whether overall goals and objectives are achieved because comparisons can be easily made between projected performance and actual results. State TIP Office officials explained that the final progress reports we reviewed lacked targets because the TIP Office had not required targets for each indicator for the projects we reviewed that started in fiscal years 2011 through 2016. State TIP Office officials also said that project officers may not have set targets due to limited resources in previous years. A lack of actual targets limits the TIP Office's ability to assess project performance, including effectiveness, and determine if implementation is on track or if any timely corrections or adjustments may be needed to improve project efficiency or effectiveness.

According to State TIP Office officials, the TIP Office has taken steps to improve its documentation of monitoring activities, such as instituting a monitoring plan requirement; increasing staff, including hiring a monitoring and evaluation specialist; and developing standard templates for implementing partners to use for reporting. Moreover, in November 2017, State established a new policy stating that, building on the logic model or project charter, bureaus and independent offices must set targets for each performance indicator to indicate the expected change over the course of each period of performance. It further notes that bureaus and independent offices should maintain documentation of project design, including the logic model.
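Because the November 2017 policy requires a target for each performance indicator, one low-cost control would be an automated completeness check run over however the indicator records are stored before a progress or final report is accepted. The sketch below is illustrative only; the record format, field names, and placeholder strings are assumptions rather than a description of any State or USAID system.

    # Illustrative completeness check: flag indicators whose target is missing or a
    # placeholder. The record layout and placeholder strings are assumptions.
    PLACEHOLDERS = {"", "tbd", "to be determined", "n/a"}

    def indicators_without_targets(indicator_records):
        """Return the names of indicators that lack a measurable target."""
        flagged = []
        for record in indicator_records:
            target = record.get("target")
            if target is None:
                flagged.append(record["name"])
            elif isinstance(target, str) and target.strip().lower() in PLACEHOLDERS:
                flagged.append(record["name"])
        return flagged

    # Hypothetical example: two of three indicators would be flagged for follow-up.
    sample = [
        {"name": "indicator A", "target": 25},
        {"name": "indicator B", "target": "to be determined"},
        {"name": "indicator C"},
    ]
    print(indicators_without_targets(sample))  # ['indicator B', 'indicator C']

A check of this kind would not set the targets themselves, but it would surface missing or placeholder entries, such as the "to be determined" values discussed below, before they reach final progress reports.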
Additionally, State TIP Office officials said that State is developing a department-wide automated information management system (State Assistance Management System - Domestic, or SAMS-D) that officials expect to standardize entry of performance information; under the new system, targets must be recorded for each indicator. State TIP Office officials have worked to pilot-test SAMS-D to provide feedback on the system, including suggestions to improve the completeness of data collection. Despite these efforts, the TIP Office's documentation of all monitoring activities, and implementation of its November 2017 requirement to set targets for all performance indicators, is uncertain. For example, even though the TIP Office informed us that it began to institute a monitoring plan requirement over the course of 2014 and early 2015, as previously noted, eight projects we reviewed that started in September or October 2015 did not have monitoring plans. In addition, according to State officials, in SAMS-D, targets could be recorded as "to be determined" and there are no controls in place to ensure that "to be determined" entries are replaced with actual targets. State officials said that SAMS-D has the capability to implement controls to alert users to update "to be determined" targets, but pilot users of SAMS-D, which include the TIP Office, have not provided feedback on this capability so far. Furthermore, State TIP Office officials informed us that the TIP Office cannot require all implementing partners to set targets, but that the TIP Office aspires to update relevant targets regularly in the future and would encourage implementing partners to update target values when appropriate. Without controls to ensure full documentation of monitoring activities and established performance targets, State is limited in its ability to assess project performance, including project efficiency or effectiveness.

State and USAID Do Not Have Sufficient Controls to Ensure the Reliability of Project Information, while DOL Had Consistent and Complete Performance Information in the Project We Reviewed

In our review of selected indicators in two State TIP Office and two USAID projects, we found that State and USAID used inconsistent and incomplete performance information to monitor these projects. We found that State TIP Office and USAID do not have sufficient controls in place to ensure that the performance information they use is reliable. In contrast, we found that DOL had consistent and complete performance information in a project we reviewed, and we identified no controls in DOL's process that were insufficient for assuring the reliability of this information.

State and USAID Projects We Reviewed Showed Inconsistent and Incomplete Performance Information

For selected indicators in two State TIP Office and two USAID projects, we found numerous errors or omissions in progress reports we reviewed, which resulted in inconsistent and incomplete performance information agencies used to monitor these projects. Specifically, we found examples of inconsistent information, which included many instances in which quarterly indicator totals differed from annual or cumulative totals reported separately on the same projects, and numbers reported in narrative information that differed from numbers reported as indicator values. In addition, we found examples of incomplete information, including narrative elements that were missing in whole or in part.

Inconsistent Performance Information.
We found numerous instances in which quarterly totals differed from annual or cumulative totals reported separately on the same projects. When these errors occurred, it was not possible to independently determine project performance based on report information. For example:
For one State TIP Office project, reported cumulative progress overstated quarterly progress for at least 11 indicators (3 of which by 25 percent or more) and understated quarterly progress for at least 5 indicators (once by 25 percent or more). For example, for the indicator "number of standardized reintegration protocols/guidelines/tools developed (case forms, family assessment, etc.)," State's cumulative performance report as of the 4th quarter of fiscal year 2017 indicated that two tools had been developed, whereas quarterly reports showed that only one had been developed.
For one USAID project, the indicator "number of assisted communes allocating and accessing funds for trafficking in persons prevention activities" showed that annual results were 60, while quarterly report data combined showed that the number was 6, which USAID officials confirmed was the correct figure.
For another USAID project, the indicator "number of food security private enterprises (for profit), producers organizations, water users associations, women's groups, trade and business associations, and community-based organizations receiving U.S. government assistance" showed an annual result of one, while quarterly totals combined showed a total of three, which USAID officials confirmed was the correct figure.
For the projects we reviewed, implementing partners produced narrative descriptions of progress made to accompany indicator results. We found cases in which numbers reported in narrative information were not consistent with numbers reported as indicator values. For example, for the State TIP Office indicator "number of criminal justice practitioners trained" for one project, indicator results for two quarters differed from results presented in the corresponding narrative during fiscal years 2016 to 2017. State officials found that the narrative information was correct for one of these inconsistencies and the indicator result was correct for the other. In addition, for one USAID indicator—number of public awareness tools on trafficking in persons developed and disseminated—the narrative report for one quarter described distributions that added up to 21,765 products, while the reported quantitative indicator total was 21,482. USAID officials confirmed that 21,765 was the correct figure.
Incomplete Performance Information.
Additionally, some quarterly reports had narrative elements that were incomplete in whole or in part, which made independent interpretation of project performance difficult or impossible. For example:
The implementing partner in one State TIP Office project copied and pasted significant portions of narrative information in quarterly reports for 2 years and, according to State TIP Office officials, did not fulfill a request by State TIP Office to include only current quarterly information in formal quarterly reports because it was focused on other activities. For nearly the entire period, the implementing partner indicated that it was "following up" with government entities in three countries to set up counter-trafficking in persons training for government officials, but no indication was made in formal quarterly reports about the results of any of these follow-up activities.
For one State TIP Office project, the indicator "number of children receiving care, whose cases are reported to the police" had no narrative information or incomplete narrative information provided for three of the four quarters in which activity occurred during our period of review (comprising almost 90 percent of reported performance under this indicator).
For a USAID project, the implementing partner reported a combined performance number of approximately 200 from the first through third quarters of fiscal year 2017 for the indicator "number of members of producer organizations and community based organizations receiving U.S. government assistance." However, annual performance for fiscal year 2017 was reported as nearly 1,700 organizations. USAID officials explained that this difference was the result of the implementing partner's misinterpretation of the indicator's definition when producing the quarterly reports, but the annual report narrative did not explain this correction.
Additionally, for USAID's indicator "number of public awareness tools on trafficking in persons developed and disseminated," no narrative information in the quarterly or annual reports explained how the last quarter of fiscal year 2016 performance approximately doubled from that of the previous quarter. Narrative information in the annual report described performance for the year only in general terms and did not clarify this significant change.
In addition to direct project oversight, State TIP Office and USAID officials stated that performance information from progress reports that the agencies use to monitor counter-trafficking in persons projects is regularly used for internal and external reporting, program decisions, and lessons learned. For example, according to officials, this information is used by senior agency officials to inform their decision-making, in reports such as the Attorney General's Annual Report to Congress and Assessment of U.S. Government Activities to Combat Trafficking in Persons, and to fulfill other requests from Congress.
Neither State TIP Office nor USAID Has Sufficient Controls to Ensure the Reliability of Performance Information
Neither State TIP Office nor USAID has sufficient controls to ensure consistent and complete performance information, and both face challenges to data reliability stemming from information reported in non-standard formats, implementing partners with limited capacities to report performance information, and the time-consuming nature of reviewing reported information. Federal internal control standards state that management should obtain data from reliable internal and external sources. According to these standards, reliable internal and external sources should provide data that are reasonably free from error and bias and faithfully represent what they purport to represent; and management should evaluate both internal and external sources of data for reliability. Without implementing additional controls to ensure that performance information is consistent and complete, State and USAID officials may not fully or accurately understand what projects are, or are not, achieving and, therefore, how their efforts could be altered as needed. Further, reports that are prepared or program decisions that are made using the TIP Office monitoring reports could be based on inconsistent or incomplete information that does not accurately present project results.
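One example of the kind of additional control that could catch such discrepancies is an automated reconciliation of quarterly indicator results against separately reported annual or cumulative totals. The following sketch is illustrative only—it is not an agency system—and the indicator names and values are hypothetical.

```python
# Illustrative sketch: reconcile quarterly indicator results against a
# separately reported total and flag any mismatch for follow-up.
# Indicator names and values are hypothetical.

def reconcile(indicator, quarterly_results, reported_total):
    """Return a discrepancy message if quarterly results do not sum to the
    separately reported total; otherwise return None."""
    computed = sum(quarterly_results)
    if computed != reported_total:
        return (f"{indicator}: quarterly results sum to {computed}, "
                f"but the reported total is {reported_total}")
    return None

# Hypothetical progress-report data for one project year.
report = {
    "communes accessing prevention funds": ([1, 2, 3, 0], 60),
    "awareness products disseminated": ([5000, 6000, 5000, 5482], 21482),
}

for name, (quarters, total) in report.items():
    issue = reconcile(name, quarters, total)
    if issue:
        print("FLAG:", issue)
```

In this sketch, the first indicator would be flagged because the quarterly results sum to 6 rather than the reported 60, which mirrors the type of error described above.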
State Lacks Adequate Controls to Ensure the Reliability of Performance Information State TIP Office currently receives performance information using documents submitted by implementing partners, although this information is not compiled into a single data system and is not in a standardized format. While State provides suggested templates for reporting information, officials said that they cannot require implementing organizations to use these templates and we found that implementing partners provided information in varying formats. According to State TIP Office officials, project officers perform manual reviews of quantitative information in monitoring reports but have insufficient time to carry out detailed reviews of data reliability for all indicators. State TIP Office project officers also stated that the process of comparing narrative information to indicator information was time consuming and difficult. According to these officials, the quality of the information in progress reports also depends on the priorities and resources—which can be limited—of the implementing partner. In addition to reviewing progress reports, State project officers we spoke to said that they rely on site visits and frequent, less formal communication as part of their oversight process. Project officers for the State TIP Office projects we reviewed stated that they did not always examine performance trends over time or review consistency in reported cumulative totals—which should be the sums of the previous and current quarters’ reported results—with quarterly totals, for reasons including the difficulty in assembling quarterly information in this manner and resource limitations. State TIP Office officials noted that they are aware of data quality problems in counter-trafficking in persons monitoring reports. State is developing SAMS-D, a system that officials expect to standardize entry of information from common performance indicators and logic models, according to State officials. These officials stated that if SAMS-D is deployed, State TIP Office could find it easier to analyze and revise logic models that implementing partners submit, as well as examine performance indicator results over time, since standardized data would be available in a centralized location. According to State officials, SAMS-D could be programmed with automatic checks or alerts under conditions defined by the TIP Office and the database programmer. For example, the system could require that fields be filled out in particular formats or provide an alert if performance under a certain indicator has significantly deviated from prior quarters or the indicator’s target. State TIP Office officials said they were uncertain whether SAMS-D would become operational in 2019, as currently planned. According to officials, State TIP Office has participated in planning and pilot activities for SAMS- D, including testing monitoring tools with implementing partners. According to these officials, additional work is needed to develop rules and controls necessary to operationalize SAMS-D to meet the TIP Office’s particular needs and ensure improved data. Another challenge to implementation of SAMS-D, according to these officials, is that some implementing partners are unable to maintain consistent internet connections necessary to upload information, impeding full roll-out of the system, and an alternative upload mechanism does not yet exist. 
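The kinds of automated checks described above—flagging placeholder targets and alerting when results deviate sharply from a target or a prior quarter—could take a form similar to the following sketch. It is illustrative only and does not represent SAMS-D; the threshold, indicator names, and values are hypothetical.

```python
# Illustrative sketch of automated data checks: flag placeholder targets and
# large deviations from the prior quarter or the indicator's target.
# Threshold, indicator names, and values are hypothetical.

DEVIATION_THRESHOLD = 0.25  # flag changes of 25 percent or more

def check_indicator(name, target, results_by_quarter):
    alerts = []
    if target in (None, "", "TBD", "to be determined"):
        alerts.append(f"{name}: target is still a placeholder")
        return alerts
    for i in range(1, len(results_by_quarter)):
        prev, curr = results_by_quarter[i - 1], results_by_quarter[i]
        if prev and abs(curr - prev) / prev >= DEVIATION_THRESHOLD:
            alerts.append(f"{name}: Q{i + 1} result {curr} deviates sharply "
                          f"from Q{i} result {prev}")
    final = results_by_quarter[-1]
    if abs(final - target) / target >= DEVIATION_THRESHOLD:
        alerts.append(f"{name}: latest result {final} is far from target {target}")
    return alerts

# Hypothetical indicators.
indicators = [
    ("practitioners trained", "to be determined", [40, 45, 50, 55]),
    ("reintegration tools developed", 4, [1, 1, 1, 1]),
]
for name, target, results in indicators:
    for alert in check_indicator(name, target, results):
        print("ALERT:", alert)
```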
USAID Lacks Adequate Controls to Ensure the Reliability of Performance Information According to USAID officials, overseas missions currently set many of their own policies and procedures for data quality oversight. For the two projects we reviewed, USAID relied on implementing partners to manage information, while it reviewed this information in addition to conducting site visits and communicating with implementing partners on a regular basis to monitor the projects. USAID officials attributed errors in the project reports we reviewed to factors including implementing partners’ errors in manual computation and misunderstandings of indicator definitions. According to USAID officials, data quality errors due to factors such as transcription errors can also occur in the performance information USAID uses to monitor counter-trafficking in persons projects. USAID project officers for the projects we reviewed said that they regularly conducted manual analysis of information received from implementing partners, but USAID and implementing partners are often pressed for time during the quarterly reporting cycle. According to these project officers, some of the errors GAO found had already been identified by USAID implementing partners during their annual review process and corrected in the annual reports we reviewed. For example, for the USAID indicator “value of new private sector investments in select value chains,” quarterly totals overstated corrected annual results by more than $120,000—approximately $170,000 instead of approximately $50,000. USAID officials said that they and the implementing partner had identified that the implementing partner was incorrectly including additional, unrelated data when producing its quarterly totals and while the annual total had been corrected to approximately $50,000, the annual report did not indicate that this error had occurred in the quarterly reports. USAID officials noted that the quality of the information in the progress reports also depends on the experience and capacity—which can be limited—of the implementing partner. According to USAID officials, USAID is currently building the Development Information Solution (DIS), an agency-wide information system that would provide USAID’s operating units (such as headquarters bureaus or field missions) with a tool to better collect, track, and analyze information to improve how they manage their projects and overall strategies. Implementing partners would be able to access the DIS via a portal where they would directly enter project information and upload reports and supporting information, according to this official. In addition, this information would better inform USAID’s decision-making at the operating unit level and agency level. A USAID official explained that USAID developed DIS partly as a result of USAID senior management’s concern about the lack of one corporate system to collect data in a timely fashion and improve efficiency. A USAID official responsible for managing DIS informed us that the business case for DIS was approved in fiscal year 2016. Developers have regularly solicited input from across the agency, according to this official, and a pilot with six missions is expected to begin in November 2018. 
This official explained that USAID plans to have DIS operational by the end of 2019, but DIS’s timeframe has been accelerated by a year, to 2019 from 2020, which may create programming and budget challenges, and unexpected challenges may also arise during the pilot process as mission needs for DIS are more fully assessed. USAID is currently developing training, deployment, and communications plans to prepare the agency for implementing DIS, according to officials. DOL Had Consistent and Complete Performance Information for the Selected Project and We Identified No Controls Insufficient to Ensure the Reliability of Performance Information We reviewed selected indicators and targets information in one DOL project and identified no significant consistency or completeness issues beyond early project stages. For example, for the indicator “number of countries that ratify the International Labor Organization Protocol on Forced Labor,” the October 2016 report contained no reported value for this indicator, while the subsequent report (April 2017) updated this figure to indicate a value of “4” for October 2016. DOL officials explained that a data reporting form had not yet been developed as of October 2016, but indicator performance was discussed in the October 2016 narrative and added to the data reporting form when it was developed. While DOL does not require that a project progress report discuss every indicator associated with an activity in the performance report narrative, according to officials, we found that explanations were present for every significant performance-related event that we identified for the fiscal year 2016 and fiscal year 2017 period. We did not identify any controls in DOL’s process that were insufficient to ensure the reliability of performance monitoring information. DOL officials said that they use a system of spreadsheets with automated calculations and validation checks that are intended to standardize information submission and assure consistency and completeness of submitted information. These officials said that the project’s Comprehensive Monitoring and Evaluation Plan defines rules for how information for indicators is to be collected and how indicators are to be computed from this information. According to these officials, DOL develops a customized indicator reporting form for each project in conjunction with implementing partners, which implementing partners complete as part of their regular reporting requirements. According to these officials, these spreadsheets contain formula checks to mitigate the risk of implementing partners making undisclosed changes to indicator results and array information in a standardized manner across reporting periods. Officials also commented that for internal reporting purposes, such as the Government Performance and Results Act, project officers can extract information from indicator templates in a manner that is not overly burdensome. According to officials, DOL is developing an enhancement to existing tools, expected in late 2019, which will provide a traceable way to send and receive reports from grant recipients; timestamps when reports are sent, received, and accepted; and tracking of performance monitoring communications between DOL and implementing partners. They plan to continue to use a spreadsheet-based system for tracking indicator information. 
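A check of the kind DOL describes—surfacing undisclosed changes to previously reported results—might resemble the following sketch, which compares a new submission against the previously accepted baseline. It is illustrative only and is not DOL's actual tool; the indicator names and values are hypothetical.

```python
# Illustrative sketch: compare a newly submitted reporting form against the
# previously accepted baseline so that any silent revision to an earlier
# period is surfaced for review. Indicator names and values are hypothetical.

previously_accepted = {
    "labor inspectors trained": {"Oct 2016": 40, "Apr 2017": 65},
    "workers reached by awareness activities": {"Oct 2016": 1200, "Apr 2017": 1850},
}

new_submission = {
    "labor inspectors trained": {"Oct 2016": 40, "Apr 2017": 65, "Oct 2017": 90},
    "workers reached by awareness activities": {"Oct 2016": 1100, "Apr 2017": 1850, "Oct 2017": 2400},
}

for indicator, history in new_submission.items():
    baseline = previously_accepted.get(indicator, {})
    for period, accepted_value in baseline.items():
        if history.get(period) != accepted_value:
            print(f"REVIEW: {indicator} — {period} changed from "
                  f"{accepted_value} to {history.get(period)} without explanation")
```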
State Does Not Have a Process to Ensure that All Performance Indicators Are Useful, while USAID and DOL Have Established Processes to Regularly Review the Usefulness of Indicators
State TIP Office Does Not Have a Process to Review All Indicators to Ensure Their Usefulness
State TIP Office does not have a process to regularly review the number and content of indicators for counter-trafficking in persons projects to ensure that these indicators are useful and that collecting and reviewing information for them is not overly burdensome. State TIP Office officials acknowledged there are too many indicators for many counter-trafficking in persons projects. Project officers have the discretion to revise indicators if the scope of the project is not altered, according to State officials. In addition, according to these officials, changes that alter the project scope are possible with the consent of the implementing partner. However, State TIP Office project officers do not formally indicate which indicators they have determined are most useful and informed us that they have insufficient time and resources to do so as projects progress. One official who focuses on monitoring issues stated that, ideally, there should be three to five indicators per activity, and efforts have been made to reduce the number of indicators in some projects. For example, one of the State TIP Office projects we reviewed—which was designed prior to the hiring of this official—had more than 230 indicators across 20 activities as of the first quarter of fiscal year 2017, which had been reduced to about 150 by the fourth quarter of fiscal year 2017. Our review of two State TIP Office projects showed that indicators did not change in some situations even when the project officer considered the indicator to have become less relevant. State project officers explained that, instead of only relying on indicator information, they regularly spoke with implementing partners for an understanding of what performance level to expect. While acknowledging errors in the numerical information for some indicators, project officers for the two projects we reviewed said that they sometimes did not review all reported indicators in the quarterly progress reports because they consider some indicators to be less useful or unimportant, not needed for monitoring purposes, and burdensome to review in depth. These officials said that project officers focus on the indicators that they consider to be most important for project oversight or congressional requests. State TIP Office officials said that logic models, which include indicators, have improved significantly in recent years (including improvements to the suggested logic model template and the glossary of definitions), partly due to hiring additional monitoring staff, but that State has found the analysis of logic models to be difficult because of the absence of centralized and standardized information and a lack of staff capacity. In addition, project officers stated that they often rely on implementing partners for suggestions with regard to changing indicators. However, according to State officials, these implementing partners may be reluctant to bring up challenges they encounter out of concern that doing so may damage their relationship with State.
State’s Program Design and Performance Management Toolkit, rolled out in 2017, states that indicators can be costly to collect and manage and should therefore be “useful,” which includes having a clear utility for learning, tracking, informing decisions, or addressing ongoing program needs. This policy further states that indicators should also be “adequate,” which includes having only as many indicators in the overall monitoring plan as are necessary and feasible to track key progress and results, inform decisions, conduct internal learning, and meet any external communication or reporting requirements. Further, federal internal control standards state that management should establish and operate monitoring activities, and, after doing so, may determine how often it is necessary to change the design of the internal control system as conditions change to effectively address objectives. Without a process to ensure that the number and content of counter-trafficking in persons project indicators are reviewed and modified as needed, project monitoring may be less efficient and effective as implementing partners and State TIP Office staff spend time collecting and reviewing indicator information that is not useful for project monitoring and management.
DOL and USAID Have Established Processes to Regularly Review the Usefulness of Indicators
DOL and USAID had processes in place to regularly review indicators for the projects we selected. DOL officials told us that project officers work with subject-matter experts to review the relevance of indicators in each semi-annual reporting period. These officials also stated that grantees are required to review their monitoring and evaluation plan annually, which includes the project’s indicators, and to provide the most recent work plan with each semi-annual report. According to DOL officials, while not a DOL requirement, the project we reviewed incorporated a work plan for each component of the project defining when important activities were planned under each output indicator. We found that DOL and the implementing partner made regular changes to these project plans in response to changing conditions. These plans were consistently included in the monitoring documents and most elements were discussed in the associated narrative text. USAID conducts its project oversight primarily out of its overseas missions, according to USAID officials. USAID officials associated with the projects we reviewed said that they should review a project’s indicators annually, as well as when they determine a review is needed, such as when projects have changes in planned activities. USAID officials stated that this annual review process may be explicitly required in some agreements. According to these officials, missions or other operating units are required to manage and update reference sheets for indicators, which officials said are intended to define each indicator and the information to be collected to measure each indicator. Changes to these reference sheets are tracked, according to these officials. Projects we reviewed showed evidence of regular changes to indicators and associated targets. We spoke to project officers about several specific changes that we had identified. For many of these changes, the project officers provided information about their work with implementing partners to appropriately adjust program goals and expectations, such as adapting the project indicators and targets to unexpected or changing conditions.
Conclusions
Given the grave suffering of victims and damaging effects on society that trafficking in persons imposes, and the U.S. government’s reliance on implementing partners to carry out its counter-trafficking projects, performance monitoring is important to ensure that the United States funds projects that are effective and efficient and achieve their intended counter-trafficking goals. In fiscal year 2017, State, DOL, and USAID managed 120 counter-trafficking projects and monitored the performance of the projects. However, weaknesses in State’s and USAID’s monitoring processes limit their ability to collect reliable performance information and assess project performance. First, we found that the State TIP Office did not fully document its monitoring activities for many of the projects we reviewed that started between fiscal years 2011 and 2016. Monitoring the implementation of projects and fully documenting the results of such monitoring are key management controls to help ensure that project recipients use federal funds appropriately and effectively. The State TIP Office was also not setting targets for some project indicators, which may have limited the TIP Office’s ability to determine if implementation was on track or if corrections needed to be made. Furthermore, we found that the State TIP Office and USAID used project performance information reported by the implementing partners—used for internal and external reporting purposes—that was not always consistent or complete, and did not have sufficient controls to ensure the reliability of performance information. Finally, to ensure effective and efficient monitoring, projects need to establish a reasonable number of indicators and update them as needed. However, we found that the State TIP Office does not regularly evaluate and revise all of its indicators for counter-trafficking in persons projects, which can have large numbers of indicators. As a result, the State TIP Office may be using information to monitor project performance that is less useful and relevant for understanding project progress, and requires more resources and time for the implementing partners to produce and agency officials to review. State TIP Office officials noted that the TIP Office has taken steps to improve its monitoring process, and State and USAID officials explained that State and USAID are developing information management systems that may increase the quality and usefulness of the monitoring information they use. However, these systems are not fully designed or operational and their capabilities are not yet known. Thus, the potential of these systems to strengthen the ability of State and USAID to collect reliable performance information and assess their efforts to combat the serious problem of global trafficking in persons is unclear. State and USAID could benefit from making additional improvements to ensure their projects are being implemented as intended and achieving project goals to prevent trafficking in persons, protect victims, and prosecute trafficking crimes.
Recommendations for Executive Action
We are making a total of five recommendations, including four to State and one to USAID. Specifically: The Secretary of State should ensure that the Director of the TIP Office establishes targets for each performance indicator.
(Recommendation 1) The Secretary of State should ensure that the Director of the TIP Office maintains documentation of all required monitoring activities, including monitoring plans, progress reports, and performance targets. (Recommendation 2) The Secretary of State should ensure that the Director of the TIP Office establishes additional controls to improve the consistency and completeness of performance information that the TIP Office uses to monitor counter-trafficking in persons projects. (Recommendation 3) The Secretary of State should ensure that the Director of the TIP Office establishes a process to review and update performance indicators, with the participation of implementing partners, to ensure that project monitoring remains efficient and effective. (Recommendation 4) The Administrator of USAID should establish additional controls to improve the consistency and completeness of performance information that USAID uses to monitor counter-trafficking in persons projects. (Recommendation 5) Agency Comments and Our Evaluation We provided a draft of this report to State, DOL, USAID, DOD, and the Treasury for review and comments. In State’s and USAID’s letters, reproduced in appendixes IV and V, respectively, both agencies concurred with our recommendations and described their planned actions to address the recommendations. In addition, State’s letter indicated that our draft report did not fully recognize the investment State has made, and the changes underway, to improve the TIP Office’s performance measurement and ensure complete and consistent documentation. State cited additional dedicated financial and personnel resources for monitoring and evaluation added over the past two years. We acknowledge and report on these positive steps, including the hiring of a monitoring and evaluation specialist and other TIP Office staff, in our report. USAID’s letter included other comments that we have responded to in appendix V. Furthermore, State, DOL, USAID, and the Treasury provided technical comments, which we incorporated as appropriate. DOD had no comments. We are sending copies of this report to the appropriate congressional committees; the Secretaries of State, Labor, Defense, and Treasury; and the Administrator of USAID. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7141, or groverj@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology The National Defense Authorization Act for Fiscal Year 2017 includes a provision for GAO to report on the programs conducted by the Department of State (State), the Department of Labor (DOL), the United States Agency for International Development (USAID), the Department of Defense (DOD), and the Department of the Treasury (Treasury) that address human trafficking and modern slavery, including a detailed analysis of the effectiveness of such programs in limiting human trafficking and modern slavery. Three of these agencies—State, DOL, and USAID—have programs that design and award counter-trafficking projects to implementing partners, through contracts, grants, or cooperative agreements. These agencies then oversee and monitor these projects. 
Since DOD and Treasury officials did not identify these types of projects as part of their counter-trafficking in persons efforts, we provided background information on their efforts but did not cover these agencies in our reporting objectives. This report (1) identifies the recent projects in international counter-trafficking in persons that key U.S. agencies have awarded to implementing partners, and for selected projects, assesses the extent to which key agencies have (2) documented their monitoring activities, (3) ensured the reliability of the performance information they use in monitoring projects, and (4) reviewed the usefulness of the performance indicators they use in monitoring projects. To address these objectives, we reviewed relevant agency documents and interviewed agency officials. To report on agencies’ programs, we asked knowledgeable officials at State, DOL, USAID, DOD, and Treasury to identify their projects that (1) had an international focus; (2) were delivered by implementing partners to external recipients, such as trafficking victims or host governments, as project beneficiaries; and (3) addressed trafficking in persons, modern slavery, or forced labor. Because State, DOL, and USAID managed such projects, we focus on them as the three key agencies for the purposes of our reporting objectives. According to officials from these three agencies, the projects they identified range from those with counter- trafficking in persons as a primary goal, to those in which this goal was integrated as part of each agency’s activities. We used the lists of projects that these agencies provided to report the relevant counter- trafficking projects that agencies awarded to implementing partners to carry out the projects. For our first objective, we determined the projects that were active during fiscal year 2017, including those which began, were ongoing, or ended during fiscal year 2017, and interviewed agency officials to confirm project information. To analyze the effectiveness of agencies’ programs in limiting human trafficking and modern slavery, we assessed the key agencies’ monitoring efforts for selected projects by examining the extent to which agencies have documented their monitoring activities, ensured the reliability of the performance information, and reviewed the usefulness of the performance indicators they use in monitoring projects. To assess the extent to which State, DOL, and USAID documented their monitoring activities for selected counter-trafficking in persons projects, we reviewed these agencies’ monitoring policies and related guidance as well as the full agreements for the projects to identify specific required monitoring activities. The policies and related guidance included State’s Grants Policy Directive Number 42 (GPD-42) related to monitoring assistance awards; Federal Assistance Policy Directive (FAPD), which according to a State official superseded State’s grants policy directives, including GPD-42; Federal Assistance Directive, which superseded the FAPD; Program Design and Performance Management Toolkit; and Program and Project Design, Monitoring, and Evaluation Policy. We also reviewed State’s Office to Monitor and Combat Trafficking in Persons standard operating procedures. For DOL, we reviewed its Management Procedures and Guidelines (MPG) as well as the Comprehensive Monitoring and Evaluation Plan Guidance Document referenced in the fiscal year 2017 MPG. 
For USAID, we reviewed—from its Automated Directives System or ADS—Chapter 203 on Assessing and Learning and Chapter 201 on Program Cycle Operational Policy, which according to USAID officials superseded Chapter 203. Once we determined what tools the agencies use to monitor their counter-trafficking in persons projects, we sought documentation of those tools to determine whether agencies were implementing those tools. To assess the agencies’ monitoring efforts, we identified all of State’s, DOL’s, and USAID’s projects that started before or during October 2015, which corresponds to the first quarter of fiscal year 2016, and were active through September 30, 2017, which corresponds to the fourth and last quarter of fiscal year 2017. This produced a list of a total of 57 State, DOL, and USAID projects. Out of these 57 projects, we excluded 3 projects from our selection for various reasons. We excluded one DOL project because DOL identified the project as being a research project for which certain agency performance monitoring requirements (e.g., indicators, targets) are not applicable. We also excluded two USAID projects because USAID identified each project as including several projects with various start and end dates, thus making it difficult to determine their time frames for inclusion in our report. This resulted in a selection of 54 projects—37 from State, 3 from DOL, and 14 from USAID. We reviewed documentation of key monitoring activities as specified in agency policy or the project award agreements to determine the extent to which the agencies had full documentation of key monitoring activities. We also applied federal standards for internal control, which call for agency management to design monitoring activities so that all transactions are completely and accurately recorded, and GAO’s key attributes of effective performance measures, specifically the attribute of having a numerical target. We made our determinations of the extent to which agencies had full documentation of key monitoring activities, as follows: State (37 projects). To determine whether State had fully documented its monitoring activities, we reviewed the monitoring plan for each project; fiscal year 2017 quarterly progress reports for each project; and the final progress report, including indicators and targets, for the seven projects that ended as of December 2017. We determined that State had “fully documented” the monitoring plan, if State provided a monitoring plan worksheet for the project. If State did not provide a monitoring plan worksheet for the project, we determined the monitoring plan was “not documented.” For each quarterly progress report for fiscal year 2017 as well as the final progress report for projects that ended as of December 2017, we determined that State had “fully documented” the report, if the report included both a qualitative and quantitative summary of progress. For the State TIP Office projects we reviewed, the qualitative summary of progress is captured in a narrative and the quantitative summary of progress is captured in the logic model. For the State DRL project we reviewed, the qualitative summary of progress is captured in a narrative and the quantitative summary of progress is captured in the monitoring plan. 
If either component—narrative or quantitative summary—was not documented, we determined that the report was “partially documented.” If both components were not documented, we determined that the report was “not documented.” We determined that State had “fully documented” indicators and targets for projects that ended as of December 2017, if the final progress report for the project included indicators as well as targets for each indicator. If the final progress report included indicators but did not specify targets for each indicator, we determined that indicators and targets were “partially documented.” If the final progress report did not include indicators and targets, we determined that indicators and targets were “not documented.” (We did not find any instances of “not documented.”) DOL (3 projects). To determine whether DOL had full documentation of its monitoring activities, we reviewed the monitoring plan as well as fiscal year 2017 semi-annual progress reports for each project. Because DOL’s three projects were ongoing as of December 2017, we reviewed the second semi-annual progress report for fiscal year 2017 to determine whether DOL had “fully documented” indicators and targets for each project. Overall, we determined that DOL had “fully documented” (1) the monitoring plan for each project, if the monitoring plan documented the performance metrics and data collection frequency for the project; (2) each fiscal year 2017 semi- annual progress report for the project, if the report included a qualitative and quantitative summary of progress for the period of performance; and (3) indicators and targets for the project, if the second semi-annual progress report included indicators as well as targets for each applicable indicator. USAID (14 projects). To determine whether USAID had full documentation of its monitoring activities, we reviewed the monitoring plan for each project; fiscal year 2017 progress reports at the reporting frequency specified in the agreements for each project; and the final progress report, including indicators and targets, for the three projects that ended as of December 2017. We also reviewed evidence of site visits conducted during the life time of the projects. Overall, we determined that USAID had “fully documented” (1) the monitoring plan for each project, if the monitoring plan documented performance metrics for the project; (2) the periodic progress reports for fiscal year 2017 as well as the final progress report for projects that ended as of December 2017, if the report included a qualitative and quantitative summary of progress for the period of performance; and (3) indicators and targets for the three projects that ended as of December 2017, if the final progress report included indicators as well as targets for each applicable indicator. We determined that USAID “fully documented” a project’s site visit, if USAID provided evidence of having conducted at least one site visit during the life time of the project. Additionally, we interviewed knowledgeable monitoring officials from each agency to understand agencies’ monitoring process and application of monitoring requirements for counter-trafficking in persons projects. Because State and DOL officials also identified site visits as a key tool they use to monitor their counter-trafficking in persons projects, we reviewed evidence of site visits conducted during the life time of the projects to report on these efforts. 
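The rating rule we applied to each progress report can be restated compactly, as in the following sketch. It is an illustrative restatement only, not a GAO tool, and the example report labels are hypothetical.

```python
# Illustrative restatement of the rating rule applied to each progress
# report: "fully documented" if both a qualitative (narrative) and a
# quantitative summary were present, "partially documented" if one was
# missing, and "not documented" if both were missing. Labels are hypothetical.

def rate_progress_report(has_narrative, has_quantitative_summary):
    if has_narrative and has_quantitative_summary:
        return "fully documented"
    if has_narrative or has_quantitative_summary:
        return "partially documented"
    return "not documented"

examples = [
    ("Project A, Q1 FY2017", True, True),
    ("Project B, Q3 FY2017", True, False),
    ("Project C, Q4 FY2017", False, False),
]
for label, narrative, quantitative in examples:
    print(label, "->", rate_progress_report(narrative, quantitative))
```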
We also interviewed State TIP Office officials to discuss instances in which the agency did not have full documentation of key monitoring activities. To assess the extent to which key agencies have ensured the reliability of the performance information they use to monitor selected projects, we selected for review a nongeneralizable sample of 5 projects—2 State projects, 1 DOL project, and 2 USAID projects—out of the 54 counter- trafficking in persons projects identified by agencies that started before or during October 2015 and were active through fiscal year 2017. We based our selection of these projects primarily on largest total award amounts. For these selected projects, we obtained 2 years of progress reports and other documents to assess the quantitative and qualitative performance information. We developed a standardized template to capture all quarterly or semi-annual indicator performance information reported for each of these projects and assessed whether quarterly or semi-annual totals were consistent with annual and cumulative totals where these were reported. Using this quantitative information, we judgmentally selected indicators for inclusion in agency interviews where it appeared likely that numerical errors had occurred or there appeared to be significant project events, such as large over- or under-performance or the elimination of the indicator. We interviewed agency officials, including managers of these five projects, about the consistency and completeness of monitoring information in these projects for about 60 indicators identified through our analysis. Additionally, we questioned these officials about performance report narrative information describing project activities that, in our judgement, appeared to be incomplete or inconsistent with respect to indicator results. We also used these interviews to determine whether our findings for these selected projects reflected general agency policies and procedures. We assessed the completeness and consistency of project performance data that State, DOL, and USAID use to monitor projects as part of our data reliability assessment. We found State and USAID data to be unreliable in the projects we reviewed. We discuss the implications of these unreliable data for State and USAID’s project management and reporting in our findings and recommendations. We found the performance data that DOL used were consistent and complete for the project we reviewed. While we examined indicator data and narrative information for consistency and completeness, we did not verify the accuracy of performance information. To assess the extent to which key agencies have reviewed the usefulness of the performance indicators they use to monitor selected projects, we used the same nongeneralizable sample of five projects— two State projects, one DOL project, and two USAID projects. We interviewed agency officials, including managers of these five projects, about processes and systems they use to review the usefulness of indicators on an ongoing basis, such as when conditions in the project activity region change or if the agency and implementing partner learn that certain project activities are less effective than expected. We identified examples of indicators that had apparently been discontinued, as well as continued indicators that showed minimal progress, and we asked these officials to explain what had or had not been discontinued. 
We also used these interviews to determine whether our findings for these selected projects reflected general agency policies and procedures. We conducted this performance audit from October 2017 to December 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Three Key U.S. Agencies’ Counter-trafficking in Persons Projects, Active in Fiscal Year 2017
The Departments of State (State) and Labor (DOL), and U.S. Agency for International Development (USAID) managed 120 projects in counter-trafficking in persons carried out by implementing partners during fiscal year 2017, according to information provided by officials with these agencies. The three agencies used different approaches to identify relevant projects. For example, State reported projects with a primary goal of counter-trafficking in persons, while DOL and USAID included projects that may not have counter-trafficking in persons as a primary goal. Table 4 lists these agencies’ reported project information for projects that were active during fiscal year 2017.
Appendix III: State Documentation for Its Performance Monitoring Activities for 37 Counter-Trafficking in Persons Projects
The Department of State (State) did not fully document its monitoring activities (monitoring plan; fiscal year 2017 quarterly progress reports; and final progress report, including indicators and targets, for projects that ended as of December 2017) for 16 of the 37 selected projects we reviewed with start dates between fiscal years 2011 and 2016. (See table 5.) For example, State’s Office to Monitor and Combat Trafficking in Persons did not have monitoring plans for nine projects or targets for each indicator in six of seven final progress reports for projects that ended as of December 2017.
Appendix IV: Comments from the Department of State
Appendix V: Comments from the U.S. Agency for International Development
GAO Comments
1. USAID commented that it does not believe that our draft report reflected the existing controls the USAID mission in Ghana shared with us, and that the mission had furnished us with a file that, according to USAID, contained correct information for all indicators and their results from the time the activity began until our audit. While the mission provided us with a spreadsheet, this document included only annual performance totals for several years without accompanying quarterly totals, or quarterly or annual narrative information. We focused our analysis on the quarterly and annual performance reports to understand the extent to which USAID was ensuring the consistency and completeness of performance information, including associated narratives, underlying its aggregate and higher-level performance reports. We reported on inconsistent or incomplete performance information only after discussing and substantiating the specific errors we identified with USAID officials. Further, we recognize USAID’s efforts to address errors that the agency identified prior to our review and we provide an example of such efforts in the report. 2. We have incorporated USAID’s comment.
Our report no longer characterizes USAID’s regular activity monitoring and conversations with implementing partners as “informal.” 3. USAID noted that our report does not discuss how the USAID mission in Ghana uses its third-party monitoring project—Monitoring, Evaluation and Technical Support Services (METSS)—to work with local organizations to improve their collection and analysis of data. We have added a reference to USAID’s third-party monitoring project to the report where we discussed limited capacity of local partners as a cause of data reliability issues. 4. USAID commented that one of the Ghana counter-trafficking in persons indicators we examined in the integrated project (“value of new private sector investments in selected value-chains”), was not related to trafficking in persons and, therefore, was not directly related to the focus of our audit. As discussed in the Objectives, Scope, and Methodology section of our report (see app. I), we selected projects, including the integrated project in Ghana, based on a list of counter- trafficking in persons projects provided by USAID. Because the same operational policy that sets the monitoring and evaluation standards for the agency applied to all indicators within a given project, we examined available quarterly or semi-annual indicator data for all reported indicators in selected projects to determine the completeness and consistency of the data. We then conducted interviews with agency officials to discuss instances in which we identified potentially incomplete and inconsistent performance information, as well as whether our findings about the management of performance information for these selected projects reflected general agency policies and procedures. Appendix VI: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Leslie Holen (Assistant Director), Victoria Lin (Analyst-in-Charge), Esther Toledo, and Andrew Kurtzman made key contributions to this report. The team benefited from the expert advice and assistance of Neil Doherty, Justin Fisher, Benjamin Licht, Grace Lui, and Aldo Salerno. Related GAO Products Human Trafficking: State Has Made Improvements in Its Annual Report but Does Not Explicitly Explain Certain Tier Rankings or Changes, GAO-17-56 (Washington, D.C.: December 5, 2016). Human Trafficking: Oversight of Contractors’ Use of Foreign Workers in High-Risk Environments Needs to Be Strengthened. GAO-15-102 (Washington, D.C.: November 18, 2014). Human Trafficking: Monitoring and Evaluation of International Projects Are Limited, but Experts Suggest Improvements. GAO-07-1034 (Washington, D.C.: July 26, 2007). Human Trafficking: Better Data, Strategy, and Reporting Needed to Enhance U.S. Antitrafficking Efforts Abroad. GAO-06-825 (Washington, D.C.: July 18, 2006).
Why GAO Did This Study
Human trafficking is a pervasive problem throughout the world. Victims are often held against their will in slave-like conditions. The National Defense Authorization Act for Fiscal Year 2017 includes a provision for GAO to report on the programs conducted by specific agencies, including State, DOL, and USAID, that address trafficking in persons. Among other objectives, this report (1) identifies the recent projects in international counter-trafficking in persons that key U.S. agencies have awarded to implementing partners; and, for selected projects, assesses the extent to which key agencies have (2) documented their monitoring activities and (3) ensured the reliability of project performance information. GAO reviewed State, DOL, and USAID project documents and interviewed agency officials. GAO reviewed monitoring documents for 54 of the 57 projects that were active from the beginning of fiscal year 2016 through the end of fiscal year 2017. Of these 54 projects, GAO selected a nongeneralizable sample of 5 projects, based primarily on largest total award amounts, for review of the reliability of project performance information.
What GAO Found
The Departments of State (State), Labor (DOL), and the U.S. Agency for International Development (USAID)—through agreements with implementing partners—managed 120 international counter-trafficking in persons projects during fiscal year 2017. GAO reviewed a selection of 54 counter-trafficking projects (37 State, 3 DOL, and 14 USAID), and found that DOL and USAID had fully documented their monitoring activities, while State had not. All three agencies used similar tools to monitor the performance of their projects, such as monitoring plans, performance indicators and targets, progress reports, and site visits. GAO found, however, that State did not fully document its monitoring activities for 16 of its 37 projects (43 percent). GAO found that State did not have the monitoring plans or complete progress reports for one-third of its projects and often lacked targets for performance indicators in its final progress reports. State officials said they had not required targets for each performance indicator for the projects GAO reviewed, or had not set targets due to limited resources in prior years. State has taken steps to improve its monitoring efforts, including issuing a November 2017 policy that requires targets to be set for each performance indicator and developing an automated data system that would require targets to be recorded. However, because the pilot data system allows targets to be recorded as “to be determined” and does not have controls to ensure entry of actual targets, it is uncertain whether performance targets will be regularly recorded. Without full documentation of monitoring activities and established performance targets, State has limited ability to assess project performance, including project efficiency or effectiveness. GAO reviewed the reliability of project performance information for 5 of the 54 counter-trafficking projects (2 State, 1 DOL, and 2 USAID) and found that State and USAID used inconsistent and incomplete performance information, while DOL used consistent and complete information. For example, some quarterly indicator results in State and USAID progress reports were inconsistent with annual total results, and narrative explanations for significant deviations from performance targets were sometimes not present in quarterly reports.
According to agency officials, performance information from these projects is regularly used not only for direct project oversight but also for internal and external reporting, program decisions, and lessons learned. GAO found that State's and USAID's processes lack sufficient controls to ensure the reliability of project performance information, but did not find inadequate controls in DOL's process. For example, neither State nor USAID consistently used automated checks on indicator results to ensure consistency and completeness of performance indicator result calculations. In contrast, DOL used automated checks as part of its process. Without implementing controls to ensure that performance information is consistent and complete, State and USAID officials cannot fully or accurately understand what projects are, or are not, achieving, and how their efforts might be improved. What GAO Recommends GAO is making four recommendations to State and one recommendation to USAID, including that both agencies establish additional controls to improve the consistency and completeness of project performance information, and that State maintain monitoring activity documentation and establish targets for each performance indicator. State and USAID concur with GAO's recommendations.
Background Information on Selected Air Force and Navy Fixed-Wing Aircraft
The inventories of the selected Air Force and Navy fixed-wing aircraft in our review totaled 2,823 aircraft and required approximately $20 billion to operate and support in fiscal year 2016. The inventory, aircraft status, initial operational capability, and service life forecast for each of the 12 selected fixed-wing aircraft are shown in figure 1.
Policy and Guidance for the Sustainment of Fixed-Wing Aircraft
Sustainment of fixed-wing aircraft and other weapon systems comprises the logistics and personnel services required to maintain and prolong operations, and DOD policy provides direction to service components on sustainment planning across the life cycle of the weapon system. Specifically, DOD policy requires the services to develop and implement a sustainment strategy, such as a Life-cycle Sustainment Plan, for sustaining their weapon systems. According to DOD’s policy, this strategy should be the basis for all sustainment efforts, including sustainment metrics mapped to key performance parameters and key system attributes, such as aircraft availability, to manage sustainment performance. The policy states that, after initial operating capability, programs should update the sustainment plan whenever there are major changes to their strategy for sustaining the weapon system, or every 5 years, whichever occurs first. The Air Force and the Navy also have guidance that implements the requirements of the DOD guidance. These services’ guidance includes sustainment-planning requirements for life-cycle sustainment and assurance of affordability.
Roles and Responsibilities for the Sustainment of Fixed-Wing Aircraft
There are a variety of DOD offices that have roles and responsibilities related to sustaining fixed-wing aircraft. For instance, the Under Secretary of Defense for Acquisition and Sustainment (USD (A&S)) is the principal staff assistant and advisor to the Secretary of Defense for all matters concerning acquisition and sustainment. Specifically, USD (A&S) is responsible for establishing policies for logistics, maintenance, and sustainment support for all elements of DOD, including fixed-wing aircraft. The Assistant Secretary of Defense for Logistics and Materiel Readiness (ASD (L&MR)) serves as the principal staff assistant and advisor to the USD (A&S) on logistics and materiel readiness within DOD. Specifically, the ASD (L&MR) is responsible for (1) establishing DOD policies and procedures for logistics, maintenance, materiel readiness, strategic mobility, and sustainment support; (2) providing related guidance to the Secretaries of the military departments, including developing the Life-cycle Sustainment Plan outline; and (3) monitoring and reviewing programs associated with these areas, among other duties and responsibilities. For the Air Force, the Air Force Materiel Command develops, acquires, and sustains weapon systems through research, development, testing, evaluation, acquisition, maintenance, and program management of the systems and their components. This command provides acquisition and life-cycle management services and logistics support, among other things. The Air Force Life Cycle Management Center within the Air Force Materiel Command is responsible for the life-cycle management of weapon systems from inception to retirement. A Program Executive Officer—who manages a specific portfolio of weapon systems—is responsible for each of the selected fixed-wing aircraft.
The Program Executive Officer oversees the program office that manages each weapon system. For the Navy and Marine Corps, the Naval Air Systems Command is responsible for providing the full life-cycle support of naval aviation aircraft, weapons, and systems. This support includes research, design, development and systems engineering; acquisition; test and evaluation; training facilities and equipment; repair and modification; and in-service engineering and logistics support. As with the Air Force, Program Executive Officers oversee their assigned program managers. DOD relies on program managers to lead the development, delivery, and sustainment of individual weapon systems through their life cycles. The program managers are the designated individuals with responsibility for and authority to accomplish the program's sustainment objectives to meet the users' operational needs. Product support managers, who work within the program offices, are responsible for developing and implementing support strategies for weapon systems that maintain readiness and control life-cycle costs.

Weapon systems are sustained under various arrangements that may include contractors, DOD organic facilities, or some combination of the two. For example, the Air Force Sustainment Center provides depot maintenance for weapon systems through its Air Logistics Complexes. Naval Air Systems Command is responsible for the Navy Fleet Readiness Centers, which provide depot-level maintenance for Navy and Marine Corps fixed-wing aircraft. Additionally, the Air Force Sustainment Center and the Naval Supply Systems Command, as well as the Defense Logistics Agency, manage inventories of repair parts, and individual weapon system programs are typically supported by a complex supplier network that includes a prime contractor, subcontractors, and various tiers of parts suppliers. Alternatively, sustainment responsibilities—in their entirety or for particular elements—may be contracted out as part of a public-private partnership or a performance-based logistics agreement, such as with the F-22 Raptor.

Key Sustainment Metrics for Fixed-Wing Aircraft

The Air Force and Navy monitor the readiness status of the selected fixed-wing aircraft through numerous performance metrics. Specifically, the Air Force measures how well a fleet is performing by calculating the availability of the fleet's aircraft, which is the number of aircraft that are available for flight operations. The Navy measures its aircraft availability through two metrics: (1) Ready-Basic-Aircraft (RBA)—the number of aircraft that are able to safely fly—and (2) Ready-for-Tasking (RFT)—the number of aircraft that are able to conduct specific missions. Both the Air Force and Navy have established goals associated with aircraft availability. In addition to measuring the availability of the aircraft against the associated goals, the Air Force and Navy track the reasons for aircraft not being available or able to conduct missions. Specifically, the Air Force and Navy track the following:

Aircraft in depot: Aircraft unavailable to conduct missions because of scheduled or unscheduled depot maintenance or modification.

Not mission capable maintenance: Aircraft that are not in depot and not capable of performing any of their assigned missions because of maintenance.

Not mission capable supply: Aircraft that are not in depot and not capable of performing any of their assigned missions because of the lack of a repair part.
In addition to these three metrics, the Air Force also tracks the following:

Not mission capable for both supply and maintenance: Aircraft that are not in depot and not capable of performing any of their assigned missions because of both maintenance and the lack of a repair part.

Units possessed not reported: Aircraft that are not available for use for reasons other than depot and not mission capable status, but possessed by the squadron.

Operating and Support Costs for Major Weapon Systems

There are various costs associated with operating and supporting weapon systems. DOD's Operating and Support Cost-Estimating Guide provides direction to the service components on developing estimates to support various analyses and reviews throughout the program life cycle. According to the guide, as a program matures, it remains necessary to continue to track and assess O&S costs and trends to ensure that the program remains sustainable, affordable, and properly funded. Each military department maintains a database that collects historical data on the O&S costs for major fielded weapon systems. DOD's Office of Cost Assessment and Program Evaluation provides policy guidance on this requirement, known as the Visibility and Management of Operating and Support Costs program; specifies the common format in which the data are to be reported; and monitors its implementation by each of the military departments. O&S costs are categorized using the following six overarching elements:

unit-level manpower—cost of operators, maintainers, and other support manpower assigned to operating units;

unit operations—cost of unit operating materiel, such as fuel and training material, unit support services, and unit travel;

maintenance—cost of system maintenance, including depot-level maintenance;

sustaining support—cost of system support activities that are provided by organizations other than the system's operating units;

continuing system improvements—cost of system hardware and software modifications; and

indirect support—cost of activities that provide general services that lack the visibility of actual support to specific force units or systems.

Air Force and Navy Fixed-Wing Aircraft Availability and O&S Costs Have Varied, and Aircraft Availability Goals Generally Were Not Met

For the selected Air Force and Navy fixed-wing aircraft in our review, aircraft availability and O&S cost trends varied over the 6-year period between fiscal years 2011 and 2016, and the aircraft generally did not meet availability goals. We found that 6 of the 12 fixed-wing aircraft—3 from each service—experienced decreased aircraft availability between fiscal years 2011 and 2016. One aircraft met availability goals each year between fiscal years 2011 and 2016. Conversely, six aircraft met the goals in some years but not others, and five aircraft did not meet the goals in any year. In the latest year included in our review—fiscal year 2016—9 of the 12 fixed-wing aircraft did not meet their associated availability goals. With respect to O&S costs, the overall O&S total for all 12 aircraft was about $20 billion annually over the 6-year period; some aircraft experienced increases while the costs to operate and support others decreased. The reasons for changes in costs included increases in maintenance costs for 8 of the 12 fixed-wing aircraft.
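As context for the availability figures discussed in this section, the status categories defined above under Key Sustainment Metrics roll up arithmetically into an availability rate and a not-mission-capable breakdown. The following is a minimal Python sketch of that roll-up, assuming a hypothetical fleet of 100 aircraft; the counts, the dictionary keys, and the function name are our illustrative assumptions and do not reflect actual data from this review, which DOD deemed sensitive.

```python
# Minimal sketch: rolling hypothetical fleet status counts into the
# availability and not-mission-capable measures described above.
# All numbers below are made up for illustration only.

def availability_breakdown(status_counts):
    """Return availability and not-mission-capable rates as percentages of the fleet."""
    total = sum(status_counts.values())
    return {
        "aircraft_availability_pct": round(100 * status_counts["available"] / total, 1),
        "in_depot_pct": round(100 * status_counts["in_depot"] / total, 1),
        "nmc_maintenance_pct": round(100 * status_counts["nmc_maintenance"] / total, 1),
        "nmc_supply_pct": round(100 * status_counts["nmc_supply"] / total, 1),
    }

# Hypothetical fleet of 100 aircraft (not actual data from this review).
example_fleet = {
    "available": 60,        # aircraft available for flight operations
    "in_depot": 15,         # scheduled or unscheduled depot maintenance or modification
    "nmc_maintenance": 15,  # not mission capable because of maintenance
    "nmc_supply": 10,       # not mission capable because of the lack of a repair part
}

print(availability_breakdown(example_fleet))
# {'aircraft_availability_pct': 60.0, 'in_depot_pct': 15.0,
#  'nmc_maintenance_pct': 15.0, 'nmc_supply_pct': 10.0}
```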
Below we summarize these trends; the "Sustainment Quick Looks" in appendices II–XIII provide detailed information on the trends associated with each of the 12 fixed-wing aircraft, and appendix XV provides additional information on operating and support cost per aircraft.

Air Force Aircraft Availability and O&S Cost Trends Varied across the Selected Fixed-Wing Aircraft

Air Force Aircraft Availability Trends Varied, and Three of Five Aircraft Did Not Meet Availability Goals since 2011

Our analysis found that:

between fiscal years 2011 and 2016, aircraft availability for two of the five selected Air Force fixed-wing aircraft fluctuated and for three decreased;

between fiscal years 2011 and 2016, two aircraft met availability goals in some years, and three aircraft did not meet availability goals in any of the years; and

in fiscal year 2016, four of the five aircraft did not meet availability goals.

Specific details regarding aircraft availability and not mission capable status for maintenance, supply, and both maintenance and supply were omitted because DOD deemed this information sensitive (i.e., For Official Use Only). According to officials, when aircraft availability goals are not met, training and operational missions may not be fulfilled in a timely manner. For example, F-22 squadron officials explained that the lack of available aircraft creates a shortage of trained pilots, because F-22 pilots need extensive training to fulfill their air-superiority role. Further, command officials explained that when aircraft availability goals are not met, there may not be enough aircraft to respond to contingency requirements. Officials expressed concern that, given the capability and expectation of the F-22 to be available to create air superiority in any operation, missions may not be met. Additionally, E-8C program office officials stated that missions are often limited to top priority, which means supported combatant commands may not obtain all needed capabilities, such as the E-8C not being able to provide surveillance capability to particular combatant commands.

Air Force O&S Cost Trends Have Varied, and Maintenance Costs Generally Increased since 2011

From fiscal years 2011 through 2016, O&S costs for the Air Force aircraft in our review totaled about $13 billion annually. These costs decreased for the C-17, F-16, and F-22, but increased for the B-52 and E-8C, as shown in figure 2. For example, the F-16's total annual O&S costs decreased by about $943 million (or about 19 percent) because of decreases in all cost elements—the largest decrease being unit operations—except sustaining support. According to officials, the decrease in unit operations can be attributed to the retirement of aircraft and the consolidation of squadrons. The C-17's and F-22's O&S costs decreased mainly because of decreases in two cost elements: continuing system improvements and unit operations. In contrast, the B-52's and the E-8C's O&S costs increased, by $76 million (or about 6 percent) and $41 million (or about 6 percent), respectively. The increases occurred because two of the cost elements—continuing system improvements and maintenance—increased more than the other cost elements decreased. Based on our analysis of the O&S cost elements, maintenance cost generally is one of the largest portions—on average about 36 percent—of total O&S costs for each aircraft. As shown in figure 3, maintenance costs for four of the five aircraft generally increased from fiscal years 2011 through 2016.
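The percentage changes reported in this section follow directly from the reported dollar changes. As a check on that arithmetic, the short Python sketch below recomputes the F-16's approximate percent change using a rounded fiscal year 2011 baseline of about $5 billion (the figure reported in the F-16 quick look); the helper function and variable names are ours, and the inputs are rounded approximations rather than exact program values.

```python
# Minimal sketch of the percent-change arithmetic behind the O&S cost trends
# described above. Inputs are rounded approximations drawn from this review
# (F-16 O&S costs of roughly $5.0 billion in fiscal year 2011 and a reported
# decrease of about $943 million by fiscal year 2016); they are illustrative,
# not exact program values.

def percent_change(start, end):
    """Percent change from a start-year value to an end-year value."""
    return 100 * (end - start) / start

f16_fy2011 = 5.0e9               # approximate F-16 O&S cost, fiscal year 2011
f16_fy2016 = f16_fy2011 - 943e6  # reported decrease of about $943 million

print(round(percent_change(f16_fy2011, f16_fy2016), 1))
# -18.9, i.e., a decrease of roughly 19 percent, consistent with the figure above
```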
Specifically, maintenance costs for the C-17, E-8C, and F-22 increased because of additional depot maintenance needs. B-52 maintenance costs fluctuated year to year, but increased overall during this period. The overall maintenance costs for the F-16 decreased by approximately $140 million. According to our analysis, even though there was an increase in some of the F-16 maintenance cost elements, the fleet's executed flying hours decreased. Therefore, flying-hour depot-level reparable costs decreased by approximately $123 million and engine repair costs decreased by $115 million, causing the overall maintenance cost to decrease.

Navy Aircraft Availability Trends and O&S Cost Trends Varied across the Selected Fixed-Wing Aircraft

Navy Aircraft Availability Trends Varied, and Five of Seven Aircraft Generally Did Not Meet Availability Goals since 2011

Our analysis found that:

between fiscal years 2011 and 2016, aircraft availability increased for three of the seven Navy fixed-wing aircraft, fluctuated for one, and decreased for the remaining three aircraft;

between fiscal years 2011 and 2016, one aircraft met aircraft availability goals in each year, and four aircraft met goals in some years, while two aircraft did not meet goals in any of the years; and

in fiscal year 2016, the Navy did not meet aircraft availability goals for five of the seven aircraft.

Specific details regarding aircraft availability and not mission capable status for maintenance and supply were omitted because DOD deemed this information sensitive (i.e., For Official Use Only). To address decreases in aircraft availability, the Navy has moved available aircraft between squadrons to help ensure deploying squadrons are fully equipped for their assigned missions. In November 2017, the Commander of Naval Air Forces testified before the House Armed Services Committee that to equip the air wings with the required number of mission capable aircraft for the deployment of three aircraft carriers in 2017, the Navy had to transfer 94 strike fighters to and from the maintenance depots or between squadrons. This transfer included pulling aircraft from fleet replacement squadrons, where the focus should be on training new aviators. The Commander of Naval Air Forces, in his November 2017 testimony, summarized the issue: "That strike fighter inventory management, or shell game, leaves non-deployed squadrons well below the number of jets required to keep aviators proficient and progressing toward their career qualifications and milestones, with detrimental impacts to both retention and future experience levels." Furthermore, based on our analysis, F/A-18A-D squadrons underexecuted their flight hours by an average of 4 percent from fiscal years 2011 through 2016. According to officials, this is largely due to low aircraft availability. Additionally, placing further strain on aircraft availability, the F/A-18A-D inventory decreased from 581 aircraft in fiscal year 2011 to 537 aircraft in fiscal year 2016.

The Navy's O&S and Maintenance Cost Trends Varied

From fiscal years 2011 through 2016, O&S costs for the Navy's seven selected fixed-wing aircraft totaled about $7 billion annually. The Navy has also experienced varying O&S and maintenance costs since fiscal year 2011 for these aircraft. Specifically, annual O&S costs decreased for the AV-8B, C-2A, E-2C, and F/A-18A-D, and increased for the E-2D, EA-18G, and F/A-18E-F, as shown in figure 4.
We found that O&S costs for the F/A-18A-D decreased by about 22 percent, from about $3.1 billion in fiscal year 2011 to about $2.4 billion in fiscal year 2016. According to officials, this decrease can be attributed to the decrease in inventory as aircraft are retired and squadrons transition to the F-35 Joint Strike Fighter. In another example, O&S costs for the E-2D increased from about $1.6 million in fiscal year 2012 to about $125 million in fiscal year 2016. The size of the fleet has increased by 17 aircraft—from 3 to 20—since fiscal year 2011. According to officials, this aircraft remains in production with a projected fleet size of 75; as inventory increases, so will O&S costs. Based on our analysis of the O&S cost elements, maintenance cost generally is one of the largest portions—about 42 percent—of total O&S costs for the seven aircraft in our review. Annual maintenance costs have increased for the C-2A, E-2D, EA-18G, and F/A-18E-F, and decreased for the AV-8B, E-2C, and F/A-18A-D, as shown in figure 5. We found that maintenance costs for the C-2A increased by about 7 percent, from about $89 million in fiscal year 2011 to about $95 million in fiscal year 2016. According to officials, the increase in maintenance cost can be attributed to increased demand for outer wing panels, which resulted in a $16 million increase in depot-level repair costs, and a more than 10 percent increase in executed flight hours, among other things. In another example, maintenance costs for the AV-8B decreased by about 9 percent, from about $375 million in fiscal year 2011 to about $341 million in fiscal year 2016. According to officials, these decreases can be attributed to the AV-8B no longer being used in Operation Enduring Freedom in 2012, the loss of six aircraft, and the transition of AV-8B squadrons to the F-35 Joint Strike Fighter.

The Air Force and Navy Face Similar Sustainment Challenges That Affect Aircraft Availability and O&S Costs across the Selected Fixed-Wing Aircraft

The Air Force and Navy face similar sustainment challenges related to aging, maintenance, and supply support that affect aircraft availability and O&S costs for the 12 aircraft selected in our review, as shown in figure 6. Specifically, 10 of the 12 aircraft are experiencing sustainment challenges related to aging; all 12 are experiencing challenges related to maintenance; and all 12 are also experiencing challenges related to supply support. Below is a brief overview of these challenges:

Aging: A number of these aircraft are aging and operating beyond their planned service life, partly because of delays in replacement aircraft. Specifically, the Air Force and Navy plan to replace the F-16, AV-8B, and F/A-18A-D with the F-35 Joint Strike Fighter. The Navy is expected to transition the F/A-18A-D through 2030, and the Marine Corps is planning to use the F/A-18A-D beyond 2030 (although these time frames have been extended several times already). The Navy plans to retire the AV-8B in 2026. On the other hand, the Air Force is not expected to retire the F-16 until at least 2040. Because of aging, according to officials, there are parts on some aircraft that need to be repaired and replaced that were not accounted for during initial sustainment analysis.
To mitigate some challenges associated with the age of the fixed-wing aircraft, Air Force and Navy program officials have decided to extend the service life of some aircraft by repairing and overhauling airframes and components, as well as by developing the engineering specifications for parts that were never planned to be repaired or replaced.

Maintenance: Delays in getting aircraft into and through depot maintenance, as well as shortages of skilled maintainers, are contributing to some aircraft missing their availability goals. Both services reported losing experienced maintainers, either to retirement or to other programs such as the F-35 Joint Strike Fighter. To address maintenance challenges, program offices for the selected aircraft have improved the efficiency and speed of depot maintenance and are working to ensure there are sufficient numbers of trained maintainers.

Supply Support: Some aircraft are encountering supply shortages as a result of parts not being available, in some cases due to obsolescence issues or diminishing manufacturer sources. Overcoming part shortages, through either searching for replacement parts or reengineering parts, takes time, which can contribute to aircraft being unavailable for longer periods. To mitigate supply challenges, officials have proactively upgraded aircraft before obsolescence occurs or located available parts and reengineered parts that are no longer in production, as well as identified suitable manufacturers in advance, among other things.

For more specific information on sustainment challenges related to aging aircraft, maintenance, and supply support for each of the fixed-wing aircraft, see the "Sustainment Quick Looks" in appendices II–XIII.

Sustainment Strategies Were Documented for Some but Not All Aircraft, and the Air Force and Navy Have Other Efforts to Review and Improve Aircraft Availability

The Air Force has documented sustainment strategies for its five selected aircraft in our review, but the Navy has not documented sustainment strategies, or has not updated them, for four of the seven Navy aircraft in our review. The Air Force and Navy also regularly reviewed sustainment metrics and have implemented plans to improve aircraft availability.

The Air Force Has Documented Sustainment Strategies, but Some of the Navy's Strategies Are Not Documented or Up-to-Date

The Air Force has documented sustainment strategies for the five selected fixed-wing aircraft and updated them in accordance with Air Force guidance. However, the Navy has not documented a sustainment strategy, or has not updated the strategy since 2012, for four of the seven aircraft in our review. See figure 7 for the year of the most recent update to the sustainment strategy for each aircraft in our review. While sustainment strategies do not guarantee successful outcomes, they serve as a tool to guide operations as well as to support planning and implementation of activities through the life cycle of the aircraft. Specifically, at a high level, the strategy is aimed at integrating requirements, product support elements, funding, and risk management to provide oversight of the aircraft. For example, these sustainment strategies can be documented in a life-cycle sustainment plan, a postproduction support plan, or an in-service support plan, among other types of documented strategies.
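The 5-year update requirement described in the Background lends itself to a simple currency check against the update years shown in figure 7 and discussed below. The following Python sketch flags strategies that are missing or more than 5 years old; the function and variable names are ours, the years for the E-2D and F/A-18E-F are assumed illustrative values (their strategies are current), and this is an illustrative check rather than an official DOD or Navy tool.

```python
# Minimal sketch of checking whether documented sustainment strategies fall
# within DOD's 5-year update window. Years for the C-2A, E-2C, EA-18G,
# F/A-18A-D, and AV-8B reflect this report; E-2D and F/A-18E-F years are
# assumed for illustration only.

def strategies_needing_update(last_update_years, current_year, max_age=5):
    """Return aircraft whose strategy is missing or older than max_age years."""
    overdue = []
    for aircraft, year in last_update_years.items():
        if year is None or current_year - year > max_age:
            overdue.append(aircraft)
    return overdue

navy_strategy_updates = {
    "AV-8B": 2013,      # Strategic Sustainment and Warfighting Relevance Plan (2013)
    "C-2A": None,       # no documented sustainment strategy
    "E-2C": 2011,
    "E-2D": 2013,       # assumed illustrative year; strategy is current
    "EA-18G": 2006,
    "F/A-18A-D": 2001,
    "F/A-18E-F": 2013,  # assumed illustrative year; strategy is current
}

print(strategies_needing_update(navy_strategy_updates, current_year=2017))
# ['C-2A', 'E-2C', 'EA-18G', 'F/A-18A-D']
```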
Additionally, program officials stated that an aircraft's sustainment strategy is an important management tool for the sustainment of the aircraft by documenting requirements that are known by all stakeholders, including good practices identified in sustaining each aircraft. For example:

The strategy for the Air Force B-52 has been updated several times in recent years because of several major modifications. For example, in 2014 the Air Force issued an updated sustainment plan within the life-cycle management plan to update the combat network communications technology program, because the B-52's communications system is still the original from the 1950s and has limitations related to making mission or target changes in flight. The plan addresses the testing, resource management, and numerous program performance indicators and requirements of the system.

The strategy for the Air Force F-16 outlines plans for the aircraft's service life extension and includes proactive measures and data forecasting to bundle depot modifications in order to minimize fleet-wide effects on aircraft availability. The service life extension for the F-16 is designed to extend the service life of 300 F-16 aircraft from 8,000 to 13,856 flight hours at an estimated cost of $740 million (as of June 2016).

The strategy for the Navy E-2D provides a systematic approach to ensure that a comprehensive support package is in place to support the sustainment of the aircraft. It also describes the overall plan for the management and execution of the product support package by communicating the sustainment strategy to stakeholders in the acquisition, engineering, and logistics communities.

However, the Navy had not documented a sustainment strategy for the C-2A because a strategy was not required when the aircraft, now a legacy system, was going through the acquisition process prior to 1965. According to Navy officials, while they have not documented a strategy for the C-2A, they are undertaking efforts to sustain the aircraft, such as updating technical publications, performing maintenance analysis on the landing gear, and evaluating depot tasks to decrease turnaround time, among other efforts. However, a documented sustainment strategy for the C-2A would help guide the planning and implementation of these efforts, as well as serve as a management tool by documenting these requirements so that they are known by all stakeholders. In addition, the Navy's sustainment strategies for the E-2C (2011), EA-18G (2006), and F/A-18A-D (2001) were developed prior to 2012 and thus have not been updated in over 5 years. With respect to the EA-18G, Navy officials told us that the sustainment strategy should be updated in accordance with DOD's acquisition policy—DOD Instruction 5000.02—since the EA-18G is still in the acquisition process, as it continues to be produced. For the E-2C and F/A-18A-D, Naval Air Systems Command officials and program office officials told us that they were not required to document sustainment strategies because these aircraft were legacy systems at the time the requirement to develop and maintain a sustainment strategy was implemented. Therefore, according to these officials, the requirements in DOD Instruction 5000.02 to document and update sustainment strategies every 5 years were not applicable. DOD Instruction 5000.02 requires weapon systems to have some form of a sustainment strategy that is not older than 5 years; however, it is unclear whether this policy is applicable to legacy weapon systems.
Specifically, the policy states that program managers for all programs are responsible for developing and maintaining a sustainment strategy, such as a Life-cycle Sustainment Plan, beginning at the risk-reduction decision point (i.e., Milestone A of the acquisition process). However, based on our discussions with Navy program officials for our selected aircraft and our review of the policy, it is unclear whether the policy—as currently written—is applicable to legacy systems that were no longer in production and thus had completed the risk-reduction decision point (or Milestone A) prior to the requirement to update a sustainment strategy every 5 years. According to DOD officials, the intent of the policy is for all programs, including legacy weapon systems, to develop and maintain a sustainment strategy; however, the policy does not explicitly state that legacy systems are expected to fulfill this requirement. In May 2017, the Air Force updated its sustainment guidance to require sustainment strategies for legacy systems and for those strategies to be updated every 5 years. Air Force officials told us that they did this because the DOD policy was unclear whether it was applicable to legacy systems and it was a good practice to ensure the guidance was explicit for all weapon systems to document and update a sustainment strategy. This instruction explicitly states that the requirement to document a sustainment strategy and update it every 5 years is applicable to all weapon systems, including legacy systems that are in the O&S phase of their life cycles. Additionally, the Air Force Instruction states that these legacy systems are not required to retroactively meet requirements identified for previous phases of the acquisition life-cycle, but should meet the requirements needed for continued operations of the system. However, the Navy has not made the requirement explicit for legacy systems in its guidance. Specifically, Navy guidance does not explicitly state that documenting a sustainment strategy and updating that strategy every 5 years is a requirement for legacy systems. While Navy guidance requires the development and use of sustainment metrics for legacy systems and requires the Naval Air Systems Command be responsible for aviation weapon systems in sustainment, the Navy does not address any requirement for sustainment strategies for legacy systems. The lack of clarity in DOD Instruction 5000.02 and the Navy guidance regarding whether legacy systems are required to document a sustainment strategy and update that strategy every 5 years has resulted in confusion regarding sustainment planning requirements among Navy program offices and could cause confusion with other weapon system program offices across DOD. Standards for Internal Control in the Federal Government state that management should define objectives in specific terms so they are understood at all levels of the entity. The standards also state that guidance should clearly define what is to be achieved, who is to achieve it, how it will be achieved, and the time frames for achievement. As indicated by the Air Force’s 2017 update to its sustainment guidance, clarifying DOD and Navy guidance and the applicability of sustainment strategy requirements to legacy systems could be done through very small additions and clarifications to the applicable guidance documents. 
Until DOD and the Navy update or issue new guidance clarifying the requirements for documenting sustainment strategies for legacy systems, weapon system program offices, such as those for fixed-wing aircraft, as well as Naval Air Systems Command and DOD, may not have full visibility of the requirements necessary to achieve program objectives or of related risks associated with the sustainment of these weapon systems. While the DOD policy and Navy guidance are unclear, Naval Air Systems Command and the Navy program offices for the four aircraft—C-2A, E-2C, EA-18G, and F/A-18A-D—that either do not have a sustainment strategy or have not updated the strategy within the last 5 years are taking actions to document or update the sustainment strategies for these aircraft. According to Naval Air Systems Command officials, once it was brought to their attention that the intent of DOD Instruction 5000.02 was for legacy systems to have an updated, documented sustainment strategy, they began to take action to develop or update the respective sustainment strategies. Specifically, according to C-2A, E-2C, and E-2D program officials, they are currently updating the E-2D strategy for its 5-year update, which is due in fiscal year 2018, and it will include updates for the C-2A and E-2C since the airframes for all three aircraft are very similar. Also, program officials for the EA-18G and F/A-18A-D told us that they are currently updating the strategies for these aircraft and expect to complete the process in fiscal year 2018. Given that the Navy is already taking action to update its sustainment strategies and has established timelines for these updates, we are not making any recommendations to the Navy regarding updating the respective sustainment strategies.

The Air Force and Navy Regularly Reviewed Sustainment Metrics and Implemented Improvement Plans to Address Aircraft Availability

The Air Force and the Navy have (1) regularly reviewed sustainment metrics for fixed-wing aircraft and (2) implemented improvement plans to address aircraft availability.

The Air Force and Navy Have Conducted Regular Reviews of Sustainment Metrics

The Air Force and Navy have regularly monitored the condition of their fixed-wing aircraft, which includes measuring aircraft availability against planned goals as well as monitoring other sustainment metrics. Specifically, the Air Force Materiel Command monitors aircraft availability and other sustainment metrics through quarterly Weapon System Enterprise Review (WSER) briefings. The program office, in conjunction with the Air Force Life Cycle Management Center, generates the WSER, which is briefed through Air Force Materiel Command and the Program Executive Offices to the Air Force Chief of Staff. The WSER delivers insight into the comprehensive health of a system by flagging gaps in performance and identifying mitigating actions, and it is used to conduct crosscutting enterprise analysis and provide input into readiness reviews. In addition to the WSER, the program offices manage their performance through Health of the Fleet briefs. These briefs—conducted monthly or quarterly depending upon the aircraft—include readiness assessments that provide insight on maintenance and management practices. The assessment is delivered by the program's maintenance group and includes aircraft performance metrics, issues, actions, and schedules to inform program leadership on fleet status and to help prioritize and make decisions concerning the issues.
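To illustrate the kind of gap-flagging these reviews and dashboards perform, the short Python sketch below compares reported availability against a goal for each fleet and lists the shortfall. The fleet names are aircraft from this review, but the goal and actual percentages are placeholders (the actual figures were deemed sensitive and are omitted from this report), and the function is our illustration rather than the WSER or dashboard logic itself.

```python
# Minimal sketch of flagging fleets that miss their availability goals, in the
# spirit of the quarterly and monthly reviews described above. Goal and actual
# percentages below are placeholders; the real values were omitted as sensitive.

def flag_availability_gaps(fleet_metrics):
    """Return (fleet, shortfall in percentage points) for fleets below their goal."""
    gaps = []
    for fleet, (goal_pct, actual_pct) in fleet_metrics.items():
        if actual_pct < goal_pct:
            gaps.append((fleet, round(goal_pct - actual_pct, 1)))
    return sorted(gaps, key=lambda item: item[1], reverse=True)

example_metrics = {
    # fleet: (availability goal %, reported availability %) -- placeholder values
    "B-52": (65.0, 60.0),
    "C-17": (85.0, 86.0),
    "F-22": (70.0, 62.5),
}

print(flag_availability_gaps(example_metrics))
# [('F-22', 7.5), ('B-52', 5.0)]
```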
The Navy monitors aircraft availability through its aircraft status dashboard for each aircraft, which provides specific information, such as goals, actual availability, and gaps between the two. More specifically, the Navy tracks the status of each of its aircraft through the dashboard, including those aircraft that are available (i.e., Ready-Basic-Aircraft ), are in depot maintenance, or are not mission capable due to maintenance or supply, among other metrics. The dashboard is updated monthly, and there are weekly meetings with key stakeholders, including Naval Air Systems Command officials, industry partners, and depot officials, to monitor the performance of each aircraft and make adjustments to improve aircraft availability. Additionally, all program offices have processes in place to manage the fleet within their portfolios, including semiannual or annual program reviews such as Program Management Reviews and Executive Steering Reviews. These reviews focus on readiness, cost drivers, and initiatives to address program risk and ways to resolve issues affecting each aircraft. Further, the Marine Corps Commandant for Aviation leads biannual Executive Steering Summits to assess readiness issues affecting Marine Corps aircraft. The Air Force and Navy Have Implemented Improvement Plans to Address Aircraft Availability The Air Force and Navy have implemented improvement plans to address aircraft availability for each of the selected fixed-wing aircraft. Air Force program offices for the fixed-wing aircraft in our review have plans for improving availability. Since 2005, the Air Force Materiel Command has had an annual process to improve aircraft availability, which is known as the Aircraft Availability Improvement Program. The process enables the program offices to assess and limit risk, incorporate available support funding, and specifically address where there are effects on availability, such as aircraft in depot. This process also incorporates projecting historical and goal rates in order to leverage scheduled and modernization maintenance. Program offices create plans, known as aircraft availability improvement plans, based on these projections to forecast improvements that can facilitate increased availability and reduction of costs, among other things. The Air Force provides guidance in the form of a template to ensure consistency amongst the plans, which typically must include improvement initiatives with milestone goals. This information includes projected aircraft availability rates for mission capable, units possessed not reported, not mission capable for supply, not mission capable for maintenance, and depot possession. Officials noted that the program office creates an improvement plan each year, regardless of whether it is short of its availability goal, since the plan serves as a forecasting measure. The program is designed to ensure the program offices have plans in place to meet target goals, and the information and milestones laid out in the plans feed into the WSER briefings to senior management. For example: The B-52 plan for fiscal year 2017 discusses the process and milestones for replacing actuator seals for the fleet, the costs of the repair, and the expected benefit to B-52 availability—1.05 percent improvement to the not mission capable supply metric. The C-17 plan for fiscal year 2017 identifies the current and future modifications, timelines for beginning and completion, and the effect on availability. 
For example, the future replacement of a legacy computer system with a modernized system and display is set to begin in fiscal year 2019 with an estimated completion date of 2026. This replacement is planned to be done concurrently with other maintenance, and to prevent future declines in the C-17’s availability. The F-22 plan for fiscal year 2017 identifies several projects taking place between 2016 and 2021 that are expected to improve availability by almost 2 percent. Further, officials said they are currently working with the Assistant Secretary of the Air Force (Acquisition) to develop an Air Force manual that would make developing an Aircraft Availability Improvement Plan a requirement. This manual will become a supplement to Air Force Instruction 63-101/20-101, according to the officials. Navy program offices for all seven fixed-wing aircraft in our review also have plans for improving availability. According to Navy officials, they started preparing “summary playbooks,” which is the Navy’s term for improvement plans, in late 2015 and started implementing these plans in early 2016 to increase aircraft availability. Officials told us that there was a limitation in funding because of sequestration prior to fiscal year 2017, which hampered their ability to fully implement the playbooks. At a broad level, the Navy’s playbooks include efforts such as maintenance planning, supply support, aircraft material condition and management, and technical data, among other things. These efforts are linked to specific initiatives such as working with the manufacturer and contractors to provide maintenance support, identifying obsolete parts, conducting aircraft fatigue analysis, and updating technical publications, among other things, which have been identified by the program office as ways to improve aircraft availability. Additionally, these playbooks include the extent to which these initiatives are funded, underfunded, or partially funded and the appropriation account that would fund each initiative. The playbooks include the status of each initiative, and some of the playbooks also provide an approximate time frame for implementing each initiative. For example: The playbook for the C-2A has a fatigue analysis initiative focused on analyzing the landing gear to update its design, provide a depot repair manual, and increase its service life, among other things. This initiative is considered funded, is expected to improve aircraft availability, and has an estimated time frame for implementation between fiscal years 2017 and 2021. The playbook for the E-2D contains a maintenance initiative focused on improving the maintenance planning process of the C-2A, E-2C, and E-2D aircraft by completing elements of the product support package, such as training, publications, support equipment, and tools, among others. This initiative is considered partially funded, is expected to improve aircraft availability by decreasing the not mission- capable rates related to maintenance and supply and decreasing maintenance down time, and has an estimated time frame for implementation between fiscal years 2017 through 2019. The playbook for the F/A-18A-D includes a product improvement initiative to conduct a case study to assess the condition of the wiring of the aircraft in the fleet. This initiative is considered funded and is expected to help to sustain aircraft availability. However, there is no time frame for implementing this initiative. 
The playbook for the F/A-18E-F contains a service life modification initiative focused on extending the service life of the aircraft through modifications. According to officials, this initiative is considered partially funded, is expected to help sustain aircraft availability, and is expected to help the fleet realize an 80 percent cost avoidance because the Navy will not have to pay the cost to replace these aircraft. Also, this initiative has an estimated time frame for implementation between fiscal years 2018 and 2040.

Conclusions

The Departments of the Air Force and Navy spend tens of billions of dollars each year to sustain their fixed-wing aircraft, which need expensive logistics support, including maintenance and repair, to meet goals for availability. The departments have spent at least $20 billion annually since 2011 to sustain the 12 aircraft that we examined. The Air Force and Navy share a variety of sustainment challenges, including the age of their aircraft as well as maintenance and supply support issues. These challenges have led to half (6 of 12) of the aircraft in our review experiencing decreasing availability and to the aircraft in general not being able to meet aircraft availability goals. For example, 9 of the 12 aircraft did not meet availability goals in fiscal year 2016. These trends are occurring even though the Air Force and Navy regularly review sustainment metrics for the aircraft and are implementing plans for improving aircraft availability. However, DOD's policy and the Navy's guidance are not clear on whether the services should have a current sustainment strategy for legacy weapon systems, including fixed-wing aircraft, and on whether the strategies are required to be updated every 5 years. Without clarity about whether the DOD instruction and the Navy guidance apply to legacy systems, program officials will not know whether they are required to have a sustainment strategy or are required to update the plan for their respective fixed-wing aircraft. Furthermore, the program offices, the services, and DOD may not have full visibility of the requirements necessary to document program objectives, related risks, and the effectiveness of the program, ultimately jeopardizing the sustainability and affordability of each of the programs.

Recommendations for Executive Action

We are making the following two recommendations to DOD:

The Secretary of Defense should ensure that the Under Secretary of Defense for Acquisition and Sustainment updates or issues new policy clarifying the requirements for documenting sustainment strategies for legacy weapon systems, including fixed-wing aircraft. (Recommendation 1)

The Secretary of the Navy should update or issue new guidance clarifying the requirements for documenting sustainment strategies for legacy weapon systems, including fixed-wing aircraft. (Recommendation 2)

Agency Comments and Our Evaluation

We provided a draft of the sensitive report to DOD for review and comment. In written comments that are reproduced in appendix XVI, DOD concurred with our recommendations and noted planned actions to address each recommendation. The Air Force and Navy also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Navy and Air Force; the Commandant of the Marine Corps; the Under Secretary of Defense for Acquisition and Sustainment; and the Director, Defense Logistics Agency.
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at merrittz@gao.gov or (202) 512-5257. GAO staff who made key contributions to this report are listed in appendix XVII. Appendix I: GAO’s Recent Prior Work on Sustainment Issues within the Department of Defense Over the past several years, we have conducted work on a number of issues that affect the ability of the Department of Defense (DOD) to sustain its weapon systems. In September 2017, we found that several factors were important to the success of Product Support Managers. These factors included teamwork and collaboration, early implementation of the Product Support Manager position, and organizational support and emphasis on sustainment. We also found that in response to our 2014 recommendations regarding the implementation of the Product Support Manager position, DOD had developed a comprehensive career path and associated guidance to develop, train, and support future Product Support Managers. Additionally, DOD revised guidance to define roles, responsibilities, and reporting relationships between support staff and Product Support Managers. However, DOD was still in the process of implementing our other three recommendations, such as issuing clear, comprehensive, centralized guidance regarding the roles and responsibilities of PSMs and collecting and evaluating information on the effects, if any, that Product Support Managers are having on life-cycle sustainment decisions for their assigned weapon systems. In September 2017, we also found that DOD does not have complete information to identify and manage single-source-of-supply risks. Specifically, some parts are provided by a single source of supply (e.g., one manufacturing facility), and if that single source were no longer able to provide the part, DOD could face challenges in maintaining weapon systems. DOD concurred with our six recommendations focused on improving the completeness of information for single-source-of-supply risks, including issuing department-wide policy that clearly defines requirements of Diminishing Manufacturing Sources and Material Shortages management, and details responsibilities and procedures to be followed to implement the policy. DOD is in the process of taking action to implement these recommendations. In June 2016, we found that the Defense Logistics Agency and the military services have not adopted metrics to measure the accuracy of planning factors, such as the accuracy of part lists, or the costs created by backorders. As a result, depot maintenance may not be efficient or cost-effective, resulting in unnecessary delays in the repair of weapon systems. DOD concurred with our six recommendations to develop metrics to monitor the accuracy of demand planning factors and disruption costs created by the lack of parts at depot maintenance sites and is in the process of taking action to implement these recommendations. For a listing of relevant past GAO work, see the Related GAO Products list at the end of this report. Sustainment: Depot maintenance conducted organically at the designated air logistics complex and contractually for some depot- level repairs at contractor facilities. The B-52 is a long-range, heavy bomber that can perform a variety of missions, including strategic attack, close air support, air interdiction, maritime operations, and offensive counter-air missions. 
It can carry nuclear or precision-guided conventional ordnance with worldwide precision navigation capability. However, the B-52s are some of the oldest aircraft in the Air Force's fleet and will continue to operate until at least 2040 (see fig. 8). Operating and support (O&S) costs for the B-52s have remained relatively steady, generally fluctuating around $1.2 billion–$1.3 billion per year. As a predominantly military-maintained system, most of that O&S cost is related to maintenance and manpower, with depot maintenance and depot-level reparables—direct labor and materials for item repairs, transportation, and storage, among other things—accounting for most of the maintenance cost.

Sustainment Challenges and Mitigation Actions

The B-52 combat network communications technology program (2014) is focused on upgrading outdated communications technology. The communications modification requires 7,000 hours of work and is estimated to be complete by 2020. The fleet has active sustainment plans for other components of the aircraft, such as the B-52 Anti-skid Replacement Life Cycle Sustainment Plan (2015), which is estimated to cost over $40 million and to be completed by 2019. The B-52 faces sustainment challenges related to its age and, according to officials, replacement parts are difficult to obtain. Several modernization efforts are under way (communications, engines, etc.), and the program office is working with vendors and its own service engineers to identify problem areas and plan ahead so that replacement parts will be available. Depot maintenance on the B-52 is managed by the program office and conducted at the Oklahoma City Air Logistics Complex depot.

This report is a public version of a sensitive report that we issued on April 25, 2018. DOD deemed some of the information, such as aircraft availability, not mission capable rates, number of aircraft in depots, and budgeted and executed flight hours, to be sensitive (i.e., For Official Use Only). This public report omits the information that DOD deemed to be sensitive.

Operating and Support Costs

Program Office Comments

In commenting on a draft of this assessment, the program office provided technical comments, which were incorporated where appropriate.

The C-17 is a long-range, heavy logistic transport aircraft powered by four F-117 turbofan engines with air-refueling capability that was first manufactured in 1987 (see fig. 10). It is capable of rapid strategic delivery of troops and all types of cargo to main operating bases or to bases in any forward deployment area. The C-17 can perform tactical airlift and airdrop missions and can transport ambulatory patients during aeromedical evacuations, when required. The C-17 can carry virtually all air-transportable equipment. Total operating and support (O&S) costs for the C-17 have decreased from about $5.3 billion in fiscal year 2011 to about $4.0 billion in fiscal year 2016. Specifically, unit operations costs decreased, while maintenance costs have generally increased during this period due to contractor logistics support, because the C-17 is a predominantly contractor-managed aircraft.

Average number of flying hours: 13,141 hours per aircraft

Depot maintenance activity and squadron locations:

The C-17 Enterprise Life Cycle Management Plan and Life Cycle Sustainment Plan (2014) documents current and future acquisition, sustainment, and integration efforts of the aircraft. It also addresses contractual arrangements and partnership support agreements between the Air Force, Boeing, and other service providers for aircraft sustainment.
Boeing provides continued sole-source life-cycle support for the C-17 under the terms of the Globemaster Integrated Sustainment Program (2013). Under this program, Boeing is responsible for sustainment, to include material management and depot maintenance support. The C-17 participates in a virtual fleet arrangement, a global network of 43 additional C-17 aircraft, which allows participants access to aircraft parts from any fleet participant worldwide.

Sustainment Challenges and Mitigation Actions

The C-17 is being modified to meet its requirements as well as to address maintenance and supply issues. The Air Force's actions to mitigate these challenges include processes to increase the service life of the aircraft, allowing managers to quickly hire skilled workers for critical positions, and locating other vendor sources for parts. Depot maintenance occurs at an Air Force air logistics complex and at Boeing's facility in San Antonio; landing gear overhaul occurs at Ogden Air Logistics Complex, and engine overhaul occurs at Oklahoma City Air Logistics Complex in partnership with Pratt & Whitney, the original equipment manufacturer of the F-117 turbofan engine.

This report is a public version of a sensitive report that we issued on April 25, 2018. DOD deemed some of the information, such as aircraft availability, not mission capable rates, number of aircraft in depots, and budgeted and executed flight hours, to be sensitive (i.e., For Official Use Only). This public report omits the information that DOD deemed to be sensitive.

Operating and Support Costs

Program Office Comments

In commenting on a draft of this assessment, the program office provided technical comments, which were incorporated where appropriate.

Sustainment: Depot maintenance conducted by Northrop Grumman, and field maintenance conducted organically by the National Guard.

The E-8C Joint Surveillance Target Attack Radar System (E-8C) was first manufactured in 1967 (see fig. 12). Its primary mission is to provide theater ground and air commanders with ground surveillance to support attack operations and targeting that contributes to the delay, disruption, and destruction of enemy forces. Total operating and support (O&S) costs for the E-8C have generally increased, from about $686 million in fiscal year 2011 to about $734 million in fiscal year 2016. Specifically, maintenance cost has increased partly because of increases in contractor logistics support, since the E-8C is maintained by Northrop Grumman.

Sustainment Challenges and Mitigation Actions

E-8C aircraft were formerly used as commercial airliners and purchased by the Air Force. Therefore, the exact prior usage of the aircraft was not known with any degree of specificity. The program office has utilized new analysis conducted by Boeing to develop an improved method of determining and tracking service life for the E-8C aircraft. The new method uses a quantitative analysis capability to identify safety-of-flight structural concerns, allowing for planning and execution of risk mitigation. According to Air Force officials, the E-8C has significant maintenance and supply issues. The Air Force's actions to mitigate these challenges include updating the Maintenance Plan and the Corrosion Plan for the E-8C (formerly a commercial airframe) to bring them in line with military standards.

This report is a public version of a sensitive report that we issued on April 25, 2018.
DOD deemed some of the information, such as aircraft availability, not mission capable rates, number of aircraft in depots, and budgeted and executed flight hours, to be sensitive (i.e., For Official Use Only). This public report omits the information that DOD deemed to be sensitive.

Operating and Support (O&S) Costs

Program Office Comments

In commenting on a draft of this assessment, the program office provided technical comments, which were incorporated where appropriate.

Sustainment: Depot maintenance conducted organically at the designated air logistics complex, and field maintenance conducted organically and by contractors.

The F-16 Fighting Falcon is a compact, single-engine, multirole fighter aircraft first manufactured in 1978 (see figure 14). It is highly maneuverable and participates in air-to-air combat and air-to-surface attack. There are four versions of the F-16: A, a single-seat model; B, a two-seat model with tandem cockpits; and C and D, single- and two-seat models, respectively, incorporating newer capabilities. Total operating and support (O&S) costs for the F-16 decreased from about $5 billion in fiscal year 2011 to about $4 billion in fiscal year 2016 because of a 6 percent reduction in inventory. Specifically, maintenance cost generally decreased during this same period as a result of a decrease in the cost of depot maintenance.

Operating and Support Costs

Program Office Comments

Manufacturer: Lockheed Martin and Boeing

Sustainment: Performance-based logistics contract with depot maintenance subcontracted to Ogden Air Logistics Complex, Utah, and field maintenance performed organically.

The F-22 Raptor is designed to project air dominance, rapidly and at great distances, and defeat threats. Operating and support (O&S) costs for the F-22 have decreased by about $248 million overall since fiscal year 2011. Maintenance issues continue to be an area of concern for the aircraft, and these costs increased approximately $255 million from fiscal years 2011 to 2016, due to increases in contractor logistics costs.

Depot maintenance activity and squadron locations:

Sustainment Challenges and Mitigation Actions

The Air Force's actions include maintaining a comprehensive diminishing manufacturing sources program and proactively supporting the continued sustainment of component parts of the aircraft through various replacement programs, such as the F-22 Reliability and Maintainability Maturation. This initiative is an ongoing effort to drive continuous improvement in availability. The F-22 faces issues with its low-observable coating and supply funding. Actions to mitigate these challenges include contracting a repair facility to conduct coating reversion repair and securing additional spares funding.

This report is a public version of a sensitive report that we issued on April 25, 2018. DOD deemed some of the information, such as aircraft availability, not mission capable rates, number of aircraft in depots, and budgeted and executed flight hours, to be sensitive (i.e., For Official Use Only). This public report omits the information that DOD deemed to be sensitive.

Operating and Support Costs

Program Office Comments

The program office provided technical comments, which were incorporated as appropriate. The program office noted the following: The Air Force and supporting industry are aggressively addressing sustainment challenges by investing in improvements to durability and maintainability, including the low-observable coating.
Additionally, for fiscal year 2017, 14.5 percent of not mission capable for maintenance aircraft have been available for pilots to fly. Also, supply chains are built on a network of partnerships that optimally thrive on consistent and predictable workflows. When unplanned changes occur in budgets, the forecasted flying hours, or major target objectives like AA, it creates major effects on the supply networks and there is rarely a quick fix. Another challenge affecting F-22 sustainment cost-effectiveness and responsiveness is the exit of many second- and third-tier suppliers driven by a lower business demand due to a significantly reduced fleet size (186 from the original 750 planned). The program office expects sustainment costs to stabilize as investments in fleet-wide repair processes and improved materials come to fruition. The AV-8B Harrier (AV-8B) is a Vertical/Short Take-off and Landing attack aircraft first manufactured in 1984 (see fig. 18). The AV-8B has the capability of conducting close air support using conventional weapons for intermediate range intercept and attack missions. The AV-8B is capable of deploying and operating on aircraft carriers and other suitable seagoing platforms, advanced bases, expeditionary airfields, and remote tactical landing sites. Total operating and support (O&S) costs for the AV-8B have decreased from about $815 million in fiscal year 2011 to about $646 million in fiscal year 2016. Specifically, unit-level manpower and operations as well as maintenance costs have decreased partly because the inventory is decreasing as AV-8B squadrons transition to the F-35 Joint Strike Fighter. Average number of flying hours: 4,711 hours per aircraft Operating and support cost: $646 million Depot maintenance activity and squadron locations: AV-8B Program Strategic Sustainment and Warfighting Relevance Plan (2013) addresses strategic sustainment and warfighting requirements to ensure relevance, reliability, safety, and sustainability through five pillars: recruit and retain high-quality people, develop a comprehensive readiness and sustainment plan, meet combatant commander requirements, retain and sustain government and industry agencies to support engineering and logistics requirements, and integrate capabilities to remain tactically relevant and operationally effective. AV-8B is maintained organically at Navy Fleet Readiness Centers under planned maintenance intervals occurring every 1,500 flight hours; supply support is provided organically by Naval Supply Systems Command and Defense Logistics Agency; contractor support services are provided by Boeing. Sustainment Challenges and Mitigation Actions The AV-8B is operating beyond its planned service life with maintenance and supply issues. The Marine Corps’ actions to mitigate these challenges include moving aircraft to deploying squadrons, upgrading aircraft components, and locating other vendor sources for parts. This report is a public version of a sensitive report that we issued on April 25, 2018. DOD deemed some of the information, such as aircraft availability, not mission capable rates, number of aircraft in depots, and budgeted and executed flight hours to be sensitive (i.e., For Official Use Only). This public report omits the information that DOD deemed to be sensitive. Operating and Support Costs Program Office Comments In commenting on a draft of this assessment, the program office provided technical comments, which were incorporated where appropriate. 
Manufacturer: Grumman Corporation (acquired by Northrop Grumman)

The C-2A Greyhound Logistics Aircraft (C-2A) is a high-wing, twin-engine monoplane cargo aircraft first manufactured in 1965 (see fig. 20). It is designed to land on aircraft carriers, with a primary mission of providing critical logistics support to Carrier Strike Groups by transporting high-priority cargo, mail, and passengers between carriers and shore bases. The original C-2A aircraft were overhauled to extend their operational life in 1973 and again from 2004 through 2011. Total operating and support (O&S) costs for the C-2A have generally decreased from about $233 million in fiscal year 2011 to about $207 million in fiscal year 2016. Specifically, costs for unit-level manpower, unit operations, and continuing system improvements have decreased, while maintenance costs have increased.

Fiscal Year 2016 Data
Average age: 29 years
Average number of flying hours: 10,117 hours per aircraft
Operating and support cost: $207 million
Depot maintenance activity and squadron locations:

The C-2A's sustainment efforts address the landing gear and avionics system, among other components. The Navy will include an appendix for the C-2A when it updates the sustainment strategy for the E-2D for its 5-year update. The C-2A completed a service life extension program from 2004 through 2011 to increase flight hours from 10,000 to 15,000 and landings from 16,020 to 36,000, among other things. Aircraft are maintained organically by field maintainers and at Navy Fleet Readiness Centers under a planned maintenance interval cycle, with three planned maintenance interval events occurring consecutively every 24 months, and supply support is provided organically by the Naval Supply Systems Command and Defense Logistics Agency.

Sustainment Challenges and Mitigation Actions

The C-2A is operating beyond its planned service life with maintenance and supply issues. The Navy's actions to mitigate these challenges include moving aircraft to deploying squadrons, training maintainers to transition to vacated positions, and locating other vendor sources for parts.

This report is a public version of a sensitive report that we issued on April 25, 2018. DOD deemed some of the information, such as aircraft availability, not mission capable rates, number of aircraft in depots, and budgeted and executed flight hours to be sensitive (i.e., For Official Use Only). This public report omits the information that DOD deemed to be sensitive.

Operating and Support Costs

Program Office Comments

In commenting on a draft of this assessment, the program office provided technical comments, which were incorporated where appropriate.

The E-2 Hawkeye (E-2C) is the Navy's all-weather, carrier-based tactical battle management, surface-surveillance coordination, airborne early warning, and command and control aircraft, with a planned sunset in 2026 when the last E-2D is delivered (see fig. 22). The E-2 is a twin-engine, five-crewmember, high-wing turboprop aircraft with a 24-foot diameter radar rotodome attached to the upper fuselage. Total operating and support (O&S) costs for the E-2 have decreased from about $536 million in fiscal year 2011 to about $345 million in fiscal year 2016. Specifically, unit manpower and maintenance costs have decreased, partly because E-2C inventory is decreasing as E-2C squadrons transition to the E-2D fleet.
Fiscal Year 2016 Data
Average age: 16 years
Average number of flying hours: 5,839 hours per aircraft
Operating and support cost: $345 million
Depot maintenance activity and squadron locations:

The E-2C's sustainment plan addresses the comprehensive sustainment logistics, engineering programs, and financial resources necessary to ensure continued platform sustainment and attainment of readiness and safety operations. The Navy will include an appendix for the E-2C when it updates the sustainment strategy for the E-2D for its 5-year update. The E-2C is maintained organically by field maintainers and at Navy Fleet Readiness Centers under a planned maintenance interval cycle: the initial planned maintenance interval is performed by field maintainers at 42 months, and the second cycle is performed at a Fleet Readiness Center 46 months after the initial planned maintenance interval. Supply support is provided organically by the Naval Supply Systems Command and Defense Logistics Agency; contractor support services are provided by General Dynamics and Wyle Labs.

Sustainment Challenges and Mitigation Actions

The E-2C is operating beyond its planned service life with maintenance and supply issues. The Navy's actions to mitigate these challenges include transitioning E-2C squadrons to the E-2D fleet, conducting studies to identify maintenance tasks to mitigate potential failures, and waiting for parts to be available.

This report is a public version of a sensitive report that we issued on April 25, 2018. DOD deemed some of the information, such as aircraft availability, not mission capable rates, number of aircraft in depots, and budgeted and executed flight hours to be sensitive (i.e., For Official Use Only). This public report omits the information that DOD deemed to be sensitive.

Operating and Support Costs

Program Office Comments

In commenting on a draft of this assessment, the program office provided technical comments, which were incorporated where appropriate.

The E-2 Advanced Hawkeye (E-2D) is the newest variant of the E-2 aircraft platform and is expected to reach full operational capability by 2027 (see fig. 24). Using the same configuration as the E-2C, the E-2D aircraft is used for surface-surveillance coordination, airborne early warning, and command and control. Its mission is to provide advance warning of approaching enemy surface units, cruise missiles, and aircraft, among other things. Total operating and support (O&S) costs for the E-2D have increased consistently since fiscal year 2011, to about $125 million in fiscal year 2016. This increase is driven by the addition of aircraft to the inventory as the Navy continues to produce E-2D aircraft through 2026.

Sustainment Challenges and Mitigation Actions

As a new aircraft, the E-2D is experiencing maintenance and supply issues. The Navy's actions to mitigate these challenges include troubleshooting component failures and cannibalizing parts—moving parts from one aircraft to another.

This report is a public version of a sensitive report that we issued on April 25, 2018. DOD deemed some of the information, such as aircraft availability, not mission capable rates, number of aircraft in depots, and budgeted and executed flight hours to be sensitive (i.e., For Official Use Only). This public report omits the information that DOD deemed to be sensitive.

Operating and Support Costs

Program Office Comments

In commenting on a draft of this assessment, the program office provided technical comments, which were incorporated where appropriate.
The EA-18G Growler is the fourth major variant of the F/A-18 family of aircraft, first manufactured in 2007 to replace the EA-6B Prowler (see fig. 26). The EA-18G is the first newly designed electronic warfare aircraft produced in more than 35 years and combines the proven F/A-18 Super Hornet platform with a sophisticated electronic warfare suite. Total O&S costs for the EA-18G have consistently increased, from about $334 million in fiscal year 2011 to about $868 million in fiscal year 2016. Specifically, unit manpower and maintenance costs have increased partly because the inventory is increasing, as EA-18Gs are still in production.

Fiscal Year 2016 Data
Average age: 5 years
Average number of flying hours: 1,489 hours per aircraft
Inventory: 115 aircraft
Depot maintenance activity and squadron locations:

The EA-18G's sustainment plan covers the design, development, and fielding of the aircraft. Some of the key support program elements include developing support equipment and technical data, testing requirements for avionics, and facilities requirements, among others. The Navy is updating this plan and expects to finalize it in 2018. The aircraft are maintained organically at Navy Fleet Readiness Centers under planned maintenance intervals, which typically occur every 72 months. Also, the Navy partners with Boeing to provide wholesale supply and depot repair support for major components, such as the engine.

This report is a public version of a sensitive report that we issued on April 25, 2018. DOD deemed some of the information, such as aircraft availability, not mission capable rates, number of aircraft in depots, and budgeted and executed flight hours to be sensitive (i.e., For Official Use Only). This public report omits the information that DOD deemed to be sensitive.

Sustainment Challenges and Mitigation Actions

Operating and Support Costs

Program Office Comments

In commenting on a draft of this assessment, the program office provided technical comments, which were incorporated where appropriate.

The F/A-18A-D Hornet Strike Fighter is a twin-engine, mid-wing, multimission tactical aircraft initially fielded in the 1980s (see fig. 28). In its fighter mode, it is used primarily as a fighter escort and for air defense; in its attack mode, it is used for force projection, interdiction, and air support. Total operating and support (O&S) costs for the F/A-18A-D have decreased consistently from about $3.1 billion in fiscal year 2011 to about $2.4 billion in fiscal year 2016. Specifically, unit manpower, operations, and maintenance costs have decreased, partly because the F/A-18A-Ds are being permanently transitioned out of service to be replaced by the F-35 Joint Strike Fighter.

Operating and support cost: $2.4 billion

The aircraft's sustainment plan addresses, among other things, the financial resources necessary to ensure continued readiness and supportability for the remainder of the aircraft's service life. The Navy is currently updating this plan and expects to finalize it in 2018.

Depot maintenance activity and squadron locations:

The aircraft are maintained organically at Navy Fleet Readiness Centers under planned maintenance intervals, which typically occur every 48 months for carrier-deploying aircraft and every 72 months for land-based aircraft. The Navy implemented the High-Flight-Hour program in 2006 to extend the service life from 8,000 to 10,000 flight hours by inspecting and repairing airframes and replacing major components and parts.
The High-Flight-Hour program, along with other factors, has led to maintenance carryover (i.e., into the next fiscal year) due to maintenance events taking longer than planned.

Sustainment Challenges and Mitigation Actions

In 1999, the Navy entered into a contract with Boeing for engineering support to leverage resources within the technology and industrial base to improve the efficiency of the maintenance process and address the maintenance backlog. The F/A-18A-D is operating beyond its planned service life with maintenance and supply issues. The Navy's actions to mitigate these challenges include extending the service life of the aircraft, allowing maintainers to work overtime to reduce backlog, and cannibalizing parts—moving parts from one aircraft to another.

This report is a public version of a sensitive report that we issued on April 25, 2018. DOD deemed some of the information, such as aircraft availability, not mission capable rates, number of aircraft in depots, and budgeted and executed flight hours to be sensitive (i.e., For Official Use Only). This public report omits the information that DOD deemed to be sensitive.

Operating and Support Costs

Program Office Comments

In commenting on a draft of this assessment, the program office provided technical comments, which were incorporated where appropriate.

The F/A-18E-F Super Hornet was first manufactured in 1998 (see fig. 30). The F/A-18E-F is highly capable across the full mission spectrum: air superiority, fighter escort, reconnaissance, aerial refueling, close air support, air defense suppression, and day/night precision strike. The F/A-18E-F provides aircrew the capability and performance necessary to face 21st century threats. Total operating and support (O&S) costs for the F/A-18E-F have increased from about $2.2 billion in fiscal year 2011 to about $3.1 billion in fiscal year 2016. Specifically, unit manpower, maintenance, and continuing system support costs have increased, partly because the inventory is increasing, as the F/A-18E-F is still in production.

Sustainment Challenges and Mitigation Actions

The Navy is conducting an assessment to determine the number of flight hours the aircraft can safely continue to fly, and it then plans to extend the service life of the program through inspections, repairs, and modifications, among other things. The Navy contracted with Boeing to potentially begin these efforts by fiscal year 2018. The F/A-18E-F is a high-operational-tempo aircraft supporting contingency operations, and it has maintenance and supply issues. The Navy's actions to mitigate these challenges include plans to extend the service life of the aircraft, training maintainers to transition to vacated positions, and cannibalizing parts—moving parts from one aircraft to another.

This report is a public version of a sensitive report that we issued on April 25, 2018. DOD deemed some of the information, such as aircraft availability, not mission capable rates, number of aircraft in depots, and budgeted and executed flight hours to be sensitive (i.e., For Official Use Only). This public report omits the information that DOD deemed to be sensitive.

Operating and Support Costs

Program Office Comments

Appendix XIV: Scope and Methodology

To examine the trends in aircraft availability and operating and support (O&S) costs for selected Air Force and Navy fixed-wing aircraft, including whether the aircraft met availability goals, we selected a nongeneralizable sample of 12 fixed-wing aircraft managed by the Departments of the Air Force and the Navy.
These included two Marine Corps aircraft that are managed by the Department of the Navy. This nongeneralizable sample was selected to ensure a mix of aircraft, including type of aircraft (fighter, bomber, cargo, etc.), age of the aircraft, and size of inventory, and whether the aircraft were sustained organically by the Department of Defense (DOD) or through contract arrangements, such as public-private partnerships or performance-based logistics, among other factors. For the Air Force, we selected five fixed-wing aircraft—the B-52 Stratofortress, C-17 Globemaster III, E-8C Joint Surveillance and Target Attack Radar System (JSTARS), F-16 Fighting Falcon, and F-22 Raptor. For the Navy, including the Marine Corps, we selected seven fixed-wing aircraft—the AV-8B Harrier, C-2A Greyhound Logistics Aircraft, E-2 Hawkeye Early Warning and Control Aircraft, E-2 Advanced Hawkeye Early Warning and Control Aircraft, EA-18G Growler, F/A-18 Hornet Strike Fighter A-D, and F/A-18 Super Hornet E-F. The Marine Corps uses the AV-8B Harrier and also uses a variant of the F/A-18A-D.

For the selected aircraft, we obtained and reviewed the aircraft availability, sustainment, and O&S data for accuracy and completeness, interviewed officials regarding their data-collection processes, and reviewed available related policies and procedures associated with the collection of the data. As a result, we found the information to be sufficiently reliable for the purposes of presenting sustainment metrics, such as aircraft availability and O&S costs. We also analyzed data on not mission capable status due to maintenance, supply, and both. With respect to O&S costs, we collected and analyzed data from fiscal years 2011 through 2016.

We conducted data-reliability assessments for the data provided by the Air Force and the Navy. To do this, we sent data-reliability questionnaires to both departments requesting information on the sources that generated the data. For the Air Force, we conducted data-reliability assessments on the Air Force Total Ownership Cost system and the Logistics Installation and Mission Support system. For the Navy, we conducted data-reliability assessments on the Aviation Management Supply and Readiness Reporting—Type Model Series Integrated Database, the Decision Knowledge Programming for Logistics Analysis and Technical Evaluation system, the Flying Hour Projection System / Cost Adjustment and Visibility Tracking System, and the Visibility and Management of Operating and Support Costs system. We reviewed responses from both departments on these sources as well as documentation—such as guidance, user manuals, and data dictionaries—provided to corroborate questionnaire responses, and interviewed knowledgeable officials to discuss the data. We concluded that the data provided by the Air Force and the Navy were sufficiently reliable for the purposes of reporting condition metrics such as aircraft availability; not mission capable status due to maintenance, supply, and both; depot inductions; budgeted and executed flight hours; and O&S costs for the selected fixed-wing aircraft in our review.

To identify the sustainment challenges and mitigation actions for the selected aircraft, we reviewed sustainment metrics data, performance briefings, and other relevant documentation to identify specific challenges for each of the 12 aircraft in our review. We also reviewed ongoing and planned actions to address those challenges.
Additionally, we interviewed program officials, depot officials, field maintainers, and squadron personnel to obtain their views on the challenges they face in sustaining the aircraft and the actions they take to mitigate those challenges. In some instances, we visited depots and squadrons to observe aircraft undergoing maintenance, discuss the respective maintenance processes, and discuss challenges and mitigation actions with officials. We then grouped the identified challenges into categories and represented them in a table to demonstrate which aircraft are experiencing specific challenges.

To assess the extent to which the Air Force and the Navy have sustainment strategies, regularly review sustainment metrics, and have plans to improve aircraft availability for the selected fixed-wing aircraft, we obtained and analyzed sustainment strategies, performance management frameworks (i.e., sustainment metrics collected and monitored as well as the levels of management review), and improvement plans for each of the selected 12 fixed-wing aircraft. We also identified and reviewed DOD, Air Force, and Navy guidance to analyze the departments' efforts in sustaining these aircraft and to determine whether these efforts were consistent with federal standards for internal control that deal with management defining objectives in specific terms. Specifically, we reviewed DOD Instruction 5000.02, Operation of the Defense Acquisition System, which provides management principles and mandatory policies for defense acquisition systems such as fixed-wing aircraft. These policies incorporate decision processes and assessments of readiness, which include the creation of and requirements for a Life-cycle Sustainment Plan. We also reviewed Air Force Instruction 63-101/20-101, Integrated Life Cycle Management, which implements various Air Force and DOD policy directives, including DOD Instruction 5000.02. It establishes the integrated life-cycle management guidelines and procedures for Air Force personnel who develop, review, approve, or manage the systems, subsystems, end-items, services, and activities procured by the Air Force. For the Navy, we reviewed Secretary of the Navy M-5000.2, Department of the Navy Acquisition and Capabilities Guidebook, which provides guidance for the operation of the defense acquisition system and the joint capabilities integration and development system. It also implements DOD Instruction 5000.02 for the Navy and Marine Corps, including guidance on the management and execution of a sustainment strategy, as well as related service guidance.

We also reviewed the Air Force's and the Navy's performance metric briefings and improvement plans to determine whether the departments regularly reviewed sustainment metrics and had plans aimed at improving aircraft availability. We interviewed DOD, Air Force, and Navy officials knowledgeable about sustainment of these selected fixed-wing aircraft to discuss DOD's and the departments' efforts in sustaining these aircraft, including historical information on each aircraft, applicability of policy and guidance for legacy aircraft, and overviews of performance management frameworks identified by the departments to monitor and improve aircraft availability.
To develop the fixed-wing aircraft sustainment summary documents (i.e., "Sustainment Quick Looks") in appendixes II–XIII, we obtained historical and current information, including background on aircraft capabilities, manufacturer, sustainment strategy, depot maintenance and squadron locations, and key dates in the life cycle of each aircraft (i.e., first manufactured, initial and full operational capability, last production, and planned sunset year). We collected and analyzed the following metrics: aircraft availability, not mission capable maintenance, not mission capable supply, and not mission capable aircraft from fiscal year 2011 through March 2017; the number of aircraft in depots for fiscal years 2011 through 2016; budgeted and executed flight hours for fiscal years 2011 through 2016; and overall O&S and maintenance costs for fiscal years 2011 through 2016.

We compared availability actuals to goals, aircraft in depots to availability trends, and budgeted and executed flight hours to availability trends. We analyzed O&S cost by reviewing its six elements and compared them to availability trends. We also analyzed the subcategories of the maintenance costs element (a simplified sketch of this type of comparison follows the list of Air Force entities below). Through interviews with knowledgeable officials and reviews of documentation, we identified sustainment challenges (i.e., aging, maintenance, and supply support) and mitigation actions to address these challenges for each selected fixed-wing aircraft. DOD deemed some of the information, such as aircraft availability, not mission capable status, number of aircraft in depots, and budgeted and executed flight hours, to be sensitive (i.e., For Official Use Only), which must be protected from public disclosure. This public report omits the information that DOD deemed to be sensitive.

Additionally, to support our work for each objective, we conducted site visits and interviewed officials to discuss data trends and identify specific sustainment challenges such as aging, maintenance, and supply support, among other challenges affecting aircraft availability, and mitigation actions to address these challenges. For the Air Force, we met with the following entities: Headquarters—Secretary of the Air Force, Logistics and Product Support and Deputy Assistant Secretary for Cost and Economics, Air Force Cost Analysis Agency; Materiel Commands—Air Force Materiel Command and Air Force Life Cycle Management Center; Program Offices—B-52 Program Office, C-17 Program Office, E-8C Program Office, F-16 Program Office, and F-22 Program Office; Depots—Tinker Air Force Base at Oklahoma City, Oklahoma (B-52); Robins Air Force Base at Warner Robins, Georgia (C-17); Northrop Grumman facility at Lake Charles, Louisiana (E-8C); Ogden Air Logistics Center / Hill Air Force Base at Ogden, Utah (F-16 and F-22); and Squadrons—437th Maintenance Group, Joint Base Charleston, South Carolina (C-17); 461st Air Control Wing, Robins Air Force Base, Georgia (E-8C); 20th Fighter Wing, Shaw Air Force Base, South Carolina (F-16); and 325th Maintenance Group, Tyndall Air Force Base, Florida (F-22).
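The sketch below is a minimal illustration of the type of comparison described above—availability actuals against a goal, and O&S cost-element changes between fiscal years 2011 and 2016. All numbers, the goal value, and the cost-element labels as written here are illustrative assumptions (actual availability figures were deemed sensitive and are omitted from this report); the sketch is not GAO's analysis code.

```python
# Minimal sketch, using hypothetical data, of comparing fleet availability actuals to a
# goal and summarizing O&S cost-element changes between fiscal years 2011 and 2016.
# All values below are invented for illustration; they are not DOD figures.

FISCAL_YEARS = [2011, 2012, 2013, 2014, 2015, 2016]

availability_goal = 72.0  # hypothetical availability goal, in percent
availability_actuals = {  # hypothetical availability actuals, in percent
    2011: 71.0, 2012: 69.5, 2013: 68.0, 2014: 66.5, 2015: 65.0, 2016: 63.0,
}

# Hypothetical O&S costs (then-year dollars in millions) by cost element.
os_costs = {
    2011: {"unit-level manpower": 900, "unit operations": 400, "maintenance": 1200,
           "sustaining support": 300, "continuing system improvements": 150, "indirect support": 250},
    2016: {"unit-level manpower": 850, "unit operations": 380, "maintenance": 1400,
           "sustaining support": 310, "continuing system improvements": 140, "indirect support": 240},
}

# Compare availability actuals to the goal for each fiscal year.
for fy in FISCAL_YEARS:
    actual = availability_actuals[fy]
    status = "met" if actual >= availability_goal else "fell short of"
    print(f"FY{fy}: availability {actual:.1f}% {status} the goal of {availability_goal:.1f}%")

# Identify which cost elements increased or decreased between the first and last fiscal years.
for element, fy2011_cost in os_costs[2011].items():
    change = os_costs[2016][element] - fy2011_cost
    direction = "increased" if change > 0 else "decreased"
    print(f"{element}: {direction} by ${abs(change)} million between FY2011 and FY2016")
```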
For the Navy, we met with the following entities: Headquarters—Deputy Assistant Secretary of the Navy—Expeditionary Programs and Logistics Management, Marine Corps Aviation Plans and Policy Branch, and Air Warfare Division; Materiel Commands—Commander, Fleet Readiness Center; Naval Air Systems Command; and Naval Supply Systems Command; Program Offices—Program Manager–Air (PMA)-231 (C-2A, E-2C, and E-2D); PMA-257 (AV-8B); and PMA-265 (F/A-18A-F and EA-18G); Depots—Fleet Readiness Center–East at Cherry Point, North Carolina; Fleet Readiness Center–Mid Atlantic at Naval Air Station Norfolk, Virginia, and Naval Air Station Oceana, Virginia; Squadrons—Marine Corps Air Station Cherry Point, North Carolina; Marine Corps Air Station Miramar, California; Naval Air Station Norfolk, Virginia; and Naval Air Station Oceana, Virginia; and Other—Naval Center for Cost Analysis.

The performance audit upon which this report is based was conducted from September 2016 to April 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We subsequently worked with DOD from April 2018 to September 2018 to prepare this unclassified version of the original sensitive report for public release. This public version was also prepared in accordance with these standards.

Appendix XV: Air Force and Navy Average Operating and Support Cost per Aircraft for Selected Fixed-Wing Aircraft

For fiscal year 2016, total operating and support (O&S) costs for the five Air Force fixed-wing aircraft selected in our review were about $12 billion, and the average O&S cost per aircraft across all five fleets was about $96 million, as shown in figure 32. Each of the C-17 and F-16 fleets accounted for about 33 percent of the total O&S cost, and the E-8C's average cost per aircraft accounted for about 48 percent of the total average cost per aircraft. For fiscal year 2016, total O&S costs for the seven Navy fixed-wing aircraft selected in our review were about $7.7 billion, and the average O&S cost per aircraft across all seven fleets was about $44 million, as shown in figure 33. The F/A-18E-F fleet accounted for about 40 percent of the total O&S cost, and the E-2C's average cost per aircraft accounted for about 19 percent of the total average cost per aircraft.

Appendix XVI: Comments from the Department of Defense

Appendix XVII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, John Bumgarner (Assistant Director), Clarine Allen, Ron Aribo, Vincent Buquicchio, Amie Lesser, Richard Powelson, Steven Putansu, Matt Spiers, and Natasha Wilder made key contributions to this report.

Related GAO Products

Defense Supply Chain: DOD Needs Complete Information on Single Sources of Supply to Proactively Manage the Risks. GAO-17-768. Washington, D.C.: September 27, 2017.

Weapon Systems Management: Product Support Managers' Perspectives on Factors Critical to Influencing Sustainment-Related Decisions. GAO-17-744R. Washington, D.C.: September 12, 2017.

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-17-333SP. Washington, D.C.: March 30, 2017.

Depot Maintenance: Executed Workload and Maintenance Operations at DOD Depots. GAO-17-82R.
Washington, D.C.: February 3, 2017. Defense Inventory: Further Analysis and Enhanced Metrics Could Improve Service Supply and Depot Operations. GAO-16-450. Washington, D.C.: June 9, 2016. Weapon Systems Management: DOD Has Taken Steps to Implement Product Support Managers but Needs to Evaluate Their Effects. GAO-14-326. Washington, D.C.: April 29, 2014.
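As a notional illustration of the per-aircraft figures discussed in appendix XV above, the sketch below computes an average O&S cost per aircraft for each fleet (fleet O&S cost divided by fleet inventory) and then averages those fleet-level figures. The fleet names are the five Air Force aircraft in our review, but the costs, inventories, and the choice of cross-fleet aggregation shown here are illustrative assumptions, not the report's underlying data or method.

```python
# Notional sketch of an average-O&S-cost-per-aircraft computation by fleet.
# The costs and inventories below are invented placeholders, not the report's data.

fleets = {
    # fleet: (hypothetical FY 2016 O&S cost in $ millions, hypothetical FY 2016 inventory)
    "B-52": (1_000, 75),
    "C-17": (4_000, 220),
    "E-8C": (500, 16),
    "F-16": (4_000, 950),
    "F-22": (2_500, 185),
}

per_aircraft = {}
for fleet, (cost_millions, inventory) in fleets.items():
    per_aircraft[fleet] = cost_millions / inventory  # average O&S cost per aircraft, $ millions
    print(f"{fleet}: about ${per_aircraft[fleet]:.1f} million per aircraft")

total_cost = sum(cost for cost, _ in fleets.values())
cross_fleet_average = sum(per_aircraft.values()) / len(per_aircraft)  # one possible aggregation
print(f"Total O&S cost: about ${total_cost / 1000:.1f} billion")
print(f"Average of the fleet-level per-aircraft averages: about ${cross_fleet_average:.1f} million")
```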
Why GAO Did This Study

DOD spends billions of dollars annually to sustain its weapon systems to support current and future operations. The Air Force and Navy are operating many of their fixed-wing aircraft well beyond their original designed service lives and therefore are confronted with sustainment challenges. House Report 114-537 included a provision for GAO to evaluate the sustainment of major weapon systems. This report, among other things, (1) examines the trends in availability and O&S costs for selected Air Force and Navy fixed-wing aircraft since fiscal year 2011, including whether they met availability goals, and (2) assesses the extent to which the departments documented sustainment strategies, reviewed sustainment metrics, and implemented plans to improve aircraft availability. GAO selected a nongeneralizable sample of 12 fixed-wing aircraft by considering a variety of factors, such as the type, age, and manufacturer of the aircraft, among other factors, and analyzed condition and availability data, O&S costs, and sustainment challenges from fiscal year 2011 through March 2017 for each aircraft in a "Sustainment Quick Look." GAO also analyzed policies, strategies, and plans, and interviewed Navy and Air Force officials in program offices, squadrons, and maintenance depots.

What GAO Found

Between fiscal years 2011 and 2016, the Air Force and Navy generally did not meet aircraft availability goals, and operating and support (O&S) cost trends for GAO's selected fixed-wing aircraft varied. Specifically, GAO found that availability declined for 6 of 12 aircraft—3 from each service—between fiscal years 2011 and 2016; availability fell short of goals for 9 of 12 aircraft in fiscal year 2016; and O&S costs increased for 5 of the aircraft, and maintenance costs—the largest share—increased for 8 of 12 aircraft. GAO found, and officials agreed, that these aircraft face similar challenges.

a Obsolescence means a part is unavailable due to its lack of usefulness or because it is no longer current or available for production.
b Diminishing manufacturing sources is a loss or impending loss of manufacturers or suppliers.

The Air Force and Navy have documented sustainment strategies for some aircraft, regularly reviewed sustainment metrics, and implemented improvement plans. The Air Force has documented sustainment strategies for all aircraft GAO reviewed; however, the Navy has not documented or updated its sustainment strategies for four aircraft. Specifically, the Navy does not have a documented sustainment strategy for the C-2A and has not updated the strategies for the E-2C, EA-18G, and F/A-18A-D since before 2012. The Navy is in the process of documenting its strategies, but Department of Defense (DOD) policy is unclear on whether a sustainment strategy is required and has to be updated every 5 years for weapon systems that are in the operations and support phase of their life cycle (i.e., legacy systems). Also, Navy guidance does not specify a requirement for legacy systems, although Air Force guidance does. Clarifying the requirements to document sustainment strategies for legacy systems, and documenting those strategies, would add additional visibility over the availability and O&S costs of DOD aircraft and any associated sustainment risks. This is a public version of a sensitive report issued in April 2018. Information on aircraft availability and other related information was deemed to be sensitive and has been omitted from this report.
What GAO Recommends

GAO is recommending that DOD and the Navy update or issue new policy and guidance clarifying the requirements for documenting sustainment strategies for legacy weapon systems. DOD concurred with the recommendations.
Background

DOD Definitions of Unwanted Sexual Behaviors

DOD has defined various types of unwanted sexual behaviors, including sexual assault, sexual harassment, and domestic violence.

Sexual assault: DOD defines sexual assault as intentional sexual contact, characterized by use of force, threats, intimidation, abuse of authority, or when the victim does not or cannot consent. The term includes a broad category of sexual offenses consisting of the following specific Uniform Code of Military Justice offenses: rape, sexual assault, aggravated sexual contact, abusive sexual contact, forcible sodomy (forced oral or anal sex), or attempts to commit these acts.

Sexual harassment: DOD defines sexual harassment as a form of sex discrimination that involves unwelcome sexual advances, requests for sexual favors, and other verbal or physical conduct of a sexual nature when (1) submission to such conduct is made either explicitly or implicitly a term or condition of a person's job, pay, or career; (2) submission to or rejection of such conduct by a person is used as a basis for career or employment decisions affecting that person; or (3) such conduct has the purpose or effect of unreasonably interfering with an individual's work performance or creates an intimidating, hostile, or offensive working environment. However, as noted earlier, a provision of the NDAA for FY 2017 changed the definition of sexual harassment for the military for purposes of investigations by commanding officers so that it is no longer defined solely as a form of sex discrimination, but is recognized as an adverse behavior on the spectrum of behavior that can contribute to an increase in the incidence of sexual assault. We discuss this changed definition of sexual harassment later in this report.

Domestic violence: DOD defines domestic violence as an offense under the United States Code, the Uniform Code of Military Justice, or state law involving the use, attempted use, or threatened use of force or violence against a person, or a violation of a lawful order issued for the protection of a person who is a current or former spouse, a person with whom the abuser shares a child in common or a current or former intimate partner with whom the abuser shares or has shared a common domicile. Sexual assault of spouses and intimate partners is a subset of domestic violence.

DOD Entities with Key Roles and Responsibilities in Addressing Unwanted Sexual Behaviors

Various offices and organizations within DOD play a role in addressing unwanted sexual behaviors in the military. The Under Secretary of Defense for Personnel and Readiness is responsible for developing the overall policy and guidance for the department's efforts to prevent and respond to instances of sexual assault, except for criminal investigative policy matters assigned to the DOD Inspector General and legal processes in the Uniform Code of Military Justice.

DOD's Sexual Assault Prevention and Response Program

The Under Secretary of Defense for Personnel and Readiness oversees the Sexual Assault Prevention and Response Office (SAPRO), which serves as the department's single point of authority, accountability, and oversight for its sexual assault prevention and response program.
The responsibilities of the Under Secretary of Defense for Personnel and Readiness and SAPRO with regard to sexual assault prevention and response include providing the military services with guidance and technical support and facilitating the identification and resolution of issues; developing programs, policies, and training standards for the prevention of, reporting of, and response to sexual assault; developing strategic program guidance and joint planning objectives; overseeing the department's collection and maintenance of data on reported alleged sexual assaults involving servicemembers; establishing mechanisms to measure the effectiveness of the department's sexual assault prevention and response program; and preparing the department's mandated annual reports to Congress on sexual assaults involving servicemembers.

The Secretaries of the military departments are responsible for establishing policies for preventing and responding to sexual assault within their respective military service, and for ensuring compliance with DOD's policy. Further, they are responsible for establishing policies that ensure commander accountability for program implementation and execution. Each military service has established an office that is responsible for overseeing and managing the military service's sexual assault prevention and response program. Each military service also maintains a primary policy document on its sexual assault prevention and response program. Much like DOD's directive and instruction on sexual assault prevention and response, the military service policies outline responsibilities of relevant stakeholders, including commanders, sexual assault response coordinators, and victim advocates, and training requirements for all personnel.

DOD's Military Equal Opportunity Program

The Under Secretary of Defense for Personnel and Readiness has responsibility for developing the overall policy for DOD's military equal opportunity program and monitoring compliance with the department's policy. According to the policy, all servicemembers are afforded equal opportunity in an environment free from harassment, including sexual harassment, and unlawful discrimination on the basis of race, color, national origin, religion, sex (including gender), and sexual orientation. The chain of command is used as the primary and preferred channel to (1) identify and correct unlawful discrimination practices, (2) process and resolve complaints of unlawful discrimination or harassment, including sexual harassment, and (3) ensure that military equal opportunity matters are taken seriously and acted on as necessary.

The Office of Diversity Management and Equal Opportunity (ODMEO) oversees the department's efforts to promote equal opportunity, diversity, and inclusion management, and to help prevent unlawful discrimination and harassment throughout DOD. The Defense Equal Opportunity Management Institute develops training and studies on equal opportunity. Behaviors under the purview of the military equal opportunity program include unlawful discrimination on the basis of color, national origin, race, religion, or sex. The Secretaries of the military departments are responsible for developing policies to prevent unlawful discrimination and harassment (including sexual harassment), ensuring compliance with DOD's policy, and establishing both formal and informal means of resolving complaints.
The chain of command is the primary and preferred channel for identifying and correcting discriminatory practices and resolving servicemembers' complaints of sexual harassment. The military services encourage servicemembers to resolve any complaints of sexual harassment they may have at the lowest possible level first. For servicemembers who wish to report a complaint of sexual harassment, DOD provides two complaint options—formal and informal. A formal complaint is an allegation of sexual harassment that a complainant submits in writing to the authority designated for the receipt of such complaints in military service implementing guidance. Formal complaints require specific actions to be taken, are subject to timelines, and require documentation of the actions taken, in accordance with federal law. In contrast, an informal complaint is an allegation of sexual harassment, made either orally or in writing, that is not submitted as a formal complaint. Informal complaints may be resolved directly by the complainant, such as by confronting the individual or by involving another individual or the chain of command. Servicemembers who elect to resolve their complaints informally may submit a formal complaint if they are dissatisfied with the outcome of the informal process. In 2014, DOD directed the military services to develop implementing instructions and mechanisms for reporting instances of sexual harassment anonymously.

DOD's Family Advocacy Program

The Deputy Assistant Secretary of Defense for Military Community and Family Policy, under the Under Secretary of Defense for Personnel and Readiness, is responsible for the development and oversight of policy for the military departments to implement a coordinated community response approach to addressing domestic abuse. The DOD Family Advocacy Program (FAP) office provides guidance and technical assistance to the military departments and DOD components to support their efforts to address, among other things, domestic abuse. The Secretaries of the military departments are responsible for developing military service-wide policies, supplementary standards, and instructions to provide for the requirements within their respective installations' FAPs. Each military service has established a FAP that is responsible for overseeing and managing, among other things, the installation-level FAPs and the military service's domestic violence and domestic abuse prevention and response programs. When domestic abuse does occur, the military service installation FAP conducts a risk assessment and works to ensure the safety of the victims and help military families overcome the effects as well as change destructive patterns.

CDC and Its Sexual Violence Prevention Efforts

CDC is one of the major operating components of the Department of Health and Human Services, which serves as the federal government's principal agency for protecting the health of U.S. citizens. As part of its health-related mission, CDC serves as the national focal point for developing and applying disease prevention and control, environmental health, and health promotion and education activities. CDC, among other things, conducts research to enhance prevention, develops and advocates public health policies, implements prevention strategies, promotes healthy behaviors, fosters safe and healthful environments, and provides associated training. In 1992, CDC established the National Center for Injury Prevention and Control as the lead federal organization for violence prevention.
The center's Division of Violence Prevention focuses on stopping violence, including sexual violence, before it begins and works to achieve this by conducting research on the factors that put people at risk for violence, examining the effective adoption and dissemination of prevention strategies, and evaluating the effectiveness of violence prevention programs. In 2004, CDC published a framework for effective sexual violence prevention strategies. This framework includes prevention concepts and strategies, such as identifying risk and protective factors (i.e., factors that may put a person at risk for committing sexual assault or that, alternatively, may prevent harm). CDC's framework defines sexual violence as including non-contact unwanted sexual behaviors, sexual harassment, and physical sexual assault.

Continuum of Harm

DOD has acknowledged that connections exist across the continuum of unwanted sexual behaviors, including sexual harassment and sexual assault, and that this continuum of harm is reflected in key documents that guide prevention and response activities. For example, SAPRO has also reported that certain behaviors and activities, such as hazing, can lead to sexual assault. Additionally, DOD's Prevention Roundtable and 2014-2016 Sexual Assault Prevention Strategy have both adopted CDC's definition of "prevention" as it applies to sexual violence. CDC defines sexual violence to include sexual harassment and sexual assault. In 2014 and 2017, DOD contracted with RAND to conduct independent assessments of behaviors across the continuum of harm, including sexual assault and sexual harassment. In its 2014 report, RAND found that (1) 34 percent of male servicemembers who reported experiencing a sexual assault indicated that the assault was part of a hazing incident, (2) servicemembers who experienced sexual harassment or gender discrimination in the past year also experienced higher rates of sexual assault, and (3) approximately one-third of servicemembers who are sexually assaulted stated the offender sexually harassed them before the assault. In its 2017 report, RAND found that (1) people are more likely to engage in problematic behaviors, such as sexual harassment, if they perceive that peers and leaders condone those actions and (2) some organizations responsible for addressing unlawful discrimination and sexual harassment lack adequate policies, plans, information systems, and resources needed to establish a departmental approach to certain behavioral issues, inform senior leadership about these problems, and ensure that leadership's decisions about problematic behaviors are uniformly enforced.

CDC research revealed that behaviors such as bullying and homophobic teasing in early adolescence are significant predictors of sexual harassment over time. According to CDC, youth who engage in these behaviors have an increased potential to perpetrate sexual violence and engage in sexually harassing behavior. In response, CDC recommends that communities work to prevent all types of violence from occurring and coordinate and integrate responses to violence in a way that recognizes these connections. CDC's research has also established that survivors of one form of violence are more likely to be victims of other forms of violence, that survivors of violence are at higher risk for behaving violently, and that people who behave violently are more likely to commit other forms of violence.
Further, CDC states that violence prevention and intervention efforts that focus on only one form of violence can be broadened to address multiple, connected forms of violence to increase the public health impact.

DOD's Policies on Sexual Harassment Include Some but Not All of CDC's Principles and Most Relevant Legislative Elements

DOD's policies on sexual harassment include some but not all of CDC's principles and most relevant legislative elements. OSD and military service-specific sexual harassment policies generally include prevention strategies that CDC has identified in its principles for sexual violence prevention but leave out risk and protective factors, as well as risk domains. Additionally, DOD's sexual harassment policies include most elements identified in section 579 of the NDAA for FY 2013, but do not consistently include mechanisms for anonymous reporting. ODMEO officials stated that they plan to issue a new policy that is intended to focus on sexual harassment and other forms of harassment, but it is too early to know whether that policy will include all the CDC principles or mechanisms for anonymous reporting. We also noted during our review that most existing policies have not yet been updated to reflect a provision in the fiscal year 2017 NDAA that redefined sexual harassment for certain purposes so it is no longer defined solely as a form of sex discrimination but is recognized also as an adverse behavior on the spectrum of behaviors that can contribute to an increase in the incidence of sexual assault.

DOD's Sexual Harassment Policies Include Some of CDC's Principles for Preventing Sexual Violence but Not Others

DOD's sexual harassment policies include some of the principles that have been developed by CDC as part of a framework for preventing sexual violence, but other principles are not included. OSD includes sexual harassment as part of its broader military equal opportunity policy. It addresses, among other things, processes for preventing and responding to cases of discrimination, including sexual harassment; education and training in equal opportunity; and complaints processing. The military services' policies on sexual harassment cover similar topics, such as chain of command responsibilities, complaint processing, and definitions for sexual harassment; however, they have some differences. For example, while all policies include provisions on sexual harassment prevention training, the Army's and the Navy's policies include specific characteristics of effective training. Both policies also specify what should be included in trainings for different levels of the chain of command. The Marine Corps and Air Force policies simply state that commanders must conduct sexual harassment prevention training.

CDC's framework defines sexual violence as including non-contact unwanted sexual behaviors, sexual harassment, and physical sexual assault. We applied six principles for sexual violence prevention from CDC's framework to DOD's sexual harassment policies. These principles are:

Risk factors: Factors that may put people at risk for sexual violence perpetration or victimization, such as an organizational climate that either explicitly or implicitly condones sexual harassment.

Protective factors: Factors that may protect high-risk people from harm, such as an organizational climate that promotes respect amongst personnel at all levels.
Primary strategies for prevention: Strategies that occur before sexual violence takes place to prevent initial perpetration, such as sexual harassment prevention training.

Secondary strategies for prevention: Immediate responses after sexual violence has occurred to address the early identification of victims and the short-term consequences of violence, such as mechanisms for reporting instances of sexual harassment and immediate interventions.

Tertiary strategies for prevention: Long-term responses after sexual violence has occurred to address the lasting consequences of violence and sex-offender treatment interventions, such as long-term treatment of the victim and perpetrator.

Risk domains: Levels at which risk and protective factors should be categorized, including individual, relationship, community, and societal. In its sexual assault prevention strategy, DOD adapted risk domains to the military population, using the levels of individual, relationship, leaders at all levels, military community, and society.

Our comparison of DOD sexual harassment policies with CDC's framework for preventing sexual violence showed that the policies include some of the principles in the framework but not others (see table 1). Our analysis showed that the OSD and the Air Force policies each include two of the six principles in CDC's framework, and the Army, the Navy, and the Marine Corps policies include three principles. Specifically, the policies generally identify sexual harassment prevention training for the armed forces, a primary strategy for prevention. In addition, the policies generally outline mechanisms for reporting and responding to sexual harassment, considered a secondary strategy for prevention. The Army, the Navy, and the Marine Corps policies outline counseling support and referral services, as well as specifying the options available for administrative or judicial punishment, including discharge from service for perpetrators, which can be considered tertiary strategies for prevention.

Common elements missing from DOD's sexual harassment policies are risk factors and protective factors, which identify conditions or behaviors that might heighten or lower the risk of sexual harassment victimization or perpetration, respectively. Examples of risk factors for sexual violence identified by the CDC include, but are not limited to, alcohol and drug use, hypermasculinity, emotionally unsupportive family environments, general tolerance of sexual violence within the community, and societal norms that support male superiority and sexual entitlement. Examples of protective factors from the CDC include emotional health and connectedness, and empathy and concern for how one's actions affect others. Additionally, RAND identified an organizational climate that is oppositional to sexual violence as a protective factor. The policies also did not include risk domains, which would categorize risk and protective factors at the individual, relationship, community, and society levels. ODMEO officials told us that they are familiar with the CDC framework and are considering using it as a source of best practices for a new sexual harassment prevention strategy. DOD has previously used CDC's sexual violence prevention framework to guide its sexual assault prevention strategy.
In the absence of more comprehensive policies on sexual harassment that fully include the principles in the CDC framework for sexual violence prevention, DOD may be missing opportunities to address and potentially reduce incidents of sexual harassment in the military population based on risk and protective factors and effective, tested strategies. Specifically, DOD may be missing the opportunity to identify risk factors, which would help to recognize situations where individuals and populations may be at a higher risk of sexual harassment perpetration or victimization; identify protective factors to lower the risk of sexual harassment; develop mechanisms to address sexual harassment across risk domains—at the individual, relationship, community, and society levels; and develop tertiary strategies, or long-term responses after sexual violence has occurred, to address the lasting consequences of violence and sex-offender treatment interventions.

DOD's Sexual Harassment Policies Include Most Elements Mandated in the NDAA for FY 2013, but Some Do Not Include Mechanisms for Anonymous Reporting

DOD's sexual harassment policies include three elements required by section 579 of the NDAA for FY 2013, but some do not include one element involving the anonymous reporting of incidents. Section 579 mandated that DOD, among other things, develop a comprehensive sexual harassment policy that includes the following elements: (1) prevention training for members of the armed forces; (2) mechanisms for reporting sexual harassment, including mechanisms for anonymous reporting; and (3) mechanisms for resolving sexual harassment that include the prosecution of perpetrators. In 2014, the Office of the Under Secretary of Defense for Personnel and Readiness issued a policy memorandum addressing the provisions of section 579 and directed the military services to develop implementing instructions that include mechanisms for anonymous reporting.

We compared DOD's policies with the required elements in section 579 and found that OSD and military service policies generally include the required elements except for the element focused on DOD including anonymous reporting in its policies for sexual harassment. The OSD, Army, and Marine Corps policies do not include anonymous reporting, while the Air Force policy and a new Navy policy do. Officials from ODMEO said that providing an option for anonymous reporting is important because it increases the odds that incidents will be reported. ODMEO officials also told us that the services have hotlines that servicemembers can use to anonymously report complaints of sexual harassment, and the Air Force and Navy policies note that their respective military servicemembers have options for anonymous reporting. While the military services may have mechanisms in place for anonymous reporting of sexual harassment, these mechanisms are not included in OSD's policy—as required by section 579—or the policies of two of the services, those of the Army and the Marine Corps. Without including anonymous reporting of sexual harassment complaints in DOD's sexual harassment policies, the statutory requirement for anonymous reporting may be interpreted and applied inconsistently throughout the military services, or left unmet.
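To illustrate the kind of policy-to-criteria comparison described above, the following is a minimal sketch that records, for each policy, whether it includes the section 579 elements and flags the gaps. The element names paraphrase section 579; the only inclusion finding taken directly from this report is that the OSD, Army, and Marine Corps policies do not include anonymous reporting, and the remaining entries are shown as included simply to stand in for the general finding that the other elements are present. This is an illustrative aid, not GAO's analysis method.

```python
# Minimal sketch of a policy coverage check against the section 579 elements.
# Element names paraphrase section 579 of the NDAA for FY 2013; the False entries for
# anonymous reporting reflect the findings described in this report, and the remaining
# True entries stand in for the general finding that the other elements are included.

ELEMENTS = [
    "prevention training",
    "reporting mechanisms",
    "anonymous reporting",
    "resolution mechanisms (including prosecution)",
]

POLICIES = ["OSD", "Army", "Navy", "Marine Corps", "Air Force"]

coverage = {policy: {element: True for element in ELEMENTS} for policy in POLICIES}
for policy in ("OSD", "Army", "Marine Corps"):
    coverage[policy]["anonymous reporting"] = False

def missing_elements(coverage):
    """Return, for each policy, the elements the policy does not include."""
    return {
        policy: [element for element, included in elements.items() if not included]
        for policy, elements in coverage.items()
    }

for policy, missing in missing_elements(coverage).items():
    print(f"{policy}: {'missing ' + ', '.join(missing) if missing else 'includes all elements'}")
```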
Development of New OSD Harassment Policy May Provide Opportunities for Enhanced Oversight and More Consistent Approaches

OSD is developing a new policy—planned to be issued in fiscal year 2018—that will specifically focus on various forms of harassment, including sexual harassment, hazing, and bullying. ODMEO officials who are developing the new policy stated that it is intended, among other things, to enhance oversight of sexual harassment prevention and response within the department. However, because the policy is under development, it is too early to determine how the policy will address the CDC principles and anonymous reporting, as discussed earlier. Further, it is unclear how OSD plans to improve oversight and whether it intends to include performance goals, objectives, milestones, and metrics, as we previously recommended in 2011. Although OSD in 2014 directed the military services to improve their oversight of sexual harassment, none of the military services were able to demonstrate that they had implemented all the required elements. Specifically, DOD's 2014 policy memorandum addressing the provisions of section 579 also directs the military services to develop a sexual harassment oversight framework to be reviewed quarterly by a senior leadership forum that includes long-term goals, objectives, and milestones; criteria for measuring progress; results-oriented performance measures to assess the effectiveness of service sexual harassment policies and programs; standards for holding leaders accountable for promoting, supporting, and enforcing policies, plans, and programs; and strategies to implement the oversight framework.

While some of the military services have included elements of the oversight framework directive from the 2014 policy memorandum, none of them were able to provide information that demonstrated that they had fulfilled all requirements set forth by that policy memorandum. For instance, when asked, none of the military services were able to provide details showing that they have senior leader forums that review their oversight efforts on a quarterly basis. Officials from the Air Force told us that they were waiting for ODMEO to release a new sexual harassment policy before establishing the oversight framework. Officials from the Navy referred us to their July 2017 sexual harassment policy, which instructs the Navy Sexual Harassment Prevention and Equal Opportunity Office to develop and implement standards for holding leaders accountable for promoting, supporting, and enforcing sexual harassment prevention and response policies, plans, and programs, and to develop results-oriented performance measures to assess the effectiveness of sexual harassment prevention and response policies and programs. Officials from the Army referred us to their SHARP Campaign Plan, which outlines methods to hold leaders accountable for taking appropriate action to address sexual harassment; goals and objectives for the program; and ways to measure program effectiveness. The Marine Corps did not respond to our request for information regarding an oversight framework for sexual harassment. A new department-wide policy on sexual harassment could be helpful to the military services as they review and update their respective policies. As noted earlier, military service policies have some differences in how they address aspects of sexual harassment. The Marine Corps told us that they have been waiting for additional guidance from OSD before updating their sexual harassment policies.
However, following publicized incidents of Marines posting inappropriate photos of female servicemembers online without their consent, the Marine Corps updated its guidance in May 2017, adding “the distribution or broadcasting of an intimate image, without consent” to its list of sexual harassment incidents that mandate separation processing. Additionally, in May 2017, a Marine Corps official said the service was revising its sexual harassment policy. The Navy updated its sexual harassment policy in July 2017 without additional guidance from OSD. We also noted during our review that most existing policies have not yet been updated to reflect a provision in the fiscal year 2017 NDAA that redefined sexual harassment for certain purposes so it is no longer defined solely as a form of sex discrimination but is recognized also as an adverse behavior on the spectrum of behavior that can contribute to an increase in the incidence of sexual assault. We asked DOD officials from several offices about the implications of this change. They identified some actions they will take, but the full implications, if any, of the change are unclear. Officials from the Office of the Assistant Secretary of Defense for Readiness said that there are no significant implications of the sexual harassment definition change beyond making conforming revisions to policy documents and guidance. ODMEO officials said that adjusting to the new definition of sexual harassment would not significantly affect their work at the OSD level, since they are already updating their sexual harassment policy to reflect this change and since sexual harassment is expected to remain within the responsibilities of ODMEO. They added that the military services will likely have to adjust to the new definition of sexual harassment, but did not offer details on how they would have to adjust. The Navy's new policy dated July 2017 reflects the new definition, but the other military services have yet to incorporate the change. Officials from SAPRO said that they are working with ODMEO to revise surveys on unwanted sexual behaviors to reflect the new definition. SAPRO officials further stated that sexual harassment should remain under ODMEO's purview since ODMEO personnel are trained specifically in sexual harassment response. Officials from the Army's SHARP program said that the new definition means that sexual harassment will more often be considered misconduct, and taken more seriously. Since OSD is in the process of updating its policy, we are not making any recommendations. However, it will be important for OSD and the military services to address our prior recommendation regarding improving the oversight framework as well as incorporating the new definition of sexual harassment required by the fiscal year 2017 NDAA while updating their policies. DOD Has Processes for Maintaining and Reporting Consistent Data on Incidents of Sexual Assault and Domestic Abuse That Involves Sexual Assault, but Does Not Have Reasonable Assurance of Consistent Data on Sexual Harassment DOD has processes for maintaining and reporting consistent data on sexual assault incidents and domestic violence incidents that involve sexual assault, but the department does not have similar assurance of consistent data on incidents of sexual harassment. SAPRO and FAP each use centralized databases that enable them to maintain and report consistent data on those incidents that fall under their purview.
In contrast, DOD relies on military service-specific databases on sexual harassment incidents and does not have assurance of consistent data from these databases because it has not established standard data elements and definitions to guide the military services in maintaining and reporting these data. DOD Has Developed and Plans to Further Improve Centralized Databases That Enable It to Maintain and Report Consistent Data on Sexual Assault Incidents and Domestic Violence Incidents That Involve Sexual Assault DOD uses centralized databases to maintain and report data on sexual assault incidents in the military and domestic violence incidents involving sexual assault. Specifically, SAPRO and the military services use the Defense Sexual Assault Incident Database (DSAID), and FAP uses the DOD Central Registry. These databases maintain data on incidents that are included in statutorily required annual reports to Congress on sexual assaults in the military. In 2011, Congress mandated that DOD provide annual reports that include: the number of sexual assaults committed against and by members of the armed forces that were reported to military officials, including unsubstantiated and substantiated reports with a synopsis of each substantiated case organized by offense and the action taken, including disciplinary actions; the policies, processes, and procedures implemented by the Secretary concerned during the year covered by the report in response to incidents of sexual assaults; the number of substantiated sexual assault cases in which the victim is deployed where the assailant is a foreign national; and a description of the implementation of the accessibility plan, including a description of the steps taken to ensure that trained personnel, appropriate supplies, and transportation resources are available to deployed units. The most recent DOD annual report on sexual assault was issued in May 2017 and covered fiscal year 2016. The report includes data on the number of both restricted and unrestricted reports of sexual assault involving servicemembers. The report also contains separate enclosures for the Army, the Navy (including the Marine Corps), the Air Force, and the National Guard, as well as annexes on the Workplace and Gender Relations Survey of Active Duty Members (WGRA) and the Military Investigation and Justice Experience Survey (MIJES). The WGRA annex discusses topics including the continuum of harm and the MIJES annex contains information on closed cases of sexual assault. SAPRO and FAP both contributed sexual assault incident data to the fiscal year 2016 report, and our review of the underlying databases found that data elements and definitions were defined and management was able to process the data into consistent information. Specifically, the two databases used are the: DSAID Data on Sexual Assault Incidents: DSAID captures DOD-wide data on certain incidents of sexual assault that involve a servicemember or in some cases, when a sexual assault involves a servicemember’s spouse or adult family member or a DOD civilian or contractor. However, FAP-related sexual assault incidents are not captured in DSAID. Using information generated by DSAID, SAPRO includes both substantiated and unsubstantiated reports of sexual assault in its annual report. In 2017, we reviewed DSAID and found that DOD had taken steps to ensure the quality and consistency of data in DSAID as well as to monitor the data entered into the system. 
In addition, OSD had provided the military services with definitions for required data elements in the database, which include details on the incident, victim, and alleged offender. In addition, we identified several technical challenges with the system, including issues with the system’s speed and ease of use; interfaces with other external DOD databases; and users’ ability to query data and generate reports. At the time of the report’s release, DOD had plans to modify DSAID. As of July 2017, DOD officials told us that they are still in the process of making modifications to DSAID to resolve or alleviate the technical challenges for users. DOD Central Registry Data on Domestic Abuse Incidents Involving Sexual Assault: The DOD Central Registry captures DOD-wide data on reports of domestic abuse on populations within FAP’s purview, including on family members of servicemembers as well as on their intimate partners. The DOD Central Registry includes details of each case such as the status of cases, the demographics of the perpetrator and victim, the specific type of abuse, and other details surrounding the incident. FAP officials explained that they do not use the “substantiated” and “unsubstantiated” terminology like SAPRO does. Rather, FAP, which is not responsible for determining criminal or legal disposition, uses the terms “met criteria” and “not met criteria” for maltreatment. This difference in terminology has to do with FAP’s process for determining if an incident meets the clinical criteria to be classified as abuse for the purpose of developing an intervention/treatment plan for both the victim and the offenders involved in the allegations of domestic violence. Incidents that are determined as having met criteria are entered into the DOD Central Registry. We reviewed the DOD Central Registry and found that it includes well defined data elements and descriptions for collecting data on cases of domestic violence including those that involve sexual assault. The data in the DOD Central Registry includes 46 discrete data elements, including the relationship between the victim and perpetrator, the timeline of the case, and actions taken and treatments administered in response to the incident. The elements are defined and described in an OSD policy. In its annual reporting to Congress, FAP provides the number of domestic violence incidents involving sexual assault that met criteria and the total number of instances of domestic violence that did not meet criteria. However, FAP does not maintain or report data on the total number of reported domestic violence incidents that specifically involve sexual assault. That is because only the details of cases that meet criteria are recorded in the DOD Central Registry. A FAP official said that the military services likely have more detailed information about cases that did not meet the criteria, but it does not collect these data in the DOD Central Registry. A provision in the NDAA for FY 2017 requires DOD to submit an annual report on child abuse and domestic abuse incident data, including the number of incidents reported during the year involving the physical or sexual abuse of a spouse, intimate partner, or child. This report is to be submitted simultaneously with submission of DOD’s annual sexual assault report to Congress. 
FAP officials told us that they are currently working with SAPRO to ensure that all reported incidents of domestic violence involving sexual assault, including those that did not meet the criteria, are included in the annual sexual assault report. DOD Reports Annually on Sexual Harassment, but Does Not Have Reasonable Assurance That the Military Services Maintain Consistent Data on Sexual Harassment Incidents Though not required to do so, DOD has included sexual harassment incident data in an appendix of its annual report on sexual assault in the military. The appendix provides information on the total number of sexual harassment reports over the fiscal year and the total number of substantiated sexual harassment reports. It also breaks down complaints by sex, service, and pay grade. ODMEO generates the reported data through annual data calls to each military service; however, it does not have assurance that the services maintain consistent data on sexual harassment incidents consistent with federal standards of internal control. The military services maintain sexual harassment incident data in military service-specific databases, and there is no centralized database similar to DSAID or the Central Registry. The military service databases are intended to collect data on formal complaints. According to the military services, the Army, the Air Force, and the Marine Corps use web-based systems, and the Navy tracks data using an Excel spreadsheet. Each service has a discrete process for entering and performing quality checks on sexual harassment incident data in its respective database, as shown in table 2. Although the military services perform some data quality checks as shown in table 2, ODMEO does not have assurance the military services are maintaining consistent data because it has not defined standard data elements and definitions for the information in their databases. Rather, the individual military services have established their own data elements and definitions. We compared data elements and definitions from each of the military services and found that there are several data elements that remain consistent throughout the services. For example, each military service records whether the complainant and offender are in the same unit, what their relationship is to each other, and the disposition of cases. However, we also found inconsistencies in data fields and their definitions across the military services, and some of the military services have data fields and definitions that do not exist in other databases. For example, the Marine Corps records whether or not the incident involves alcohol or drug use, which the rest of the military services do not record, and the military services record dates differently between their respective databases. For example, the Air Force records an initial date, the date the complaint form was signed, the date the general court martial was sent, the date the legal review was completed, and the final review date. The Marine Corps records the date the incident was reported, the date the incident occurred and whether the incident occurred over multiple dates; the dates associated with notifications and status updates to general courts martial proceedings; the dates associated with steps in the investigation, including any extensions; the dates associated with dispositions; and the dates associated with appeals. Additionally, the military services have different ways of categorizing sexual harassment incidents, as shown in table 3. 
As shown in table 3, while some data descriptions are similar—for instance, each of the military services includes crude/offensive behavior, unwanted sexual attention, and sexual coercion—there are differences as well. The Air Force, for example, also categorizes sexual harassment into verbal, nonverbal, physical, and other, whereas the other military services' top-level categories are different. The Navy has a “not applicable” category that it uses for complaints that do not fall under sexual harassment, and only the Air Force has an “other” category. Because the military services have different descriptors for similar data fields, DOD cannot ensure that the services are categorizing similar types of sexual harassment in the same way. In addition, we found that the Army is more detailed in characterizing different types of sexual harassment. Specifically, as shown in table 4, the Army has an additional data field that provides more detailed descriptions of three types of sexual harassment; the other military services' respective databases do not have this level of detail. Because the Army has this additional data field, it can capture information on multiple types of harassment that may occur in a single incident. The other military services, in contrast, do not have this capability in their respective databases. To illustrate, if one case of sexual harassment involved both verbal and nonverbal forms of sexual harassment, the Army could choose a more specific characterization to describe the incident, while the other military services would characterize the incident in more general terms. ODMEO officials are considering adapting an existing system to track instances of sexual harassment department-wide. That system, called Force Risk Reduction (FR2), is currently used to track safety issues like military injuries, civilian workers' compensation claims, and casualty notifications at DOD. ODMEO recently completed a pilot of the system with the Marine Corps, the Navy, and the Army to test whether it would be suitable for adaptation to track sexual harassment data, and is planning a second pilot to include the Air Force and the National Guard Bureau. According to ODMEO officials, their adaptation of FR2 is intended to collect aggregate-level sexual harassment data from the military services, and the military services will continue to operate and rely on their individual databases to maintain more detailed case-level information on incidents. For example, ODMEO's adaptation of FR2 would not have details such as descriptions of specific incidents, or information on dates associated with investigations or appeals. These types of data will continue to be maintained in the service systems. ODMEO officials told us that their new data system, if implemented, is not designed to collect case-level details, in order to avoid collecting personally identifiable information. Federal internal control standards state that management should define the identified information requirements at the relevant level and requisite specificity for appropriate personnel. Management should also process the obtained data into quality information. Consistency of information meets the identified information requirements when relevant data from reliable sources are used.
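To make concrete what standard data elements and definitions could look like in practice, the sketch below shows one possible way to express a shared record layout for a formal sexual harassment complaint. It is a minimal illustration only: the field names, category labels, and example values are hypothetical and are not drawn from DOD's or any military service's actual databases. Defining the categories and date fields once, and allowing a record to carry more than one category, would give every service the kind of multi-type detail that currently only the Army's additional data field captures.

```python
# Illustrative only: a hypothetical standardized layout for a formal sexual
# harassment complaint record. Field names and categories are examples, not
# DOD's or any service's actual schema.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List, Optional


class HarassmentCategory(Enum):
    """Hypothetical shared top-level categories, defined once for all services."""
    CRUDE_OFFENSIVE_BEHAVIOR = "crude/offensive behavior"
    UNWANTED_SEXUAL_ATTENTION = "unwanted sexual attention"
    SEXUAL_COERCION = "sexual coercion"
    OTHER = "other"


@dataclass
class HarassmentIncidentRecord:
    """One formal complaint, with the same fields and definitions in every database."""
    complaint_id: str
    service: str                       # e.g., "Army", "Navy", "Air Force", "Marine Corps"
    date_reported: date                # date the complaint was filed
    date_of_incident: date             # date the incident occurred (earliest, if multiple)
    categories: List[HarassmentCategory] = field(default_factory=list)  # multiple types per incident
    same_unit: Optional[bool] = None   # whether complainant and subject are in the same unit
    disposition: Optional[str] = None  # outcome, drawn from a shared list of values


# Example: a record capturing two types of harassment in a single incident,
# which a standardized, multi-valued category field makes possible.
record = HarassmentIncidentRecord(
    complaint_id="2017-0001",
    service="Army",
    date_reported=date(2017, 3, 15),
    date_of_incident=date(2017, 3, 10),
    categories=[HarassmentCategory.CRUDE_OFFENSIVE_BEHAVIOR,
                HarassmentCategory.UNWANTED_SEXUAL_ATTENTION],
    same_unit=True,
)
```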
While DOD is exploring implementing a system to track instances of sexual harassment department-wide, as currently planned this system will not collect case-level details, and individual military service systems will continue to be relied upon for this type of information. Inconsistencies in data elements and definitions among the military services generally mean that one military service may be maintaining sexual harassment data that are more or less detailed than, or simply different from, the data maintained by other military services. Additionally, inconsistent data elements and definitions may create difficulties in reporting sexual harassment data from the military services to OSD for a department-wide report, since ODMEO has to adapt data from the services to fit reporting requirements. Without standard data elements and definitions for sexual harassment data, DOD will continue to lack assurance about the consistency of these data across the military services. DOD Has Several Overarching Efforts to Address Unwanted Sexual Behaviors across the Continuum of Harm DOD has several overarching efforts to address unwanted sexual behaviors across the continuum of harm. Specifically, the department established an office in 2015 to oversee the integration and coordination of efforts to address unwanted sexual behaviors and is in the process of developing an overarching prevention strategy. However, because the strategy is under development, it is unclear whether it will contain key elements for long-term and results-oriented strategic planning. DOD also has ongoing collaborative efforts to address unwanted sexual behaviors along the continuum of harm. Specifically, we identified 15 collaborative efforts, including regular meetings, Integrated Product Teams, and working groups that involve multiple entities that address unwanted sexual behaviors. DOD Is Developing an Overarching Prevention Strategy to Address the Continuum of Harm, but It Is Unclear Whether DOD Will Include Key Elements of a Long-Term, Results-Oriented Strategy DOD has taken steps to integrate activities related to the continuum of harm and is in the process of developing an overarching prevention strategy. Based on its research, DOD has sought to understand and define the continuum of harm, including the shared characteristics that contribute to increased unwanted sexual behaviors along the continuum and implications for prevention and response efforts. Also, in November 2015, DOD established a new entity—the Office of the Executive Director for Force Resiliency, within the Office of the Assistant Secretary of Defense for Readiness—to oversee policies and initiatives related to the continuum of harm. Specifically, the Executive Director for Force Resiliency was expected to provide senior leader policy guidance and oversight for high-visibility offices that include SAPRO and ODMEO. In November 2016, the Office of the Executive Director for Force Resiliency was absorbed under the Assistant Secretary of Defense for Readiness. According to the Assistant Secretary of Defense for Readiness, the functions of the Office of the Executive Director for Force Resiliency remain, and coordination of the efforts of several offices that address the continuum of harm continues. Officials from the Office of the Assistant Secretary of Defense for Readiness and SAPRO told us that they are drafting an overarching prevention strategy to encompass behaviors along the continuum of harm.
However, because the strategy is still under development, its contents and timelines are unclear. We have previously identified six elements of strategic management planning that are key for establishing a long-term, results-oriented strategic planning framework: (1) a mission statement, (2) long-term goals, (3) strategies to achieve goals, (4) external factors that could affect goals, (5) the use of metrics to gauge progress, and (6) evaluations of the plan to monitor goals and objectives. By incorporating the elements of a comprehensive and results-oriented strategy into its overarching prevention strategy, the department will be better positioned to effectively coordinate and integrate prevention activities and reduce unwanted sexual behaviors. A mission statement, along with long-term goals and strategies to achieve those goals, should help to focus efforts in integrating prevention activities, and metrics and evaluations will allow the department to gauge progress and make changes as necessary, while also accounting for external factors that may impact progress towards goals. DOD Has Ongoing Collaborative Efforts to Address Behaviors along the Continuum of Harm Our review identified 15 collaborative efforts that DOD has used to address behaviors along the continuum of harm, including sexual harassment, sexual assault, and domestic violence involving sexual assault. Three of these efforts are cross-cutting between all three of the key OSD stakeholders—ODMEO, FAP, and SAPRO—and five involve cross-cutting efforts by at least two of the key stakeholders. Figure 1 lists DOD's 15 collaborative efforts. Regarding the three cross-cutting efforts involving all three of the key stakeholders, the Sexual Assault Prevention and Response Integrated Product Team provides a forum for OSD, the military departments, and the National Guard Bureau to address sexual assault prevention efforts. The team meets bimonthly and serves as the implementation and oversight arm for DOD's Sexual Assault Prevention and Response (SAPR) program. The team also coordinates new policies; reviews existing SAPR policies and programs to ensure they are consistent with applicable instructions; and monitors the progress of program elements including DOD's SAPR strategic plan tasks, DOD's sexual assault prevention strategy tasks, and implementation of NDAA-related sexual assault issues. SAPRO leads this effort. The Prevention Collaboration Forum and working group develops coordinated prevention approaches that address factors impacting personnel readiness such as sexual harassment, sexual assault, and domestic violence involving sexual assault. According to its proposed charter, the focus of the forum is on enhancing the health of military unit and family climates as well as strengthening and promoting the resiliency and readiness of the total force through a coordinated effort around integrated policies, collaborative direction of research, alignment of resources, analysis of gaps, and synchronization of activities. The Assistant Secretary of Defense for Readiness leads this effort with SAPRO providing administrative and facilitation support. The Victim Assistance Leadership Council advises the Under Secretary of Defense for Personnel and Readiness on policies and practices related to victim assistance across DOD.
According to its charter, the council provides a forum for senior leaders to exchange information and collaborate on issues affecting victims of all forms of crime and harassment within DOD, including but not limited to victims of sexual harassment, sexual assault, and domestic violence involving sexual abuse. Leadership rotates among SAPRO, FAP, ODMEO, and other offices. Regarding the cross-cutting efforts involving two of the three key stakeholders, the Sexual Harassment Prevention and Response Working Group is led by ODMEO and includes SAPRO. The group was established to evaluate how to best position sexual harassment prevention and response policy and oversight and to leverage technology to automate annual reporting requirements. The four other cross-cutting efforts are (1) the hazing and bullying working group, (2) retaliation working groups created under the SAPR Integrated Product Team, (3) domestic abuse rapid improvement events, and (4) ODMEO and SHARP meetings. The remaining collaborative efforts we identified are specific to FAP, SAPRO, and ODMEO. For example, the Sexual Assault Prevention Roundtable is a forum for representatives from OSD, the military departments, and the National Guard Bureau to share information on sexual assault prevention efforts and requirements. According to its charter, the roundtable's activities include, among other things, sharing promising practices and prevention updates; discussing challenges in prevention program implementation, including servicemember training, and identifying approaches to address them; identifying metrics to assess the impact and effectiveness of prevention efforts, and opportunities to collaborate on research projects; and tracking the implementation of prevention tasks identified in the DOD SAPR strategy. SAPRO leads this effort. The Defense Diversity Working Group is an ODMEO-specific group that collaborates with various OSD and military service offices on military and civilian diversity and inclusion issues and implements mandated diversity plans and programs. Conclusions Studies by DOD and others have shown that unwanted sexual behaviors do not exist in isolation but are part of a range of interconnected, inappropriate behaviors that are connected to the occurrence of a sexual assault. While DOD has policies and procedures to prevent and respond to these types of unwanted behaviors, some of the policies do not include key elements like anonymous reporting of sexual harassment and principles in the CDC framework for sexual violence prevention. Fully including these elements in the department's policies can help ensure that the military services are interpreting and applying prevention and response efforts consistently and may also decrease the risk of perpetration or victimization related to instances of unwanted sexual behaviors. Further, DOD has developed reliable data systems for collecting and reporting data on some of the unwanted sexual behaviors, including sexual assault and instances of domestic violence involving sexual assault. However, inconsistencies in sexual harassment data elements and definitions may be creating difficulties in developing department-wide reports on unwanted sexual behaviors. Improving and standardizing data collection efforts will not only improve the quality of data that DOD and the military services collect but may also increase DOD's ability to further develop its understanding of the connections among unwanted sexual behaviors.
Finally, DOD officials have stated that they are in the early stages of developing an overarching strategy to address the interconnected nature of the range of unwanted sexual behaviors. To ensure that the department is appropriately concentrating its efforts to prevent and respond to the full range of unwanted behaviors, it is important that DOD include elements of a long-term, results-oriented strategy into its overarching prevention strategy. In doing so, DOD will be in a better position to effectively coordinate and integrate prevention activities and ultimately reduce instances of unwanted sexual behaviors. Recommendations for Executive Action We are making the following four recommendations to DOD: The Under Secretary of Defense for Personnel and Readiness should fully include in the new policy for sexual harassment the principles in the Centers for Disease Control’s framework for sexual violence prevention, including risk and protective factors, risk domains, and tertiary strategies. (Recommendation 1) The Under Secretary of Defense for Personnel and Readiness should include in the new policy for sexual harassment mechanisms for anonymous reporting of incidents consistent with section 579 of the National Defense Authorization Act for FY 2013. (Recommendation 2) The Under Secretary of Defense for Personnel and Readiness should (1) direct the Office of Diversity Management and Equal Opportunity to develop standard data elements and definitions for maintaining and reporting information on sexual harassment incidents at the military service level, and (2) direct the military services to incorporate these data elements and definitions into their military service-specific databases. (Recommendation 3) The Under Secretary of Defense for Personnel and Readiness should direct the Assistant Secretary of Defense for Readiness to incorporate in its continuum of harm prevention strategy all the elements that are key for establishing a long-term, results-oriented strategic planning framework. The elements are (1) a mission statement, (2) long-term goals, (3) strategies to achieve goals, (4) external factors that could affect goals, (5) use of metrics to gauge progress, and (6) evaluations of the plan to monitor goals and objectives. (Recommendation 4) Agency Comments and Our Evaluation We provided a draft of this report to DOD and CDC for review and comment. In its written comments, DOD concurred with three recommendations and partially concurred with one, noting planned actions to address this recommendation. DOD’s comments are reprinted in their entirety in appendix II. DOD and CDC also provided technical comments, which we incorporated into the report as appropriate. DOD concurred with our three recommendations that DOD fully include in the new policy for sexual harassment the principles in the CDC's framework for sexual violence prevention, that DOD also include in the new sexual harassment policy mechanisms for anonymous reporting, and that DOD incorporate in its continuum of harm strategy all the elements that are key for establishing a long-term, results-oriented strategic planning framework. 
With regard to our recommendation that DOD develop standard data elements and definitions for maintaining and reporting information on sexual harassment incidents and direct the military services to incorporate these into their databases, DOD partially concurred and stated that while a 2013 policy memorandum provides standard data elements and definitions, the services collect other data elements based on their unique needs. DOD stated that ODMEO will conduct a review to determine compliance with DOD reporting requirements and identify emerging policy modifications or changes/additions to standard definitions. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, and the Director, Centers for Disease Control and Prevention. In addition, the report is available at no charge on the GAO website http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Scope and Methodology To determine the extent to which the Department of Defense (DOD) has policies on sexual harassment that include Centers for Disease Control and Prevention (CDC) principles and relevant legislative elements, we obtained and reviewed Office of the Secretary of Defense (OSD) and service-level sexual harassment policies. We compared the policies with a framework developed by the CDC for preventing sexual violence, which CDC defines as including non-contact unwanted sexual behaviors, sexual harassment, and physical sexual assault. CDC's model is based on the concept of addressing the health of a given population based on common risk and protective factors and effective, tested strategies. We reviewed CDC's framework for preventing sexual violence as well as our report on DOD's sexual assault prevention strategy to identify six principles that an organization can include in a sexual violence prevention strategy or policy: Risk factors: Factors that may put people at risk for sexual violence perpetration or victimization, such as an organizational climate that either explicitly or implicitly condones sexual harassment; Protective factors: Factors that may protect high-risk people from harm, such as an organizational climate that promotes respect among personnel at all levels; Primary strategies for prevention: Strategies that occur before sexual violence takes place to prevent initial perpetration, such as sexual harassment prevention training; Secondary strategies for prevention: Immediate responses after sexual violence has occurred to address the early identification of victims and the short-term consequences of violence, such as reporting mechanisms and immediate interventions; Tertiary strategies for prevention: Long-term responses after sexual violence has occurred to address the lasting consequences of violence and sex-offender treatment interventions, such as the long-term treatment of the victim and perpetrator; and Risk domains: Levels at which risk and protective factors should be categorized, including: individual, relationship, community, and society.
DOD has previously adapted risk domains to the military population, using the levels of individual, relationship, leaders at all levels, military community, and society. DOD previously used CDC's framework for preventing sexual violence in the department's 2014-2016 Sexual Assault Prevention Strategy. In addition, we reviewed the OSD and service-level sexual harassment policies to determine the extent to which they included three elements identified in the National Defense Authorization Act (NDAA) for FY 2013, which directed DOD to develop a comprehensive policy that includes sexual harassment prevention training for the armed forces; mechanisms for reporting incidents, including mechanisms for anonymous reporting; and mechanisms for responding to and resolving instances of sexual harassment, including for the prosecution of perpetrators. Two GAO analysts independently reviewed the policies and determined whether or not each element was included. Any discrepancies were resolved through discussion and consultation with a third analyst. We interviewed officials in the Under Secretary of Defense for Personnel and Readiness' Office of Diversity Management and Equal Opportunity, who oversee department-wide policy on sexual harassment, to obtain an understanding of their roles and processes regarding sexual harassment as well as the status of policy development in that area. We also interviewed officials from military equal opportunity offices in the Air Force, the Navy, and the Marine Corps, as well as officials from the Army's Sexual Harassment/Assault Response and Prevention Office to obtain an understanding of the service sexual harassment offices and roles, as well as the status of updates to their respective policies. To determine the extent to which DOD has processes for maintaining and reporting consistent data on incidents of unwanted sexual behaviors, we reviewed DOD reports to Congress that provide incident data regarding unwanted sexual behaviors, including DOD's most recent annual report on sexual assault in the military. We identified the databases that generate the reported data and evaluated the processes for assuring the quality and consistency of data in those databases—including the Defense Sexual Assault Incident Database, which maintains sexual assault data; the Central Registry database, which maintains data on domestic violence involving sexual assault; and various military service-level databases that maintain sexual harassment data. To evaluate DOD's reported data we reviewed pertinent statutory provisions, DOD guidance, and the Standards for Internal Control in the Federal Government that address agencies' use of quality data and our prior reports evaluating sexual assault data. In evaluating the reported data, we obtained and reviewed statutory provisions with reporting requirements, as well as DOD guidance on data collection for sexual harassment, sexual assault, and domestic violence involving sexual abuse. With regard to DOD efforts to collect and maintain sexual assault data, we met with OSD, Navy, Air Force, and Marine Corps officials in their respective Sexual Assault Prevention and Response offices as well as officials in the Army's Sexual Harassment/Assault Response and Prevention office. We also reviewed our prior report on DOD's Defense Sexual Assault Incident Database and our prior report that evaluated sexual assault data across agencies.
To determine whether DOD has processes for collecting and maintaining consistent data for domestic violence with sexual assault, we obtained and compared data elements and processes from DOD’s Central Registry database, which contains data for domestic violence throughout the department. We also obtained and reviewed policies that outline processes for collecting and reporting domestic violence involving sexual abuse data, and interviewed officials from Family Advocacy Program offices in OSD and the Army, Navy, Marine Corps, and Air Force to determine data reliability and comprehensiveness. To determine the extent to which reports of sexual assault, including reports of sexual assault among servicemembers and reports of domestic abuse involving sexual assault, meet statutory requirements for reporting, we reviewed DOD reports to Congress that provide sexual assault incident data, including DOD’s most recent annual report on sexual assault in the military and compared those reports with requirements in the NDAA for FY 2011, which directs DOD to report the total number of substantiated and unsubstantiated sexual assault incidents, among other things. With regard to sexual harassment data, we interviewed officials in the Under Secretary of Defense for Personnel and Readiness’ Office of Diversity Management and Equal Opportunity, as well as officials from the Military Equal Opportunity offices in the Air Force, Marine Corps, and Navy, and officials from the Army Sexual Harassment/Assault Response and Prevention office. We collected and compared data fields and data definitions from the Army, Navy, Marine Corps, and Air Force offices that address sexual harassment. We compared the data elements to determine whether the data elements and definitions across the services are consistent. To identify the extent to which DOD has overarching efforts, including a prevention strategy, to address unwanted sexual behaviors across the continuum of harm, we met with officials in the Office of the Assistant Secretary of Defense for Readiness and DOD’s Sexual Assault Prevention and Response office. We reviewed our prior work and provisions from the Government Performance and Results Act to identify key elements that should be included in strategic plans as well as standards for coordinating within agencies. Key elements include (1) mission statement, (2) long-term goals, (3) strategies to achieve goals, (4) external factors that could affect goals, (5) use of metrics to gauge progress, and (6) evaluations of the plan to monitor goals and objectives. We identified and reviewed coordinating mechanisms used by OSD and the service offices that guide and oversee efforts to address unwanted sexual behaviors. We reviewed DOD, RAND Corporation, and CDC reports that addressed the continuum of harm and the relationship between the various forms of unwanted sexual behaviors. We interviewed officials from OSD and service-level Sexual Assault Prevention and Response, Family Advocacy, and Military Equal Opportunity offices and the Army’s Sexual Harassment/Assault Response and Prevention to identify the various efforts in which they participate. We also collected and reviewed charters and meeting notes for integrated product teams and working groups to identify their intended purposes, their activities, their membership, and whether they involved multiple offices addressing unwanted sexual behaviors. 
In identifying DOD’s collaborative efforts, we also reviewed our prior work on collaboration among federal agencies but we did not assess the effectiveness of department’s collaborative efforts. We conducted this performance audit from August 2016 to December 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the staff named above, key contributors to this report include Thomas Gosling (Assistant Director); Isabel Band; Matthew Bond; Vincent Buquicchio; Caroline DeCelles; Mae Jones; Kirsten Lauber; and Brian Pegram. Related GAO Products Sexual Assault: Better Resource Management Needed to Improve Prevention and Response in the Army National Guard and Army Reserve. GAO-17-217. Washington, D.C.: February 27, 2017. Military Personnel: DOD Has Processes for Operating and Managing Its Sexual Assault Incident Database. GAO-17-99. Washington, D.C.: January 10, 2017. Sexual Violence Data: Actions Needed to Improve Clarity and Address Differences Across Federal Data Collection Efforts. GAO-16-546. Washington, D.C.: July 19, 2016. DOD and Coast Guard: Actions Needed to Increase Oversight and Management Information on Hazing Incidents Involving Servicemembers. GAO-16-226. Washington, D.C.: February 9, 2016. Sexual Assault: Actions Needed to Improve DOD’s Prevention Strategy and to Help Ensure It Is Effectively Implemented. GAO-16-61. Washington, D.C.: November 4, 2015. Military Personnel: Actions Needed to Address Sexual Assaults of Male Servicemembers. GAO-15-284. Washington, D.C.: March 19, 2015. Military Personnel: DOD Needs to Take Further Actions to Prevent Sexual Assault during Initial Military Training. GAO-14-806. Washington, D.C.: September 9, 2014. Military Personnel: DOD Has Taken Steps to Meet the Health Needs of Deployed Servicewomen, but Actions Are Needed to Enhance Care for Sexual Assault Victims. GAO-13-182. Washington, D.C.: January 29, 2013. Preventing Sexual Harassment: DOD Needs Greater Leadership Commitment and an Oversight Framework. GAO-11-809. Washington, D.C.: September 21, 2011. Military Justice: Oversight and Better Collaboration Needed for Sexual Assault Investigations and Adjudications. GAO-11-579. Washington, D.C.: June 22, 2011. Military Personnel: DOD’s and the Coast Guard’s Sexual Assault Prevention and Response Programs Need to Be Further Strengthened. GAO-10-405T. Washington, D.C.: February 24, 2010. Military Personnel: Additional Actions Are Needed to Strengthen DOD’s and the Coast Guard’s Sexual Assault Prevention and Response Programs. GAO-10-215. Washington, D.C.: February 3, 2010. Military Personnel: DOD’s and the Coast Guard’s Sexual Assault Prevention and Response Programs Face Implementation and Oversight Challenges. GAO-08-924. Washington, D.C.: August 29, 2008. Military Personnel: The DOD and Coast Guard Academies Have Taken Steps to Address Incidents of Sexual Harassment and Assault, but Greater Federal Oversight Is Needed. GAO-08-296. Washington, D.C.: January 17, 2008.
Why GAO Did This Study Unwanted sexual behaviors in the military—including sexual harassment, sexual assault, and domestic violence involving sexual assault—undermine core values, unit cohesion, combat readiness, and public goodwill. Recent studies suggest that these behaviors are part of a “continuum of harm,” which DOD defines as a range of interconnected, inappropriate behaviors that are connected to the occurrence of sexual assault and that support an environment that tolerates these behaviors. Senate Report 114-255 included a provision for GAO to review efforts by DOD to prevent unwanted sexual behaviors in the military. GAO assessed the extent to which DOD has (1) policies on sexual harassment that include CDC principles and relevant legislative elements; (2) processes for maintaining and reporting consistent data on incidents of unwanted sexual behaviors; and (3) overarching efforts, including a prevention strategy, to address unwanted sexual behaviors across the continuum of harm. GAO reviewed DOD policies and pertinent databases, and interviewed agency officials. What GAO Found The Department of Defense's (DOD) policies on sexual harassment include some but not all of the Centers for Disease Control's (CDC) principles for preventing sexual violence and include most relevant legislative elements. GAO identified six principles from CDC's framework for preventing sexual violence, which CDC defines as including sexual harassment. GAO found that Office of the Secretary of Defense (OSD) and military service policies generally include CDC's principles regarding prevention strategies, but none address risk and protective factors, which identify conditions or behaviors that might heighten or lower the risk of sexual harassment victimization or perpetration, respectively. Additionally, a statutory provision in fiscal year 2013 mandated that DOD, among other things, develop a comprehensive sexual harassment policy that includes prevention training, mechanisms for anonymous reporting, and mechanisms for resolving incidents of sexual harassment. OSD and service policies are generally consistent with those required elements except for the inclusion of anonymous reporting. DOD is developing a new department-wide policy that will address sexual harassment, but it is too early to determine how the policy will address these issues. Without policies that include CDC's principles and mechanisms for anonymous reporting, DOD may miss opportunities to address and potentially reduce incidents of unwanted sexual behaviors. Finally, a statutory change in fiscal year 2017 redefined sexual harassment for certain purposes so it is no longer defined solely as a form of sex discrimination but is recognized also as an adverse behavior on the spectrum of behavior that can contribute to an increase in the incidence of sexual assault. While officials indicated a need to update policies, they were unclear on the full implications, if any, of this change. DOD has processes for maintaining and reporting consistent data on incidents of unwanted sexual behaviors including sexual assault and incidents of domestic violence that involve sexual assault, but does not have similar processes for maintaining and reporting data on incidents of sexual harassment. Specifically, DOD uses centralized databases to maintain and report data on incidents of sexual assault and domestic violence that involve sexual assault, but relies on military service-specific databases for information on incidents of sexual harassment. 
DOD has not established standard data elements and definitions to guide the services in maintaining and reporting data on sexual harassment. Inconsistencies in data elements and definitions generally mean that one service may be maintaining data that is more or less detailed than, or that differs from, the data maintained by other services. Such inconsistencies may create difficulties in reporting department-wide sexual harassment data, since the individual service data must be adapted to fit reporting requirements. DOD has several overarching efforts to address unwanted sexual behaviors across the continuum of harm, including developing an overarching prevention strategy. However, it is unclear whether the strategy under development will contain key elements for long-term and results-oriented strategic planning such as long-term goals, strategies to achieve goals, and metrics to gauge progress. Without incorporating these elements into its overarching prevention strategy, DOD may not be in a position to effectively coordinate and integrate prevention activities and reduce instances of unwanted sexual behaviors. What GAO Recommends GAO recommends that DOD fully include in its new policy on sexual harassment CDC's principles for sexual violence prevention and mechanisms for anonymous reporting, develop standard data elements and definitions for reporting sexual harassment incidents, and incorporate in its overarching prevention strategy elements key for a long-term, results-oriented strategy. DOD generally concurred with the recommendations.
Background The BSA established reporting, recordkeeping, and other AML requirements for financial institutions. By complying with BSA/AML requirements, U.S. financial institutions assist government agencies in the detection and prevention of money laundering and terrorist financing by, among other things, maintaining compliance policies, conducting ongoing monitoring of customers and transactions, and reporting suspicious financial activity. Regulation under and enforcement of BSA involves several federal agencies. FinCEN is responsible for administering the BSA and has the authority to enforce compliance with the act and its implementing regulations, including through civil money penalties. FinCEN issues regulations under BSA and relies on the examination functions performed by other federal regulators, including the federal banking regulators. FinCEN also collects, analyzes, and maintains the reports and information filed by financial institutions under BSA and makes those reports available to law enforcement and regulators. FinCEN has delegated BSA/AML examination authority for banks to the federal banking regulators. The federal banking regulators have issued their own BSA regulations that require banks to establish and maintain a BSA compliance program which, among other things, requires banks to identify and report suspicious activity. The banking regulators are also required to review compliance with BSA/AML requirements and regulations, which they generally do every 12 to 18 months as a part of their routine safety and soundness examinations. Federal banking regulators take a risk-based approach to BSA examinations—that is, they review key customer risks or specific problems identified by the bank. Among other things, examiners review whether banks have an adequate system of internal controls to ensure ongoing compliance with BSA/AML regulations. The federal banking regulators may take enforcement actions using their prudential authorities for violations of BSA/AML requirements. They may also assess civil money penalties against financial institutions and individuals independently, or concurrently with FinCEN. Components of Banks' BSA/AML Compliance Programs All banks are required to establish an AML compliance program that includes policies, procedures, and processes which, at a minimum, must provide for: a system of internal controls to ensure ongoing compliance, a designated individual or individuals responsible for managing BSA compliance (BSA compliance officer), training for appropriate personnel, independent testing for BSA/AML compliance, and appropriate risk-based procedures for conducting ongoing customer due diligence. BSA/AML regulations require that each bank tailor a compliance program that is specific to its own size and risks based on factors such as the products and services offered, customers, types of transactions processed, and locations served. BSA/AML compliance programs may include the following components: Customer Identification Program (CIP)—Banks must have written procedures for opening accounts and, at a minimum, must obtain each customer's name, date of birth, address, and identification number before opening an account. In addition, banks' CIPs must include risk-based procedures for verifying the identity of each customer to the extent reasonable and practicable.
Banks must also collect information on individuals who are beneficial owners of a legal entity customer in addition to the information they are required to collect on the customer under the CIP requirement. Customer Due Diligence (CDD)—CDD procedures enable banks to predict with relative certainty the types of transactions in which a customer is likely to engage, which assists banks in determining when transactions are potentially suspicious. Banks must document their process for performing CDD and implement and maintain appropriate risk-based procedures for conducting ongoing customer due diligence. These procedures include, but are not limited to, understanding the nature and purpose of customer relationships for the purpose of developing a customer risk profile, and conducting ongoing monitoring to identify and report suspicious transactions and, on a risk basis, to maintain and update customer information. Enhanced Due Diligence (EDD)—Customers who banks determine pose a higher money laundering or terrorist financing risk are subject to EDD procedures. EDD for higher-risk customers helps banks understand these customers’ anticipated transactions and implement an appropriate suspicious activity monitoring system. Banks review higher-risk customers and their transactions more closely at account opening and more frequently throughout the term of their relationship with the bank. Suspicious Activity Monitoring—Banks must also have policies and procedures in place to monitor transactions and report suspicious activity. Banks use different types of monitoring systems to identify or alert staff of unusual activity. A manual transaction monitoring system typically targets specific types of transactions (for example, those involving large amounts of cash and those to or from foreign areas) and includes a manual review of various reports generated by the bank’s information systems in order to identify unusual activity. An automated monitoring system can cover multiple types of transactions and use various rules, thresholds, and scenarios to identify potentially suspicious activity. These systems typically use computer programs to identify individual transactions, patterns of unusual activity, or deviations from expected activity. Banks that are large, operate in many locations, or have a large volume of higher-risk customers typically use automated monitoring systems. Banks also must comply with certain reporting requirements, including: CTR: A bank must electronically file a CTR for each transaction in currency—such as a deposit or withdrawal—of more than $10,000. SAR: Banks are required to electronically file a SAR when a transaction involves or aggregates at least $5,000 in funds or other assets, and the institution knows, suspects, or has reason to suspect that the transaction meets certain criteria qualifying as suspicious. Regulatory Requirements Related to Account Terminations and Branch Closures Generally, the federal banking regulators do not direct banks to open, close, or maintain individual accounts. However, banks generally include policies and procedures to describe criteria for not opening, or closing, an account in their BSA/AML compliance program. For example, although there is no requirement for a bank to close an account that is the subject of a SAR filing, a bank should develop policies and procedures that indicate when it will escalate issues identified as the result of repeat SAR filings on accounts, including criteria on when to close an account. 
Additionally, a bank’s CIP should contain procedures for circumstances when a bank cannot verify the customer’s identity, including procedures that include circumstances in which the bank should not open an account and when the bank should close an account. Federal banking regulators also cannot prohibit banks from closing branches. However, FDIC-insured banks are required to submit a notice of any proposed branch closing to their primary banking regulator no later than 90 days prior to the date of the proposed branch closing. The notice must include a detailed statement of the reasons for closing the branch and statistical or other information in support of the reasons. Banks are also required to mail a notice to the customers of the branch proposed to be closed at least 90 days prior to the proposed closing and must post a notice to customers in the branch proposed to be closed at least 30 days prior to the proposed closing. The notice should state the proposed date of closing and either identify where branch customers may obtain service following that date or provide a telephone number for customers to call to determine such alternative sites. Characteristics and Money Laundering-Related Risks of the Southwest Border Region In October 2017, Mexico was the second largest goods trading partner of the United States in terms of both imports and exports, according to U.S. Census trade data. Trade with Mexico is an important component of Southwest border states’ economies, which benefit from their proximity to the international border and the related seaports and inland ports for the exportation and importation of goods. The fresh produce industry is an example of a key industry in the border region. The fresh produce industry encompasses several activities involved with importation, inspection, transportation, warehousing, and distribution of Mexican- grown produce to North American markets, all of which provide employment opportunities and revenues to local economies. Another key industry in the region is manufacturing. The Southwest border has played a role in a growing trend known as production sharing, in which companies—predominantly based in the United States—locate some operations in Mexico, thus achieving lower costs in the overall production process. Local Southwest border communities also benefit from pedestrians crossing into the United States from Mexico to visit and shop in their communities. For example, Department of Transportation border crossing data show that in September 2017, nearly 750,000 pedestrians entered the United States at the San Ysidro, California, border crossing— the busiest pedestrian port of entry into the country. The Department of State has identified Mexico as a major money laundering country. As a result of its proximity to Mexico, the Southwest border region faces high money laundering and related financial crime risks. The U.S.-Mexico border includes major population centers, transportation hubs, and large tracts of uninhabited desert. According to Treasury’s 2015 National Money Laundering Risk Assessment, criminal organizations have used the vast border to engage in cross-border drug trafficking, human smuggling, and money laundering. The 2015 assessment also states that bulk cash smuggling remains the primary method Mexican drug trafficking organizations use to move illicit proceeds across the Southwest border into Mexico. 
Some cash collected domestically to pay the drug trafficking organizations for drugs is channeled from distribution cells across the United States to cities and towns along the Southwest border, and from there is smuggled into Mexico. All counties within the Southwest border region have been identified as either a High Intensity Financial Crime Area (HIFCA) or a High Intensity Drug Trafficking Area (HIDTA), with the vast majority identified as both (see fig. 1). HIFCAs and HIDTAs aim to concentrate law enforcement efforts at the federal, state, and local levels to combat money laundering and drug trafficking in designated high-intensity money laundering zones and in areas determined to be critical drug-trafficking regions of the United States, respectively.

Southwest Border Banks Report Heightened BSA/AML Compliance Risks and Challenges Due to Volume of High-Risk Customers

Several characteristics of the Southwest border region make the region a high-risk area for money laundering activity. These characteristics, which require additional efforts for Southwest border banks to comply with BSA/AML requirements, include high volumes of cash transactions, cross-border transactions, and foreign accountholders. Bank representatives we spoke with said that they manage these added BSA/AML compliance challenges through activities such as more frequent monitoring and investigation of suspicious activities, but that these efforts require an investment of resources.

Volume of Cash Transactions and Cross-Border Trade Increases Risk for Money Laundering and Terrorist Financing

Money laundering risk is high in the Southwest border region because of the high volume of cash transactions, the number of cross-border transactions, and foreign accountholders, according to bank representatives, federal banking regulators, and others. Cash transactions increase the BSA/AML compliance risk for banks because the greater anonymity associated with using cash results in greater risk for money laundering or terrorist financing. A regional economic development specialist noted, for example, that Mexican nationals who shop in border communities typically use cash as a payment form. Further, representatives from a regional trade group told us that border businesses prefer payment in cash over checks from Mexican banks because of potential variations in the exchange rate before a peso-denominated check clears. The trade group representatives also noted that currency exchanges add to the volume of cash transactions in the region.

In June 2010, the Mexican finance ministry published new AML regulations that restricted the amounts of physical cash denominated in U.S. dollars that Mexican financial institutions could receive. According to FinCEN officials and some of the federal bank examiners we spoke with, these regulations altered the BSA/AML risk profile of some U.S. banks, particularly those in the Southwest border region. For example, U.S. banks started receiving bulk shipments of currency directly from Mexican nationals and businesses, rather than from Mexican banks. This increased BSA/AML compliance risk for the U.S. banks because they now had to assess the risk of each individual customer shipping them currency, rather than the collective risk from their Mexican banking counterparts. In addition, according to FinCEN, the regulations added to the level of cash in the Southwest border region because businesses in the region saw higher levels of cash payments from Mexican customers.
This also created additional risk for U.S. banks when these businesses deposited the cash payments. Our review of data on banks’ CTR filings confirmed that bank branches that operate in Southwest border region counties handle more large cash transactions than bank branches elsewhere. For example, our analysis found that bank branches in Southwest border region counties generally file more CTRs than bank branches in comparable counties in the same border states or in other high-risk financial crime or drug trafficking counties that are not in border states. Specifically, in 2016, bank branches in Southwest border region counties filed nearly 30 percent more CTRs, on average, than bank branches in comparable counties elsewhere in their same state, and about 60 percent more than those in other high-risk counties outside the region. Similar differences occurred in 2014 and 2015 (see fig. 2). Cross-border transactions are also higher risk for money laundering because international transfers can present an attractive method to disguise the source of funds derived from illegal activity. Certain industries, such as agriculture, that are prevalent in the Southwest border region have legitimate business practices that could appear suspicious without sufficient context, regional representatives said. For example, representatives of one produce industry association we spoke with said produce distributors often import produce from Mexican farmers and pay them via wire transfer, which the farmers may then immediately withdraw in cash to pay laborers. Transactions that involve cross-border wire transfers and immediate withdrawals of cash may raise suspicion of money laundering that requires further scrutiny by the bank. BSA/AML regulations generally require banks to keep additional documentation for domestic and international fund transfers of $3,000 or more, including specific identifying information about the originator and beneficiary of the transaction. If the bank sends or receives funds transfers to or from institutions in other countries, especially those with strict privacy and secrecy laws, the bank should have policies and procedures to determine whether the amounts, the frequency of the transfer, and countries of origin or destination are consistent with the nature of the business or occupation of the customer. Southwest border banks cited foreign accountholders as another type of high-risk customer for money laundering and terrorist financing. These types of customers are prevalent in the Southwest border region, examiners said, and can create challenges for banks to verify and authenticate their identification, source of funds, and source of wealth. Southwest border banks and others cited these types of customers as adding BSA/AML compliance risk for banks, particularly if the accountholders do not reside in the United States. These customers may also have more frequent funds transfers to other countries. Foreign accountholders who are “senior foreign political figures” also create additional money laundering and terrorist financing risk because of the potential for their transactions to involve the proceeds from foreign-official corruption. Some Southwest border banks told us they provide accounts to senior foreign political figures, but may limit the number of those types of accounts. 
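To illustrate how the county-group comparison of CTR filings described earlier in this section could be carried out, the following sketch computes average CTR filings per bank branch for three groups of counties and the resulting percentage differences. The handful of records and their values are hypothetical stand-ins for the FinCEN CTR filing and branch location data underlying figure 2; the per-branch normalization, not the numbers, is the point.

```python
# A minimal sketch, with hypothetical data, of the county-group comparison of CTR
# filings per branch described above. Records and values are illustrative only.
from statistics import mean

# Each record: (county_group, ctr_filings, branch_count) for a single county-year.
county_records = [
    ("southwest_border", 5200, 14), ("southwest_border", 3100, 9),
    ("same_state_comparison", 3100, 11), ("same_state_comparison", 2200, 8),
    ("other_high_risk_nonborder", 2700, 12), ("other_high_risk_nonborder", 2200, 10),
]

def avg_ctrs_per_branch(group):
    """Average CTR filings per branch across the counties in a group."""
    rates = [ctrs / branches for g, ctrs, branches in county_records if g == group]
    return mean(rates)

border = avg_ctrs_per_branch("southwest_border")
same_state = avg_ctrs_per_branch("same_state_comparison")
other_high_risk = avg_ctrs_per_branch("other_high_risk_nonborder")

print(f"Southwest border counties:      {border:.1f} CTRs per branch")
print(f"Same-state comparison counties: {same_state:.1f} CTRs per branch")
print(f"Other high-risk counties:       {other_high_risk:.1f} CTRs per branch")
print(f"Border vs. same state:      {100 * (border / same_state - 1):.0f}% more CTRs per branch")
print(f"Border vs. other high-risk: {100 * (border / other_high_risk - 1):.0f}% more CTRs per branch")
```

The same kind of normalization matters for the filing comparisons later in this report: raw filing counts alone would partly reflect differences in the number of branches rather than in the intensity of cash activity.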
Southwest Border Banks' High-Risk Customers Require More Intensive Due Diligence and Monitoring

The volume of high-risk customers and cross-border transactions can lead to more intensive account monitoring and investigation of suspicious transactions, Southwest border bank representatives said. Performing effective due diligence and complying with CIP requirements for higher-risk customers and transactions can be more challenging because banks might need more specialized processes for higher-risk customers and transactions than for those that are lower risk. For example, representatives from some Southwest border banks told us their BSA/AML compliance staff travel to Mexico or collect information from sources in Mexico to establish the legitimacy of businesses across the border. Representatives from another bank said they ask to see 3 months of some high-risk businesses' previous bank statements to determine the typical volume of cash and wire transfers, and that this type of due diligence is very time-consuming. The bank also collects details about the recipients of the wired funds in an effort to determine the legitimacy of the payments. Some Southwest border banks also described using special processes to evaluate BSA/AML compliance risks for foreign customers and said they used extra caution before accepting them as customers. These special processes included translating business documents from Spanish to English to certify the legitimacy of business customers and developing internal expertise on currently acceptable identity documents issued by foreign governments.

Southwest border bank representatives we spoke with said addressing these compliance challenges also can require more resources for monitoring high-risk customers and investigating suspicious transactions. High-risk customers require additional detail to be collected when accounts are opened and on an ongoing basis. Representatives of one Southwest border bank explained that they monitor high-risk customers' transactions more frequently—every 3 months, compared to every 6 months for medium-risk customers. Further, high volumes of cash activity can generate substantial numbers of alerts in bank monitoring systems, and these alerts are evaluated by banks to determine whether SARs should be filed. Transaction structuring, which involves attempts to evade the $10,000 CTR filing requirement by, for example, making several smaller transactions, is a common source of alerts, some bank representatives said. Several banks we interviewed cited the investigation of potential structuring as one of their common BSA/AML compliance activities. Although many banks have monitoring software to generate suspicious activity alerts, representatives said the flagged transactions generally are investigated manually and can be a labor-intensive part of banks' overall BSA/AML compliance programs. Southwest border bank representatives we spoke with also told us that their suspicious activity monitoring systems often generate "false positives"—meaning further investigation leads to a determination that no SAR filing is warranted. As a result, the total number of SAR filings can actually understate banks' total BSA/AML compliance efforts associated with suspicious transaction monitoring.

We found that bank branches in Southwest border region counties filed more SARs, on average, from 2014 through 2016 than bank branches in comparable counties in the same border states or in other high-risk financial crime or drug trafficking counties that are not in border states.
For example, in 2016, bank branches in Southwest border region counties filed three times as many SARs, on average, as bank branches operating in other counties within Southwest border states and about 2.5 times as many SARs, on average, as bank branches in other high-risk financial crime or drug trafficking counties in nonborder states. These differences in SAR filings showed a similar pattern in 2014 and 2015 (see fig. 3).

Federal banking regulators cited some Southwest border banks for noncompliance with BSA/AML requirements from January 2009 through June 2016. Those citations included 41 formal or informal enforcement actions taken against Southwest border banks. FinCEN also took two formal enforcement actions during that period. As part of the bank examination process, the federal banking regulators also cited Southwest border banks for 229 BSA/AML violations from January 2009 through June 2016. Of these, SAR-related violations were the most common type of violation (33 percent). This was followed closely by violations related to BSA/AML monitoring and compliance (31 percent)—a category we defined to include competencies such as having an adequate system of BSA/AML internal controls and providing adequate BSA/AML training (see fig. 4).

Risks Related to Money Laundering Appear to Be a Factor in Reduced Access to Banking Services for Southwest Border Customers

Our nationally representative survey found that most Southwest border banks terminated accounts for reasons related to BSA/AML risk from January 2014 through December 2016 and limited, or did not offer, accounts to certain customer types, consistent with BSA/AML purposes. However, our survey also found that many Southwest border banks may be engaging in derisking. Nationally, our econometric analysis suggests that counties that were more urban or younger, or that had higher incomes or higher money laundering-related risk, were more likely to lose branches. Money laundering-related risks were likely to have been relatively more important drivers of branch closures in the Southwest border region.

Some Account Terminations and Limitations Are Consistent with BSA/AML Purposes

Most Southwest Border Banks Terminated Accounts Because of Suspicious Activity

Most Southwest border banks reported terminating accounts for reasons related to BSA/AML risk. Based on our survey results, from January 1, 2014, through December 31, 2016, we estimate that almost 80 percent of Southwest border banks had terminated personal or business accounts for reasons related to BSA/AML risk. For the subset of Southwest border banks whose operations extend outside of the Southwest border region, we estimate that almost 60 percent reported that they terminated business or personal accounts domiciled in their Southwest border branches. For banks that did not operate in the Southwest border region (non-Southwest border banks), account terminations related to BSA/AML risk varied by the size of the bank. For example, an estimated 93 percent of medium banks and an estimated 95 percent of large banks terminated accounts for reasons related to BSA/AML risk, compared to an estimated 26 percent of small banks. Among the five types of businesses we identified for our survey as high risk for money laundering and terrorist financing, cash-intensive small businesses (for example, retail stores, restaurants, and used car dealers) were the most common types of business accounts that Southwest border banks reported terminating for reasons related to BSA/AML risk.
For example, over 70 percent of Southwest border banks reported terminating cash-intensive small business accounts. Between 45 percent and 58 percent of Southwest border banks cited terminating accounts for the remaining four categories of high-risk business accounts we identified: money services businesses, domestic businesses engaged in cross-border trade, nontrade-related foreign businesses, and foreign businesses engaged in cross-border trade. Bank-Reported Data on Accounts Terminated in 2016 for BSA/AML Reasons In response to our survey, several banks provided data on the number of accounts they terminated in 2016 for reasons related to BSA/AML risk. We found that two extra-large banks (those banks with $50 billion or greater in assets) were responsible for the majority of these account terminations for both business and personal accounts. These terminations accounted for less than half a percent of the extra-large banks’ overall accounts. These numbers only represent account terminations for the banks that provided data and are not generalizable to the population of banks. The most common reason related to BSA/AML risk banks reported for terminating accounts from January 2014 through December 2016 was the filing of SARs associated with the accounts. Based upon our survey, we estimate that 93 percent of Southwest border banks terminated accounts because of the filing of SARs. Through discussions with Southwest border bank representatives, we found that banks vary the level of internal investigations they conduct into the suspicious activity before deciding to terminate an account as a result of a certain number of SAR filings. Representatives from 3 of the 19 Southwest border banks we spoke with told us that their account closure policies generally required the automatic termination of an account when a certain number of SARs—ranging from 1 to 4—were filed for an account. Representatives from two other Southwest border banks said a certain number of SARs filed for one account would lead to an automatic review of the account that would determine whether or not the account should be closed. Other Southwest border bank representatives we interviewed did not indicate having a specific policy for terminating accounts related to the number of SAR filings, but some of these representatives said that SAR filings were one of the factors that could lead to account terminations. Figure 5 shows the survey estimates for the other BSA/AML reasons Southwest border banks cited for terminating accounts. Some commonly cited reasons were the failure of the customer to respond adequately to requests for information as part of customer due diligence processes and the reputational risk associated with the customer type. For example, an estimated 80 percent of Southwest border banks cited the failure of the customer to respond adequately to requests for information as part of customer due diligence processes. Some Southwest border bank representatives told us that sometimes customers do not provide adequate documentation in response to their due diligence inquiries. These representatives said that after a certain number of attempts to obtain the documentation, the lack of customer responsiveness results in them terminating the account. A bank may also terminate an account if the activity of the customer could risk the reputation of the bank. 
About 68 percent of Southwest border banks that terminated accounts cited the reputational risk associated with the customer type as a reason for terminating an account. Some Southwest border bank representatives we spoke with said they have closed accounts due to the nature of the business. For example, some bank representatives said they have closed accounts for gambling and marijuana businesses. In addition, law enforcement officials from the Southwest Border Anti-Money Laundering Alliance told us that they thought some of the accounts terminated by Southwest border banks were a result of information the banks received from local law enforcement and other federal agencies. For example, when funnel accounts—accounts in one geographic area that receive multiple cash deposits and from which funds are withdrawn in a different geographic area with little time elapsing between the deposits and withdrawals—were first identified by law enforcement as a money laundering method, banks responded by closing these types of accounts.

Non-Southwest border banks generally reported the same primary reasons for terminating accounts as Southwest border banks. The top two reasons for terminating accounts cited by non-Southwest border banks that responded to the survey were the filing of SARs associated with the accounts and the failure of the customer to respond adequately to requests for information as part of customer due diligence processes.

A majority of Southwest border banks and non-Southwest border banks reported limiting or not offering accounts to certain types of businesses considered high risk for money laundering and terrorist financing, particularly money services businesses and foreign businesses. For example, an estimated 76 percent of Southwest border banks limited, or did not offer, accounts to nontrade-related foreign businesses, 75 percent to money services businesses, and 72 percent to foreign businesses engaged in cross-border trade. The most common reason (cited by 88 percent of Southwest border banks) for limiting, or not offering, an account to these types of businesses was that the business type fell outside of the bank's risk tolerance—the level of risk an organization is willing to accept around specific objectives. Similarly, 69 percent of Southwest border banks cited the inability to manage the BSA/AML risk associated with the customer (for example, because of resource constraints) as a factor for limiting, or not offering, accounts. Representatives from some Southwest border banks we spoke with explained that they do not have the resources needed to conduct adequate due diligence and monitoring for some of the business types considered high risk for money laundering and terrorist financing. As a result, they told us that they no longer offer accounts for certain business lines. For example, a representative from one Southwest border bank told us that the bank no longer offers accounts to money services businesses because of the BSA/AML compliance requirements and monitoring needed to service those types of accounts. In particular, they stated they do not have the resources to monitor whether the business has the appropriate BSA/AML compliance policies and procedures in place and to conduct site visits to ensure it is operating in compliance with BSA/AML requirements.
Another Southwest border bank representative told us they have stopped banking services for used clothing wholesalers who export their product to Mexico because they were unable to mitigate the risk associated with these types of businesses. They explained that these companies' business models involve many individuals crossing the U.S.-Mexico border to purchase pallets of clothing with cash for import to Mexico. The bank representative explained that the business model for this industry made it very hard to identify the source of the large volumes of cash. Other reasons Southwest border banks reported for limiting, or not offering, certain types of business accounts are shown in figure 6. Similar to the reasons given by Southwest border banks, the most common reason that non-Southwest border banks reported for limiting, or not offering, accounts to certain types of businesses considered high risk for money laundering and terrorist financing was that the customer type fell outside of the bank's risk tolerance.

Other Account Terminations and Limitations Raise Concerns about Derisking

The second most common reason—cited by 80 percent of Southwest border banks—for limiting, or not offering, accounts to certain types of businesses considered high risk for money laundering and terrorist financing was that the customer type drew heightened BSA/AML regulatory oversight—behavior that could indicate derisking. For example, representatives from one Southwest border bank explained that they no longer offer accounts to money services businesses because they want to be viewed favorably by their regulator. They added that banking these types of customers is very high risk for the bank with very little reward. Another bank that operates in the Southwest border region explained that rather than being able to focus on its own BSA/AML risk assessment and the performance of accounts, it feels pressured to make arbitrary decisions to close accounts based on specific concerns of its examiners.

Several Southwest border bank representatives also described how recent BSA/AML law enforcement and regulatory enforcement actions have caused them to become more conservative in the types of businesses for which they offer accounts. For example, representatives from one Southwest border bank we spoke with stated that many of the banks that do business in the Southwest border region have stopped servicing cross-border businesses due to a large enforcement action in which the allegations against the bank cited an ineffective AML program that exposed it to illicit United States/Mexico cross-border cash transactions. A representative from another Southwest border bank explained that his bank could have a large banking business in one of the state's border towns, but the bank has chosen not to provide services there because if BSA/AML compliance deficiencies are identified from servicing that area, the penalties could be high enough to shut down the whole bank.

In addition, while banks may terminate accounts because of SAR filings as a method to manage money laundering and terrorist financing risk and to comply with BSA/AML requirements, some of these terminations may be related to derisking. For example, some Southwest border bank representatives we spoke with as part of this review, as well as other banks and credit unions we spoke with in a previous review, told us that they have filed SARs to avoid potential criticism during examinations, not because they thought the observed activity was suspicious.
Non-Southwest border banks also commonly cited the inability to manage risk associated with the customer type and heightened regulatory oversight as reasons for limiting, or not offering, accounts. Our survey results and discussions with Southwest border bank representatives are consistent with what a senior Treasury official identified in a 2015 speech as causing correspondent banking and money services business account terminations. The speech noted that a number of interrelated factors may be resulting in the terminations, but that the most frequently mentioned reason related to efforts to comply with AML and terrorist financing requirements. In particular, banks raised concerns about (1) the cost of complying with AML and terrorist financing regulations, (2) uncertainty about supervisors’ expectations regarding what is appropriate due diligence, and (3) the nature of the enforcement and supervisory response if they get it wrong. The speech noted that banks said that they made decisions to close accounts not so much because they were unable to manage the illicit finance risks but because the costs associated with taking on those risks had become too high. It further stated that there is a gap between what supervisory agencies have said about the standards they hold banks to and banks’ assessment of those standards, and that there was still a perception among banks that supervisory and enforcement expectations lack transparency, predictability, and consistency. The senior Treasury official noted this perception feeds into higher anticipated compliance costs and when banks input this perceived risk into their cost-benefit analysis, it may eclipse the potential economic gain of taking on a new relationship. Southwest Border Bank Branch Closures Have Been Concentrated in a Small Number of Communities Counties in the Southwest border region have been losing bank branches since 2012, similar to national and regional trends, as well as trends in other high-risk financial crime or drug trafficking counties that are outside the region. Most of the 32 counties (18 counties or nearly 60 percent) comprising the Southwest border region did not lose bank branches from 2013 through 2016, but 5 counties lost 10 percent or more of their branches over this time period (see top panel of fig. 7). Those 5 counties are Cochise, Santa Cruz, and Yuma, Arizona; Imperial, California; and Luna, New Mexico. Within those counties we identified as having the largest percentage loss of branches, sometimes those losses were concentrated in smaller communities within the county (see bottom panel of fig. 7). For example, Calexico in Imperial County, California, lost 5 of its 6 branches from 2013 through 2016. In Santa Cruz County in Arizona, one zip code in Nogales accounted for all of the branch losses in the county from 2013 through 2016, losing 3 of its 9 branches. More generally, branch losses can vary substantially across different zip codes in a county (see for example bottom panel of fig. 7). In other instances, counties that lost a relatively small share of their branches can contain communities that lost a more substantial share—for example San Ysidro in San Diego County lost 5 of its 12 branches (about 42 percent) while the county as a whole lost only 5 percent of its branches from 2013 through 2016. Based on our analysis, counties losing branches in the Southwest border region tended to have substantially higher SAR filings, on average, than Southwest border region counties that did not lose branches. 
That is, counties that lost branches from 2013 through 2016 had about 600 SAR filings per billion dollars in deposits, on average, and counties that did not lose branches had about 60 SAR filings per billion dollars in deposits, on average (see fig. 8).

Empirical Evidence Suggests Demographic and Money Laundering-Related Risk Factors Are Drivers of Branch Closures

The econometric models we developed and estimated generally found that demographic and money laundering-related risk factors were important predictors of national bank branch closures. These models are subject to certain limitations, some of which we detail later in this section as well as in appendix III, and as such, we interpret the results with some degree of caution. In general, our results suggest that counties were more likely to lose branches, all else equal, if they were (1) urban, had a higher per capita personal income, and had a younger population (proportion under 45); or (2) designated as a HIFCA or HIDTA county, or had higher SAR filings. We term the latter three characteristics (HIFCA, HIDTA, and SAR filings) "money laundering-related risk factors." While our models are unable to definitively disentangle the causal effect of BSA/AML regulation on branch closures from these money laundering-related risk factors, the impact of the SAR variables, in particular, could reflect a combination of BSA/AML compliance effort and the underlying level of suspicious or money laundering-related activity in a county.

Our econometric models are based on all counties with bank branches in the United States and are designed to predict whether a county will lose a branch the following year based on the characteristics of the county. The models included demographic, economic, and money laundering-related risk factors that might have influenced branch closures nationally since 2010 (see app. III for additional information on our models). The demographic factors included in our models are Rural-Urban Continuum Codes, age profile (proportion of the county over 45), and the level of per capita income. We chose these demographic factors, in particular, because they are associated with the adoption of mobile banking, which may explain the propensity to close branches in a community. The economic factors included in our models—intended to reflect temporary or cyclical economic changes affecting the county—are the growth of per capita income, growth in building permits (a measure of residential housing conditions), and growth of the population. The money laundering-related risk factors, as described previously, are whether a county has been designated a HIFCA or a HIDTA and the level of suspicious or possible money laundering-related activity reported by bank branches in the county, as represented by SAR filings.

Demographic characteristics of counties were important predictors of branch closures. The demographic characteristics that predicted closures are the same ones associated with the adoption of mobile banking, so our results are consistent with the hypothesis that mobile banking is among the factors leading some banks to close branches. The most urban counties were about 22 percentage points more likely to lose one or more branches over the next year than the most rural counties. A county with 70 percent of the population under 45 was about 9 percentage points more likely to lose one or more branches over the next year than a county with half the population under 45.
A county with per capita income of $50,000 was about 7 percentage points more likely to lose one or more branches over the next year than a county with per capita income of $20,000.

Money laundering-related characteristics of a county were also important predictors of branch closures in our models. HIDTA counties were about 11 percentage points more likely to lose one or more branches over the next year than non-HIDTA counties (the effect in HIFCA counties is less significant statistically and smaller in magnitude). A county with 200 SARs filed per billion dollars in bank deposits was about 8 percentage points more likely to lose one or more bank branches over the next year than a county where no bank branch had filed a SAR. Southwest border bank officials we spoke with generally said that filing SARs was a time- and resource-intensive process, and that the number of SAR filings—to some extent—reflected the level of effort, and overall BSA compliance risk, faced by the bank. That said, the impact of SAR variables in our models could reflect a combination of (1) the extent of BSA/AML compliance effort and risk faced by the bank, as expressed by bank officials, and (2) the underlying level of suspicious or money laundering-related activity in a county.

Money laundering-related risk factors were likely to have been relatively more important drivers of branch closures in the Southwest border region because it had much higher SAR filings and a larger share of counties designated as HIDTAs than the rest of the country. More generally, given the characteristics of Southwest border counties and the rest of the United States, our models suggest that while demographic factors have been important drivers of branch closures in the United States overall, risks associated with money laundering were likely to have been relatively more important in the Southwest border region. Specifically, the Southwest border region is roughly as urban as the rest of the country, has a somewhat lower per capita income (about $35,000 in the Southwest border region versus about $41,000 elsewhere), and is somewhat younger on average (about 40 percent of the population is 45 and over in the Southwest border region versus about 45 percent elsewhere), but money laundering-related risk factors were relatively more prevalent, based on our measures, in the Southwest border region.

Southwest border bank representatives we interviewed told us they considered a range of factors when deciding whether or not to close a branch. For example, most Southwest border bank representatives that we spoke with about the reasons for branch closures (6 of 10) told us that BSA/AML compliance challenges were not part of the decision to close a branch. However, most Southwest border bank representatives said that the financial performance of the branch is one of the most important factors they consider when deciding to close a branch, and as described previously, BSA/AML compliance can be resource intensive, which may affect the financial performance of a branch. Further, nearly half of the Southwest border bank representatives we spoke with (4 of 10) did mention that BSA/AML compliance costs could be among the factors considered in determining whether or not to close a branch. In addition, at least one bank identified closing a branch as one option to address considerable BSA/AML compliance challenges.
Finally, some Southwest border bank representatives (3 of 10) also mentioned customer traffic in the branch or the availability of mobile banking as relevant to their decision to close a branch.

Select Border Communities Raised Concerns That Branch Closures and Account Terminations Reduced Economic Growth and Access to Banking Services

Communities we visited in Arizona, California, and Texas experienced multiple bank branch closures from 2013 through 2016. Some local banking customers who participated in the discussion groups we held in these communities also reported experiencing account terminations. While perspectives gathered from our visits to the selected cities cannot be generalized to all locations in Southwest border counties, stakeholders we spoke with noted that these closures affected key businesses and local economies and raised concerns about economic growth.

Border Communities We Visited Experienced Account Terminations and Branch Closures

According to some discussion group participants, local businesses, economic development specialists, and other stakeholders (border stakeholders) in the three Southwest border communities we visited, banks in their communities terminated the accounts of longtime established customers, sometimes without notice or explanation. They acknowledged that, because of their proximity to the U.S.-Mexico border, their communities were susceptible to money laundering-related activity, and they described how banks' increased efforts to comply with BSA/AML requirements may have influenced banks' decisions to terminate accounts. Each of the three Southwest border communities we visited—Nogales, Arizona; San Ysidro, California; and McAllen, Texas—also experienced multiple bank branch closures from 2013 through 2016 (see fig. 9). Our analysis shows that from 2013 through 2016, these communities lost a total of 12 bank branches, 9 of which were branches of large or extra-large banks, based on asset size. But the percentage of branch closures was more significant in locations that already had a limited number of branch options. For instance, Nogales (3 of its 9 branches closed) and San Ysidro (5 of its 12 branches closed) both lost a third or more of their bank branches, compared to McAllen, where approximately 6 percent of branches closed (4 of its 63 branches).

Account Terminations and Branch Closures Affected Key Southwest Border Businesses and Customers and Concerns about Limited Economic Growth Were Reported

According to border stakeholders we spoke with, businesses engaged in cross-border trade, cash-intensive businesses, and Mexican nationals—all significant parts of the border economy—were affected by account terminations and branch closures in the three communities we visited. For example, the cross-border produce industry accounts for almost 25 percent of jobs and wages in Nogales, according to a 2013 study prepared for Nogales Community Development. One produce business owner whose account was terminated told us the bank said that the volume of funds deposited into the account from her affiliated Mexican business created security risks the bank was no longer willing to sustain, and she was unable to negotiate with the bank to keep the account open. She said that it took almost 7 months to open a new account and that it involved coordination among bankers in multiple cities on both sides of the border.
While some produce businesses and economic development specialists we spoke with explained that some regional banks in their communities have opened accounts for some small- to medium-sized produce businesses, they still have concerns about the long-term effects of limited access to banking services on smaller produce firms. One economic development specialist explained that these small companies often rely on local banks for funding, which enables them to develop and bring innovation to the produce industry. Some discussion group participants who we spoke with also described challenges related to account terminations that cash-intensive businesses face in operating in the Southwest border region because of banks’ increased emphasis on BSA/AML compliance. They explained that cash transactions raised suspicions for banks because of their associated money laundering risk; however, cash is a prevalent payment source for legitimate businesses in the region. For example, one money services business owner who participated in our discussion group in San Ysidro said that because his business generates large volumes of cash, he struggles to keep a bank account as a result of banks’ oversight of and caution regarding cash transactions. He said his business account has been closed three times over the past 35 years and that banks have declined his requests to open an account at least half a dozen times. Similarly, another discussion group participant explained that companies that import automobiles into Mexico use cash to pay for cars in the United States and that trying to make these large cash deposits raised suspicions for U.S. banks. Border stakeholders we spoke with also described how challenges associated with branch closures and terminations of accounts of Mexican nationals affected the Southwest border communities we visited. Border communities like San Ysidro are home to retail businesses, such as restaurants and clothing stores. According to our analysis of Bureau of Transportation Statistics data, an average of almost 69,000 personal vehicle passengers and 25,000 pedestrians entered the United States daily in September 2017 through the San Ysidro land port of entry. Economic development specialists told us that these visitors spend money on goods and services in local border communities. For example, one economic development specialist in Arizona estimated that Mexican nationals spend about $1 billion in Pima County alone each year, and another one estimated that 70 percent of the sales taxes collected in Nogales are paid by Mexican customers who cross the border to shop. One of the specialists explained that Mexicans—both Mexican day travelers to Tucson, as well as those who own U.S. real estate and travel to the United States for other investment business—used to visit the region and withdraw money from their U.S. bank accounts and subsequently spend money in border communities. He explained that Mexican nationals find it easier to have U.S. bank accounts to use while visiting and shopping on the U.S. side of the border. However, some discussion group participants said that because Mexican nationals have faced difficulties maintaining U.S. bank accounts, they have made fewer trips across the border and engaged in less commerce, which has affected the economies in their communities. Some participants also said that branch closures have affected businesses’ sales volumes in their communities. 
For example, one participant said that when branches closed in the San Ysidro Boulevard area—which is at the base of the pedestrian border crossing—businesses have had difficulty thriving due to reduced foot traffic by customers. According to border stakeholders we spoke with, branch closures also resulted in fewer borrowing options and limited investment in the communities, which they thought hindered business growth. For example, one discussion group participant explained that middle-sized businesses, such as those with revenues of approximately $2 million–$25 million, have fewer borrowing options when branches closed in the community because the remaining regional and smaller banks may not have the capital to support the lending needs of businesses that size. One economic development specialist and some discussion group participants also suggested that branch closures limited opportunities for local business expansion when banks outside the community are reluctant to lend to them. For example, in Tucson, Arizona, one specialist said that small businesses are having difficulty getting loans, which affects the ability of businesses to grow. To fill the void, some local businesses have turned to alternative lending options, such as title loan companies, accounts receivable lending companies, and family members as alternative funding sources. Rigorous academic research we reviewed suggests that branch closures reduce small business lending and employment growth in the area immediately around the branch. Our analysis of branch closure data based on estimates from this research suggests closed branches in the communities we visited could have amounted to millions of dollars in reduced lending and hundreds of fewer jobs. For example, in McAllen, Texas, this research suggests that the loss of four bank branches could have reduced employment growth by over 400 jobs and small business lending by nearly $3.5 million. Discussion Group Participants in Communities We Visited Reported Reduced Access to Banking Services Some discussion group participants said that as a result of branch closures and account terminations in the Southwest border communities we visited, they traveled further to conduct banking activities, paid higher fees for new banking alternatives, and experienced difficulty completing banking transactions. Some participants told us that they had to travel further to their new banking location, which resulted in additional costs and inconvenience for customers. For instance, some participants in Nogales and San Ysidro said they had to travel 20 to 40 minutes further to the next closest bank branch, with one participant noting that this especially created difficulty for elderly bank customers. One discussion group participant said that when their local bank branch closed, they kept their account with that bank and traveled more than 70 miles to the next closest branch because they were afraid that they would not be able to open an account with another bank. Another participant also noted the additional cost of gas and time lost for other important matters as a result of traveling further to a branch. Other participants also noted that they experienced longer lines at their new branches because of the higher volume of customers from closed branches. Some participants also found that some banking alternatives were more expensive than their previous banking options when their accounts were terminated or a local branch closed. 
For instance, some discussion group participants said they paid higher fees at their new bank, and one participant mentioned that she received a lower interest rate on her deposits at her new bank. Some participants also mentioned that some banking alternatives they used, such as currency exchanges, were more expensive than their previous banking options.

Some discussion group participants also told us that they experienced difficulty completing banking transactions in their communities as a result of branch closures or banks' increased efforts to comply with BSA/AML requirements. For example, some participants from one discussion group session said that only an automated teller machine (ATM) was available in their community after their branch closed and it was not appropriate for all types of banking transactions. Further, some participants were unsatisfied with not being able to get in-person assistance from bank staff when their branch closed. For instance, one participant said that without a local branch, there were no nearby bank personnel to help her when the local ATM malfunctioned. Further, while acknowledging banks' need to comply with BSA/AML requirements, some discussion group participants explained that some banking transactions have become more difficult, citing banks' requirements for additional forms of identification and limitations placed on cash transactions. Some participants, many of whom were longtime customers of their bank, also noted their disapproval of banks' additional questioning and documentation requirements and said there was little acknowledgment by the bank of their value as legitimate customers or of the bank's existing knowledge about them. Some participants acknowledged that they did not experience this challenge because of the increasing availability of mobile banking options, which allow customers to complete some transactions without going to a physical branch location. As another example, one business owner said she mostly used online banking and has a check reader in her office that she uses to deposit checks directly into her business accounts.

Regulators Have Not Fully Assessed the BSA/AML Factors Influencing Banks to Reduce Services

The results of our survey (for both Southwest border banks and non-Southwest border banks) and discussions with Southwest border bank representatives indicate that banks are terminating accounts and limiting services, in part, as a way to manage perceived regulatory concerns about facilitating money laundering. In addition, the econometric models we developed and estimated generally found that money laundering-related risk factors, which could reflect, in part, BSA/AML compliance effort and risks, were an important predictor of national bank branch closures and were likely to have been relatively more important in the Southwest border region. Regulators have taken some actions in response to derisking, including issuing guidance and conducting some agency reviews. Regulators have also conducted retrospective reviews on some BSA/AML requirements. However, regulators have taken limited steps aimed at addressing how banks' regulatory concerns and BSA/AML compliance efforts may be influencing banks to engage in derisking or close branches.
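To make the structure of the county-level econometric models recapped above more concrete, the following is a minimal sketch, using synthetic data, of how a binary model of branch loss might be specified and how average marginal effects (the percentage-point differences reported earlier) can be read from it. The variable names, the synthetic data-generating process, and the simple logit specification are illustrative assumptions; they are not the actual models, data, or estimates described in appendix III.

```python
# A minimal sketch, on synthetic data, of a county-level model predicting whether a
# county loses one or more bank branches the following year. Variable names and the
# logit specification are illustrative assumptions, not the report's actual model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 3000  # roughly the number of U.S. counties with bank branches

counties = pd.DataFrame({
    "urban":             rng.integers(0, 2, n),       # 1 = most urban (simplified from RUCC)
    "share_under_45":    rng.uniform(0.40, 0.70, n),  # age profile
    "income_per_capita": rng.uniform(20_000, 60_000, n),
    "hidta":             rng.integers(0, 2, n),       # money laundering-related risk factor
    "sars_per_bil_dep":  rng.gamma(2.0, 60.0, n),     # SAR filings per $1 billion in deposits
})

# Synthetic outcome: probability of losing a branch rises with urbanicity, youth,
# income, HIDTA designation, and SAR intensity (signs chosen to mirror the findings).
linear_index = (-3.0 + 0.9 * counties["urban"] + 2.0 * counties["share_under_45"]
                + 0.00002 * counties["income_per_capita"] + 0.5 * counties["hidta"]
                + 0.004 * counties["sars_per_bil_dep"])
counties["lost_branch"] = rng.binomial(1, 1 / (1 + np.exp(-linear_index)))

# Fit a logit of branch loss on the county characteristics.
exog = sm.add_constant(counties[["urban", "share_under_45", "income_per_capita",
                                 "hidta", "sars_per_bil_dep"]])
result = sm.Logit(counties["lost_branch"], exog).fit(disp=0)

# Average marginal effects: the change in the probability of losing a branch for a
# one-unit change in each factor, comparable to the percentage-point figures above.
print(result.get_margeff(at="overall").summary())
```

Because the coefficients of a logit are not directly interpretable as probabilities, marginal effects are the natural way to express findings such as a county being about 11 percentage points more likely to lose a branch.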
Regulators Have Issued Guidance and Taken Some Actions Related to Derisking

FinCEN and the federal banking regulators have responded to concerns about derisking on a national level by issuing guidance to banks and conducting some evaluations within their agencies to understand the extent to which derisking is occurring. The guidance issued by regulators has been aimed at clarifying BSA/AML regulatory expectations and discouraging banks from terminating accounts without evaluating risk presented by individual customers or banks' abilities to manage risks. The guidance has generally encouraged banks to use a risk-based approach to evaluate individual customer risks and not to eliminate entire categories of customers.

Some of the guidance issued by regulators attempted to clarify their expectations specifically for banks' offering of services to money services businesses. For example, in March 2005, the federal banking regulators and FinCEN issued a joint statement on providing banking services to money services businesses to clarify the BSA requirements and supervisory expectations as applied to accounts opened or maintained for this type of customer. The statement acknowledged that money services businesses were losing access to banking services as a result of concerns about regulatory scrutiny, the risks presented by these types of accounts, and the costs and burdens associated with maintaining such accounts. In addition, in November 2014, OCC issued a bulletin which explained that OCC-supervised banks are expected to assess the risks posed by an individual money services business customer on a case-by-case basis and to implement controls to manage the relationship commensurate with the risks associated with each customer. More recently, Treasury and the federal banking regulators issued a joint fact sheet on foreign correspondent banking which summarized key aspects of federal supervisory and enforcement strategy and practices in the area of correspondent banking.

In addition to issuing guidance, FDIC and OCC have taken some steps aimed at trying to determine why banks may be terminating accounts because of perceived regulatory concerns. For example, in January 2015, FDIC issued a memorandum to examiners establishing a policy that examiners document and report instances in which they recommend or require banks to terminate accounts during examinations. The memorandum noted that recommendations or requirements to terminate accounts must be made and approved in writing by the Regional Director before being provided to and discussed with bank management and the board of directors. As of December 2017, FDIC officials stated that there were no instances of recommendations or requirements for account terminations being documented by examiners.

In 2016, OCC reviewed how the institutions it supervises develop and implement policies and procedures for evaluating customer risks as part of their BSA/AML programs and for making risk-based determinations to close customer accounts. OCC focused its review on certain large banks' evaluation of risk for foreign correspondent bank accounts. This effort resulted in OCC issuing guidance to banks on periodic evaluation of the risks of foreign correspondent accounts. The guidance describes corporate governance best practices for banks' consideration when conducting these periodic evaluations of risk and making account retention or termination decisions on their foreign correspondent accounts.
Further, OCC's Fiscal Year 2018 Bank Supervision Operating Plan noted that examiners should be alert to banks' BSA/AML strategies that may inadvertently impair financial inclusion. However, as of September 2017, OCC officials stated that the agency had not identified any concerns related to financial inclusion.

Treasury and the federal banking regulators have also participated in a number of international activities related to concerns about the decline in the number of correspondent banking and money services business accounts. For example, FDIC, OCC, and the Federal Reserve participate in the Basel Committee on Banking Supervision's Anti-Money Laundering/Counter Financing of Terrorism Experts Group. Recent efforts of the group involved revising guidelines to update and clarify correspondent banking expectations. Treasury leads U.S. engagement with the Financial Action Task Force (FATF)—an intergovernmental body that sets standards for combating money laundering, financing of terrorism, and other related threats to the integrity of the international financial system—which has issued guidance on correspondent banking and money services businesses. Treasury also participates in the efforts to combat derisking that are occurring through the Financial Stability Board's Correspondent Banking Coordination Group, the Global Partnership for Financial Inclusion, and the International Monetary Fund.

The federal banking regulators also met with residents and businesses in the Southwest border region to discuss concerns related to derisking in the region. For example, FDIC officials hosted a BSA/AML workshop in Nogales, Arizona, in 2015 for banks, businesses, trade organizations, and others. Officials from the Federal Reserve and OCC also participated in the workshop, during which the regulators sought to clarify BSA/AML regulatory requirements and expectations. In addition, OCC officials told us that they met with representatives of the Fresh Produce Association of the Americas, who had concerns about banks not providing services in the region. During the meeting, OCC officials spoke to the produce industry representatives about various money laundering schemes and the role of the agency's examiners.

BSA/AML Regulatory Reviews Have Not Evaluated All Factors Influencing Banks to Derisk and Close Branches

Evaluation of BSA/AML regulations and their implementation is essential to ensuring the integrity of the financial system while facilitating financial inclusion. Without oversight of regulations after implementation, they might prove to be less effective than expected in achieving their intended goals, become outdated, or create unnecessary burdens. Regulations may also change the behaviors of regulated entities and the public in ways that cannot be predicted prior to implementation. Some regulators and international standard setters recognize that establishing a balanced BSA/AML regulatory regime is challenging. For example, in a 2016 speech, then-Comptroller of the Currency Thomas Curry stated that preventing money laundering and terrorist financing are important goals, but that a banking system that is truly safe and sound must also meet the legitimate needs of its customers and communities. FinCEN officials also told us that while the agency's mission is to safeguard the financial system from illicit use and combat money laundering, they also must be cautious that their efforts do not prevent people from using the system.
Further, FATF acknowledged that AML and counter-terrorism financing safeguards can affect financial inclusion efforts. FATF explained that applying an overly cautious approach to safeguards for money laundering and terrorist financing can have the unintended consequence of excluding legitimate businesses and consumers from the formal financial system. Executive orders encourage and legislation requires agencies to review existing regulations to determine whether they should be retained, amended, or rescinded, among other things. Retrospective reviews of existing rules help agencies evaluate how existing regulations are working in practice. A retrospective review is an important tool that may reveal that an existing rule—while needed—has not operated as well as expected, and that changes may be warranted. Retrospective reviews seek to make regulatory programs more effective or less burdensome in achieving their regulatory objectives. Many recent presidents have directed agencies to evaluate or reconsider existing regulations. For example, in 2011 President Obama issued Executive Orders 13563 and 13579. Among other provisions, Executive Orders 13563 and 13579 require executive branch agencies and encourage independent regulatory agencies, such as the federal banking regulators, respectively, to develop and implement retrospective review plans for existing significant regulations. Further, the Trump Administration has continued to focus on the need for agencies to improve regulatory effectiveness while reducing regulatory burdens. Executive Order 13777, issued by President Trump in February 2017, also reaffirms the objectives of previous executive orders and directs agency task forces to identify regulations which, among other criteria, are outdated, unnecessary, or ineffective. In addition to the executive orders, the Economic Growth and Regulatory Paperwork Reduction Act (EGRPRA) requires federal banking regulators to review the regulations they prescribe not less than once every 10 years and request comments to identify outdated, unnecessary, or unduly burdensome statutory or regulatory requirements. FinCEN and Federal Banking Regulators’ BSA/AML Retrospective Reviews FinCEN and the federal banking regulators have all participated in retrospective reviews of different parts of the BSA/AML regulations. For example, FinCEN officials told us that they review each new or significantly amended regulation to assess its clarity and effectiveness within 18 months of its effective date. Each assessment is targeted to the specific new regulation, or significant change to existing regulations, and a determination is made on how best to evaluate its effectiveness. FinCEN officials explained that the agency consistently receives feedback from all of the relevant stakeholders, including law enforcement, regulated entities, relevant federal agencies, and the public, which informs their retrospective reviews. Based on the specific findings of an assessment, FinCEN considers whether to publish guidance or whether additional rule making is required. For example, FinCEN officials explained that they revised the money services business definitions to adapt to evolving industry practice as part of the regulatory review process. As part of fulfilling their requirements under EGRPRA, the federal banking regulators—through the Federal Financial Institutions Examination Council (FFIEC)—have also participated in retrospective reviews of BSA/AML regulations. 
As part of the 2017 EGRPRA review, FFIEC received several public comments on BSA/AML requirements, including comments about increasing the thresholds for filing CTRs and SARs and about the overall increasing cost and burden of BSA compliance. The federal banking regulators referred the comments to FinCEN. FinCEN is not a part of the EGRPRA review and is not required to consider the comments; however, in its response in the 2017 EGRPRA report, the agency stated that it finds the information helpful when assessing BSA requirements. FinCEN officials and the federal banking regulators stated that the agencies are working to address the BSA-related EGRPRA comments—particularly those related to CTR and SAR filing requirements—through the BSA Advisory Group (BSAAG), which established three subcommittees to address some of the concerns raised during the EGRPRA process. One subcommittee is reviewing the metrics used by industry, law enforcement, and FinCEN to assess the value and effectiveness of BSA reporting. Another subcommittee is focusing on how SAR filing requirements could be streamlined or reduced while maintaining the value of the data, and the third subcommittee is focusing on issues related to the filing of CTRs. FinCEN and the federal banking regulators are also considering, through the advisory group, the EGRPRA comments that involve the supervisory process and expectations related to BSA examinations of financial institutions. FinCEN officials stated that there have been significant discussions during two BSAAG meetings since the 2017 EGRPRA report was issued and that, as of November 2017, all of these efforts are ongoing. In addition to the BSAAG, regulators also told us that the FFIEC BSA/AML working group has discussed EGRPRA and other compliance burden issues at its recent meetings and is trying to promote BSA examination consistency through its monthly meetings and with the interagency FFIEC BSA/AML examination manual. The actions FinCEN and the federal banking regulators have taken related to derisking—issuing guidance, conducting internal agency reviews, and meeting with affected Southwest border residents—have not been aimed at addressing and, if possible, ameliorating the full range of factors that influence banks to engage in derisking, in particular banks' regulatory concerns and BSA/AML compliance efforts. Further, the actions regulators have taken to address concerns raised in BSA/AML retrospective reviews have focused primarily on the burden resulting from the filing of CTRs and SARs, but again, these actions have not evaluated how regulatory concerns may influence banks to engage in derisking or close branches. Federal internal control standards call for agencies to analyze and respond to risks to achieving their objectives. Further, guidance implementing Executive Orders 13563 and 13579 states that agencies should consider conducting retrospective reviews on rules that unanticipated circumstances have overtaken. Our evidence shows that derisking may be an unanticipated response from the banking industry to BSA/AML regulations and their implementation. For example, our evidence demonstrates that banks not only terminate or limit customer accounts as a way to address legitimate money laundering and terrorist financing threats, but also, in part, as a way to manage regulatory concerns. Further, our econometric models and discussions with bank representatives suggest that BSA/AML compliance costs and risks can play a role in the decision to close a branch. 
The actions FinCEN and the federal banking regulators have taken to address derisking and the retrospective reviews that have been conducted have not been broad enough to evaluate all of the BSA/AML factors banks consider when they derisk or close branches, including banks’ regulatory concerns which may influence their willingness to provide services. Without assessing the full range of BSA/AML factors that may be influencing banks to derisk or close branches, FinCEN, the federal banking regulators, and Congress do not have the information they need to determine if adjustments are needed to ensure that the BSA/AML regulations and their implementation are achieving their regulatory objectives in the most effective and least burdensome way. Conclusions BSA/AML regulations promote the integrity of the financial system by helping a number of regulatory and law enforcement agencies detect money laundering, drug trafficking, terrorist financing, and other financial crimes. As with any regulation, oversight after implementation is needed to ensure the goals are being achieved and that unnecessary burdens are identified and ameliorated. The collective findings from our work indicate that BSA/AML regulatory concerns have played a role in banks’ decisions to terminate and limit accounts and close branches. However, the actions taken to address derisking by the federal banking regulators and FinCEN and the retrospective reviews conducted on BSA/AML regulations have not fully considered or addressed these effects. Retrospective reviews help agencies evaluate how existing regulations are working in practice and can assist to make regulatory programs more effective or less burdensome in achieving their regulatory objectives. BSA/AML regulations have helped to detect money laundering and other financial crimes, but there are also real concerns about the unintended effects, such as derisking, that these regulations and their implementation may be having. While it is important to evaluate how effective BSA/AML regulations are in helping to identify money laundering, terrorist financing, and other financial crimes, it is also important to identify and attempt to address any unintended outcomes. We have found that reduced access to banking services can have consequential effects on local communities. However, without evaluating how banks’ regulatory concerns may be affecting their decisions to provide services, the federal banking regulators, FinCEN, and Congress do not have the information to determine if BSA/AML regulations and their implementation can be made more effective or less burdensome in achieving their regulatory objectives. Recommendations for Executive Action We are making four recommendations to FinCEN and the three federal banking regulators in our review—FDIC, the Federal Reserve, and OCC—to jointly conduct a retrospective review of BSA/AML regulations and their implementation for banks. The Director of FinCEN should jointly conduct a retrospective review of BSA/AML regulations and their implementation for banks with FDIC, the Federal Reserve, and OCC. This review should focus on how banks’ regulatory concerns may be influencing their willingness to provide services. In conducting the review, FDIC, the Federal Reserve, OCC, and FinCEN should take steps, as appropriate, to revise the BSA regulations or the way they are being implemented to help ensure that BSA/AML regulatory objectives are being met in the most effective and least burdensome way. 
(Recommendation 1) The Chairman of FDIC should jointly conduct a retrospective review of BSA/AML regulations and their implementation for banks with the Federal Reserve, OCC, and FinCEN. This review should focus on how banks’ regulatory concerns may be influencing their willingness to provide services. In conducting the review, FDIC, the Federal Reserve, OCC, and FinCEN should take steps, as appropriate, to revise the BSA regulations or the way they are being implemented to help ensure that BSA/AML regulatory objectives are being met in the most effective and least burdensome way. (Recommendation 2) The Chair of the Federal Reserve should jointly conduct a retrospective review of BSA/AML regulations and their implementation for banks with FDIC, OCC, and FinCEN. This review should focus on how banks’ regulatory concerns may be influencing their willingness to provide services. In conducting the review, FDIC, the Federal Reserve, OCC, and FinCEN should take steps, as appropriate, to revise the BSA regulations or the way they are being implemented to help ensure that BSA/AML regulatory objectives are being met in the most effective and least burdensome way. (Recommendation 3) The Comptroller of the Currency should jointly conduct a retrospective review of BSA/AML regulations and their implementation for banks with FDIC, the Federal Reserve, and FinCEN. This review should focus on how banks’ regulatory concerns may be influencing their willingness to provide services. In conducting the review, FDIC, the Federal Reserve, OCC and FinCEN should take steps, as appropriate, to revise the BSA regulations or the way they are being implemented to help ensure that BSA/AML regulatory objectives are being met in the most effective and least burdensome way. (Recommendation 4) Agency Comments and Our Evaluation We provided a draft of this report to CFPB, the Department of Justice, the Federal Reserve, FDIC, Treasury/FinCEN, and OCC. The Federal Reserve, FDIC, and OCC provided written comments that have been reproduced in appendixes IV–VI, respectively. Treasury/FinCEN did not provide a written response to the report. FDIC, Treasury/FinCEN, and OCC provided technical comments on the draft report, which we have incorporated, as appropriate. CFPB and the Department of Justice did not have any comments on the draft of this report. In their written responses, the Federal Reserve, FDIC, and OCC agreed to leverage ongoing interagency work reviewing BSA/AML regulations and their implementation for banks to address our recommendation. We agree that using existing interagency efforts is an appropriate means for conducting a retrospective review of BSA/AML regulations that focuses on evaluating how banks’ BSA/AML regulatory concerns may be influencing their willingness to provide services. The Federal Reserve, FDIC, and OCC also raised concerns with some of the findings of our report and the methodologies we used. For example, in their responses, each agency discussed that the report did not take into consideration the extent to which law enforcement activities may be a driver of account terminations and branch closures in the Southwest border region. In response to this comment, we added some information to the report that we received from law enforcement officials about instances in which some account terminations were the result of law enforcement’s identification of suspicious accounts. 
This type of account termination, however, is not included in our definition of the term "derisking," because such terminations are consistent with BSA/AML purposes. In addition, when we discuss the role that enforcement actions have played in making Southwest border banks more conservative in their account offerings, we have clarified the language to ensure it encompasses both regulatory enforcement actions taken by the federal banking regulators and criminal enforcement actions taken by law enforcement agencies. Treasury/FinCEN's technical comments also noted that the report did not take into consideration the 2010 Mexican exchange control regulations and their subsequent changes, which it considers to be the most important catalyst of changes to BSA risk profiles for banks in the Southwest border region. To address this comment, we added language describing these regulations and their potential effects on Southwest border banks. In its written response, the Federal Reserve stated that the report does not find a causal linkage between the agency's regulatory oversight and derisking decisions made by some banks that operate along the Southwest border (see app. IV). OCC made a similar comment in its technical comments on the draft report. While the methodologies used in our report, which included a nationally representative survey of banks, econometric modeling of potential drivers of branch closures, and discussions with bank representatives, do not on their own allow us to make a definitive causal linkage between regulation and derisking, the collective evidence we gathered indicates that banks' BSA/AML regulatory concerns have played a role in their decisions to terminate and limit accounts and close branches. We believe that, based on this evidence, further examination by the federal banking regulators and FinCEN into how banks' perceived regulatory concerns are affecting their offering of services is warranted. OCC's written response noted that the definition of derisking we used is inconsistent with definitions used by other regulatory bodies and that our definition encompasses a wide range of situations in which banks limit certain services or end customer relationships (see app. VI). Treasury/FinCEN also made a similar comment in its technical comments on the draft report. OCC's letter notes that FATF and the World Bank define derisking as situations in which financial institutions terminate or restrict business relationships with entire countries or classes of customers in order to avoid, rather than to manage, AML-related risks. We, however, defined derisking for the purposes of our report as the practice of banks limiting certain services or ending their relationships with customers to, among other things, avoid perceived regulatory concerns about facilitating money laundering, because this definition best described the bank behavior we wanted to examine. While we recognize that there are narrower definitions of derisking that focus solely on the treatment of entire countries or classes of customers, we chose to focus on banks' perceived regulatory concerns because these concerns could influence banks' decisions to provide services in a variety of ways. Moreover, including perceived regulatory concerns as a factor enabled us to examine whether there were ways the federal regulators may be able to improve the implementation of BSA/AML to reduce the effects of derisking on different populations of banking customers. 
Furthermore, our definition is broader and allows us to include individual decisions banks make to terminate or limit accounts, as well as whole categories of customer accounts. Our decision to define derisking in this manner was based on, among other things, discussions we had with representatives of Southwest border banks who indicated such behavior was occurring. We added additional information on the definition of derisking we chose to our scope and methodology section (see app. I). OCC’s response letter also notes that because we focus exclusively on BSA/AML regulatory issues, the report does not take into consideration other reasons that banks terminate account relationships. We recognize that banks may terminate accounts for a variety of reasons, some of which are not related to BSA/AML regulatory issues. However, because the focus of our review was to determine why banks are terminating accounts for BSA/AML regulatory reasons, we did not seek to identify all the potential reasons banks may terminate accounts. Finally, OCC’s letter states that the agency has concerns regarding our econometric analysis and the conclusions that can be drawn from it. FDIC made similar comments in its technical comments on the draft report. In response to these comments, we have clarified how we interpret the effect of money laundering-related risk in our models. We agree that the econometric results on their own do not provide definitive evidence that regulatory burden is causing branch closures, but our econometric models and discussions with bank representatives together suggest that BSA/AML compliance costs and risks can play a role in the decision to close a branch. FDIC’s written letter states that the report does not distinguish account or branch closures resulting from suspected money laundering or other illicit financial transactions from closures that may have resulted from ineffective or burdensome regulations. In response to this concern, we revised language in the report to ensure that we do not imply that instances in which banks limit services or terminate relationships based on credible evidence of suspicious or illegal activity reflects derisking behavior. As noted above, we also clarified how we interpret the effect of money laundering-related risk on branch closures in our models and recognize that our econometric results alone do not provide definitive evidence that regulatory burden is causing branch closures. However, our econometric models coupled with discussions we had with bank representatives suggest that BSA/AML compliance costs and risks can play a role in the decision to close a branch. FDIC’s letter also stated that our report highlighted that 1 in 10 branch closures may be due to “compliance challenges.” This statement is incorrect. The report states that nearly half of the Southwest border bank representatives (4 of 10) we spoke with mentioned that BSA/AML compliance costs could be among the factors considered in whether or not to close a branch. Further, we identified one bank that considered closing a branch as an option to address considerable BSA/AML compliance challenges. In addition, most Southwest border bank representatives we spoke with said that the financial performance of the branch is one of the most important factors they consider when deciding to close a branch, and as we describe in the report, BSA/AML compliance can be resource intensive, which may affect the financial performance of a branch. 
We are sending copies of this report to the appropriate congressional committees, the Director of Financial Crimes Enforcement Network, the Chairman of the Federal Deposit Insurance Corporation, the Chair of the Board of Governors of the Federal Reserve System, the Comptroller of the Currency, the Attorney General, the Acting Director of the Bureau of Consumer Financial Protection, and other interested parties. The report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or evansl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are listed on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. Appendix I: Objectives, Scope, and Methodology The objectives of this report were to (1) describe the types of heightened Bank Secrecy Act/anti-money laundering (BSA/AML) compliance risks that Southwest border banks may face and the BSA/AML compliance challenges they may experience; (2) determine the extent to which banks are terminating accounts and closing bank branches in the Southwest border region and their reasons for any terminations or closures; (3) describe what Southwest border banking customers and others told us about any effects of account terminations and branch closures on Southwest border communities; and (4) evaluate how the Department of the Treasury’s (Treasury) Financial Crimes Enforcement Network (FinCEN) and the federal banking regulators—the Board of Governors of the Federal Reserve System (Federal Reserve), Federal Deposit Insurance Corporation (FDIC), and Office of the Comptroller of the Currency (OCC)—have assessed and responded to concerns about derisking in the Southwest border region and elsewhere, and the effectiveness of those efforts. We defined “derisking” to mean the practice of banks limiting certain services or ending their relationships with customers to, among other things, avoid perceived regulatory concerns about facilitating money laundering. We developed this definition by reviewing various existing definitions used by international banking industry standard setters and others, including the Financial Action Task Force (FATF)—an intergovernmental body that, among other things, sets standards for combating money laundering; the Bank for International Settlements; the World Bank; and the Global Partnership for Financial Inclusion. We also reviewed guidance and other documentation issued by the federal banking regulators, Treasury, and FinCEN; research reports on derisking; an industry survey; and testimonial evidence from several banks we interviewed. The methodologies we used allowed us to gather information on a variety of factors that may be causing banks to limit services, while our definition of derisking allowed us to focus on the role played by the federal regulators in implementing BSA/AML requirements. We defined the Southwest border region as all counties that have at least 25 percent of their landmass within 50 miles of the U.S.-Mexico border. Thirty-three counties fell within this definition. 
They are: Cochise, Pima, Santa Cruz, and Yuma, Arizona; Imperial and San Diego, California; Dona Ana, Hidalgo, and Luna, New Mexico; and Brewster, Brooks, Cameron, Culberson, Dimmit, Edwards, El Paso, Hidalgo, Hudspeth, Jeff Davis, Jim Hogg, Kenedy, Kinney, La Salle, Maverick, Presidio, Starr, Terrell, Uvalde, Val Verde, Webb, Willacy, Zapata, and Zavala, Texas. We excluded credit unions from the scope of our review based on discussions with and information received from the National Credit Union Administration (NCUA)—which oversees credit unions for compliance with BSA/AML requirements—and two regional credit union groups that cover the Southwest border states. These groups noted that neither branch closures nor account terminations by credit unions were prevalent in the Southwest border region. To describe the types of heightened BSA/AML compliance risks that Southwest border banks may face and the BSA/AML compliance challenges they may experience, we analyzed data from FinCEN on the volume of Suspicious Activity Reports (SAR) and Currency Transaction Reports (CTR) filed by bank branches in Southwest border counties and compared the volume of those filings to filings in similar geographic areas outside the Southwest border region from 2014 through 2016. To adjust for variances in the size of counties, which may be reflected in the number of SAR and CTR filings by counties, we standardized the quantity of SARs and CTRs filed by county by calculating the number of SAR and CTR filings per billion dollars in bank branch deposits. We used data from FDIC’s Summary of Deposits database for information on bank branch deposits. To construct comparison groups that were comparable along some key dimensions, we matched Southwest border counties to counties with the same 2013 Rural-Urban Continuum Code (RUCC), which measures how urban or rural a county is, and by population if there was more than one potential matching county. We undertook this process for two comparison groups, one for counties in Southwest border states, but not directly on the U.S.-Mexico border, and one for counties outside the Southwest border states that were designated as High Intensity Financial Crimes Areas (HIFCA) or High Intensity Drug Trafficking Areas (HIDTA). In addition, we analyzed data on BSA/AML bank examination violations using nonpublic data provided by FDIC, OCC, and the Federal Reserve from January 2009 through June 2016. We obtained data for all Southwest border banks (if they had been cited for a BSA/AML compliance violation during the period we reviewed), as well as aggregated data for all banks in the United States that received a BSA/AML compliance violation during the period we reviewed. Because each regulator categorized violations differently, we developed a set of categories to apply to violations across all three regulators. We analyzed the distribution of violations by category. In addition, we analyzed data on BSA/AML informal enforcement actions provided by the federal banking regulators and formal BSA/AML enforcement actions taken by the federal banking regulators and FinCEN from January 2009 through June 2016. We also reviewed documentation from BSA/AML examinations of selected Southwest border banks to gain additional context about BSA/AML violations. We also interviewed representatives from 19 Southwest border banks. Using data from FDIC’s Summary of Deposits database, we identified all Southwest border banks as of June 30, 2016. We then selected banks to interview in the following ways. 
First, we interviewed four of the five largest Southwest border banks (based on asset size). Second, as part of our site visits to communities in the Southwest border region (described below), we interviewed nine Southwest border banks that operate in or near the communities we visited— Nogales, Arizona; San Ysidro, California; and McAllen, Texas. We selected banks in these communities based on the following criteria: (1) the number of branches the bank operates in the Southwest border region, focusing on banks that operate only a few branches in the region; (2) the size of the bank based on assets; and (3) the bank’s primary federal regulator. We focused our selection on banks that operate fewer branches in the region because we interviewed four of the five largest banks in the region that operate many branches in the region. To the extent that a bank was located in the community and willing to speak with us, we interviewed at least one bank that was regulated by each federal banking regulator (Federal Reserve, FDIC, and OCC). Third, we interviewed six additional Southwest border banks as part of the development of our bank survey (described in more detail below) and also asked them questions related to their efforts to comply with BSA/AML requirements. We selected these banks using the same criteria we used for the selection of banks in our site visit communities: the bank’s primary federal regulator, size of the bank (based on assets), and number of branches. For the interviews, we used a semistructured interview protocol, and responses from bank officials were open-ended to allow for a wide variety of perspectives and responses. Responses from these banks are not generalizable to all Southwest border banks. In addition to the interviews with banks, we also interviewed officials from FDIC, Federal Reserve, and OCC, as well as BSA/AML examination specialists from each federal banking regulator to gain their perspectives on the risks faced by banks in the Southwest border region. To determine the extent to which banks are terminating accounts in the Southwest border region and the reasons for the terminations, we administered a web-based survey to a nationally representative sample of banks to obtain information on bank account terminations for reasons related to BSA/AML risk. In the survey, we asked banks about limitations and terminations of accounts related to BSA/AML risk, the types of customer categories being limited or terminated, and the reasons for these decisions. We administered the survey from July 2017 to September 2017, and collected information for the 3-year time period of January 1, 2014, to December 31, 2016. Appendix II contains information on the survey results. To identify the universe of banks, we used data from FDIC’s Statistics on Depository Institutions database. Our initial population list contained 5,922 banks downloaded from FDIC’s Statistics on Depository Institutions database as of December 31, 2016. We stratified the population into five sampling strata and used a stratified random sample. First, banks that did not operate in the Southwest border region (non-Southwest border banks) were stratified into four asset sizes (small, medium, large, and extra- large). Second, to identify the universe of Southwest border banks, we used FDIC’s Summary of Deposits database as of June 30, 2016. This is a hybrid stratification scheme. 
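To illustrate the hybrid stratification scheme just described, the sketch below shows one way such a sample could be drawn in Python with pandas. The DataFrame, column names, asset-size cutoffs, and per-stratum sample sizes are illustrative assumptions, not the actual FDIC data or the allocations GAO used.

```python
import pandas as pd

# Minimal sketch of the hybrid stratification described above.
# 'banks' is assumed to have one row per institution and hypothetical columns:
# 'total_assets' and 'sw_border' (True if the bank operates in the Southwest
# border region).

def assign_stratum(row, asset_breaks):
    """Place each bank into one of five strata."""
    if row["sw_border"]:
        return "southwest_border"          # certainty stratum (all such banks)
    small, medium, large = asset_breaks    # assumed asset-size cutoffs
    if row["total_assets"] <= small:
        return "small"
    if row["total_assets"] <= medium:
        return "medium"
    if row["total_assets"] <= large:
        return "large"
    return "extra_large"                   # certainty stratum (all such banks)

def draw_sample(banks, sizes, asset_breaks, seed=0):
    banks = banks.copy()
    banks["stratum"] = banks.apply(assign_stratum, axis=1, args=(asset_breaks,))
    pieces = []
    for stratum, group in banks.groupby("stratum"):
        if stratum in ("southwest_border", "extra_large"):
            pieces.append(group)                    # take with certainty
        else:
            n = min(sizes[stratum], len(group))     # allocated sample size
            pieces.append(group.sample(n=n, random_state=seed))
    return pd.concat(pieces)

# Example call (illustrative values only):
# sample = draw_sample(banks,
#                      sizes={"small": 120, "medium": 110, "large": 100},
#                      asset_breaks=(0.25e9, 1e9, 10e9))
```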
Our initial sample size allocation was designed to achieve a stratum-level margin of error no greater than plus or minus 10 percentage points for an attribute level at the 95 percent level of confidence. Based upon prior surveys of financial institutions, we assumed a response rate of 75 percent to determine the sample size for the asset size strata. Because there are only 17 extra-large banks in the population, we included all of them in the sample. We also included the entire population of 115 Southwest border banks as a separate certainty stratum. We reviewed the initial population list of banks in order to identify nontraditional banks not eligible for this survey. We treated nontraditional banks as out-of- scope. We also reviewed the initial population list to determine whether subsidiaries of the same holding company should be included separately in the sample. In addition, during the administration of our survey, we identified six banks that had been bought and acquired by another bank, as well as one additional bank that was nontraditional and, therefore, not eligible for this survey. We treated these sample cases as out-of-scope; this adjusted our population of banks to 5,805 and reduced our sample size to 406. We obtained a weighted survey response rate of 46.5 percent. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (for example, plus or minus 7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Confidence intervals are provided along with each sample estimate in the report. All survey results presented in the body of this report are generalizable to the estimated population of 5,805 in-scope depository institutions, except where otherwise noted. The practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question or sources of information available to respondents can introduce unwanted variability into the survey results. We took steps in developing the questionnaire, collecting the data, and analyzing the results to minimize such nonsampling error. To inform our methodology approach and our survey development, we conducted interviews with representatives from seven selected Southwest border banks. From these interviews, we gathered information on the type and amount of data banks keep on account terminations for reasons related to BSA/AML risk. The selection process used to identify these banks is described above. We conducted pretests of the survey with four banks. We selected these banks to achieve variation in geographic location (within and outside the Southwest border region) and asset size (small, large, extra large). The pretests of the survey were conducted to ensure that survey questions were clear, to obtain any suggestions for clarification, and to determine whether representatives would be able to provide responses to questions with minimal burden. We also interviewed the federal banking regulators; federal, state, and local law enforcement officials; and bank industry associations, to obtain their perspectives on banks’ experience with account terminations. 
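As a rough illustration of the sample-size arithmetic implied by a plus or minus 10 percentage point target at the 95 percent confidence level and an assumed 75 percent response rate, the sketch below computes a per-stratum allocation for a proportion. The population size in the example, the use of p = 0.5, and the simple finite population correction are assumptions for illustration, not the exact worksheet GAO used.

```python
import math

def stratum_sample_size(population, margin=0.10, z=1.96, p=0.5, response_rate=0.75):
    """Approximate per-stratum sample size for estimating a proportion.

    Targets a +/- `margin` margin of error at roughly the 95 percent confidence
    level (z = 1.96), assumes the most conservative proportion p = 0.5, applies
    a finite population correction, and inflates for an expected response rate.
    These inputs are illustrative assumptions.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n / response_rate)         # inflate for expected nonresponse

# Example: a hypothetical stratum of 1,500 banks.
# Prints 121: select about 121 banks so that a 75 percent response rate
# yields roughly 90 completed surveys.
print(stratum_sample_size(1500))
```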
To determine the extent to which banks have closed branches in the Southwest border region and the reasons for the closures, we analyzed data from a variety of sources and interviewed bank officials. To assess trends in bank branch closures, we analyzed data from FDIC's Summary of Deposits database on the size and location of bank branches. Our measure of bank branches includes both full-service and limited-service branches. Limited-service branches provide some conveniences to bank customers but generally offer a reduced set of bank services. As of 2016, limited-service branches were about 2.5 percent of branches in the Southwest border region. We compared growth rates for all branches in the Southwest border region and only full-service branches, for 2013 through 2016, and found that they were almost identical (-5.92 percent and -5.93 percent, respectively). We combined the Summary of Deposits data on the size and location of bank branches with demographic, economic, and money laundering-related risk data from the U.S. Census Bureau, U.S. Department of Commerce's Bureau of Economic Analysis, and FinCEN, among other sources. We then utilized the merged dataset to conduct an econometric analysis of the potential drivers of branch closures (see app. III for information on the econometric analysis). We also compared trends in branch closures in the Southwest border region to national trends, as well as trends in counties in Southwest border states that were not in the Southwest border region, and trends in HIFCA and HIDTA counties not in Southwest border states. We also interviewed representatives from banks that operate in the Southwest border region about the time and resources required to file SARs and how they approached the decision to close a branch. To describe what Southwest border banking customers and others told us about any effects of account terminations and branch closures in Southwest border communities, we conducted site visits to communities in three of the four Southwest border states (Nogales, Arizona; San Ysidro, California; and McAllen, Texas). We selected these communities to achieve a sample of locations that collectively satisfied the following criteria: (1) counties with different classifications of how rural or urban they are based on their RUCC classification; (2) counties that experienced different rates of branch closures from 2013 through 2016; and (3) counties that had received different designations by the federal banking regulators as distressed or underserved as of June 1, 2016. Perspectives gathered from our visits to the selected cities cannot be generalized to all locations in Southwest border counties. During our site visits, we conducted a total of five discussion groups and summarized participants' responses about how they were affected by account terminations and branch closures in their communities. Discussion groups included a range of 2 to 10 participants with varied experiences related to access to banking services in their area, including customers whose accounts were terminated or whose branch was closed. Participants were selected using a convenience sampling method, whereby we coordinated with local city government and chamber of commerce officials who agreed to help us recruit participants and identify facilities where the discussion groups were held. Local officials disseminated discussion group invitations and gathered demographic data on potential participants. 
Three of the five discussion group sessions included business banking customers—persons representing businesses that utilize banking services (such as banking accounts or business loans). The other two sessions included nonbusiness retail banking customers—persons with individual experience with banking services (such as a personal checking or savings account)—and were conducted in Spanish. Each session was digitally recorded, translated (if necessary), and transcribed by an outside vendor, and we used the transcripts to summarize participant responses. An initial coder assigned a code that best summarized the statements from discussion group participants and provided an explanation of the types of discussion group participant statements that should be assigned to a particular code. A separate individual reviewed and verified the accuracy of the initial coding. The initial coder and reviewer discussed orally and in writing any disagreements about code assignments and documented consensus on the final analysis results. Discussion groups are intended to generate in-depth information about the reasons for the participants' views on specific topics. The opinions expressed by the participants represent their points of view and may not represent the views of all residents in the Southwest border region. We also interviewed various border stakeholders including economic development specialists, industry and trade organizations that focus on border trade and commerce, as well as chamber of commerce and municipal officials representing border communities. We reviewed recent articles on the effects of account terminations and branch closures on communities as well as research organization, industry, and government reports. Finally, we reviewed academic studies on the effects of branch closings on communities. In particular, we focused our review on one recent paper that estimated the impact of branch closings, using detailed geographic and lending data, on employment growth and small business lending, among other outcomes. We identified the census tracts of all branch closures in our three site visit communities from 2013 through 2016 and applied impact estimates from this research to the level of small business lending and employment in these communities, based on data from Community Reinvestment Act reporting (small-business lending) and the U.S. Census Bureau's American Community Survey (employment). These results are intended to illustrate an approximate magnitude of effects and not produce precise estimates of local impacts. To evaluate how FinCEN and the federal banking regulators have assessed and responded to concerns about derisking and the effectiveness of those efforts, we reviewed guidance the agencies issued to banks related to derisking, related agency memorandums and documents, and an OCC internal analysis on derisking. We also reviewed guidance from FATF on AML and terrorist financing measures and financial inclusion. In addition, we reviewed various executive orders that require most executive branch agencies, and encourage independent agencies, to develop a plan to conduct retrospective analyses, and Office of Management and Budget guidance implementing those executive orders. We reviewed Treasury documentation on BSA regulatory reviews and the BSA-related components of the 2007 and 2017 Economic Growth and Regulatory Paperwork Reduction Act reports issued by the Federal Financial Institutions Examination Council (FFIEC). 
We also reviewed federal internal control standards related to risk assessment. Finally, we interviewed officials from FinCEN and the federal banking regulators about the actions they have taken related to derisking, as well as retrospective reviews they had conducted on BSA regulations. We utilized multiple data sources throughout our review and took steps to assess the reliability of each one. First, to assess the reliability of data in FDIC’s Summary of Deposits database we discussed the appropriateness of the database for our purposes with FDIC officials, reviewed related documentation, and conducted electronic testing for missing data, outliers, or any obvious errors. Second, to assess the reliability of FinCEN’s data on SAR and CTR filings, we interviewed knowledgeable agency officials on the appropriateness of the data for our purposes, any limitations associated with the data, and the methods they used to gather the data for us. We also reviewed related documentation and conducted electronic testing to identify missing data, outliers, and any obvious errors. Third, we assessed the reliability of the HIFCA and HIDTA county designations by interviewing officials from FinCEN, the Office of National Drug Control Policy, and the National HIDTA Assistance Center on changes to county designations over time and reviewed related documentation. Fourth, to assess the reliability of FDIC’s Statistics on Depository Institutions database, we reviewed related documentation and conducted electronic testing of the data for missing data, outliers, or any obvious errors. Fifth, we interviewed officials from FDIC, the Federal Reserve, and OCC on the data the agencies collect related to BSA/AML bank exam violations and also asked them questions related to methods they used to gather the data for us and any limitations associated with the data. We also manually reviewed the data for any obvious errors and followed up with agency officials, as needed. Finally, for data we obtained from the U.S. Census Bureau (American Community Survey data on population and age and the Residential Building Permits Survey), the Bureau of Economic Analysis (Local Area Personal Income), and Department of Agriculture (Rural-Urban Continuum Codes), we reviewed related documentation, interviewed knowledgeable officials about the data, when necessary, and conducted electronic testing of the data for missing data, outliers, or any obvious errors. We concluded that all applicable data were sufficiently reliable for the purposes of describing BSA/AML risks and compliance challenges for Southwest border banks; identifying banks to survey on account terminations and limitations; evaluating branch closure trends in the Southwest border region and elsewhere, and the factors driving those closures; and describing the effects for Southwest border communities experiencing branch closures and account terminations. We conducted this performance audit from March 2016 to February 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
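For context on the electronic testing mentioned above, the sketch below shows the kinds of checks for missing data, outliers, and obvious errors that such testing can involve, assuming a pandas DataFrame with hypothetical columns (deposits, year, county_fips). It is an illustration of the general approach, not the scripts GAO ran.

```python
import pandas as pd

# Minimal sketch of electronic testing for missing data, outliers, and
# obvious errors. The DataFrame and column names are hypothetical.

def electronic_tests(df: pd.DataFrame) -> dict:
    report = {}

    # Missing data: count of null values in each column.
    report["missing_by_column"] = df.isna().sum()

    # Obvious errors: values that should never occur, such as negative
    # deposits or years outside the study period.
    report["negative_deposits"] = (df["deposits"] < 0).sum()
    report["out_of_range_years"] = (~df["year"].between(2013, 2016)).sum()

    # Malformed identifiers: county FIPS codes should be 5-digit strings.
    report["bad_fips"] = (
        ~df["county_fips"].astype(str).str.fullmatch(r"\d{5}")
    ).sum()

    # Outliers: flag deposit values far outside the interquartile range.
    q1, q3 = df["deposits"].quantile([0.25, 0.75])
    iqr = q3 - q1
    report["deposit_outliers"] = (
        (df["deposits"] < q1 - 3 * iqr) | (df["deposits"] > q3 + 3 * iqr)
    ).sum()

    return report
```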
Appendix II: Responses to Selected Questions from GAO's Survey of Banks on Account Terminations and Limitations
From July 2017 to September 2017, we administered a web-based survey to a nationally representative sample of banks. In the survey, we asked banks about the number of account terminations for reasons related to Bank Secrecy Act/anti-money laundering (BSA/AML) risk; whether banks are terminating, limiting, or not offering accounts to certain types of customer categories; and the factors influencing these decisions. We collected information for the 3-year time period of January 1, 2014, to December 31, 2016. All survey results presented in this appendix are generalizable to the population of banks, except where otherwise noted. We obtained a weighted survey response rate of 46.5 percent. Because our estimates are from a generalizable sample, we express our confidence in the precision of our particular estimates as 95 percent confidence intervals. Responses to selected questions we asked in our survey that were directly applicable to the research objectives in this report are shown below. Survey results presented in this appendix are categorized into three groups: (1) all banks nationwide, (2) Southwest border banks, and (3) non-Southwest border banks, unless otherwise noted. Our survey comprised closed- and open-ended questions. In this appendix, we do not provide information on responses provided to the open-ended questions. For a more detailed discussion of our survey methodology, see appendix I. Questions 15 through 23 applied only to banks in our sample that had branches domiciled both inside and outside of the Southwest border region, in order to obtain information on their accounts domiciled in the Southwest border region. Question 21 asked: Between January 1, 2014, and December 31, 2016, did the bank terminate any cash-intensive small business checking, savings, or money market accounts domiciled in the bank's Southwest border branches for reasons related to BSA/AML risk? (Check one.) The percentage estimates for these questions are not statistically reliable.
Appendix III: Econometric Analysis of Bank Branch Closures
This technical appendix outlines the development, estimation, results, and limitations of the econometric model we described in the report. We undertook this analysis to better understand factors that may have influenced banks to close branches in recent years.
Model Development and Specification
We developed a number of econometric models that included demographic, economic, and risk factors that might have influenced branch closures nationally since 2010. We developed these models based on a small number of relevant studies, our discussions with banks and regulators, and our own prior empirical work on banking. Our models are based on all counties with bank branches in the United States and are designed to predict whether a county will lose a branch the following year based on the characteristics of the county. Because we are modeling a binary outcome (whether or not a county lost a branch), we use a specific functional form for our regression models known as a logistic regression (logit). The demographic factors included in our models are rural-urban continuum codes, age profile (proportion of the population of the county over 45), and the level of per capita income. 
We chose these demographic factors, in particular, because they tend to be associated with the adoption of mobile banking, which may explain the propensity to close branches in a community. The economic factors included in our models—intended to reflect temporary or cyclical economic changes affecting the county—are the growth of per capita income, growth in building permits (a measure of residential housing conditions), and growth of the population. The money laundering-related risk factors are whether a county has been designated a High Intensity Financial Crime Area (HIFCA) or a High Intensity Drug Trafficking Area (HIDTA), and the level of suspicious or possible money laundering-related activity reported by bank branches in the county (known as Suspicious Activity Report (SAR) filings). HIDTA and HIFCA designations in our model could proxy for a number of features of a county, including but not limited to the intensity of criminal activity related to drug trafficking or financial crimes. Bank officials we spoke with generally said that SAR filings were a time- and resource-intensive process, and that the number of SAR filings—to some extent—reflected the level of effort, and overall BSA compliance risk, faced by the bank. That said, the impact of SAR variables in our models could reflect a combination of (1) the extent of BSA/AML compliance effort and risk faced by the bank, as described by bank officials, and (2) the underlying level of suspicious or money laundering-related activity in a county. We constructed variables from the following data sources to estimate our models:
Net branch closures and the size of deposits in each county, from the Federal Deposit Insurance Corporation's (FDIC) Summary of Deposits;
Rural-urban continuum codes, from the U.S. Department of Agriculture;
Population growth and age profile in each county, from the Census Bureau's American Community Survey;
Per capita income, from the Bureau of Economic Analysis Local Area Personal Income data;
Building permits by county, from the Census Bureau;
HIFCA and HIDTA county designations, from the Financial Crimes Enforcement Network (FinCEN) and the Office of National Drug Control Policy, respectively; and
SAR filings by depository institution branches, from FinCEN.
We estimated a large number of econometric models to ensure that our results were generally not sensitive to small changes in our model, in other words, to determine if our results were "robust." Our results, as described in the body of the report, were highly consistent across models and were generally both statistically and economically significant—that is, results of this size are unlikely to occur at random if there were no underlying relationship (p-values of interest are almost always less than 0.001), and the estimated impacts on the probability of branch closures are substantively relevant. For our baseline model, we estimated branch closures (dependent variable: 1/0 for whether or not a county lost one or more branches, on net, that year) as a function of the 1-year lagged share of the population over 45 in the county, a rural-urban continuum code, the level of per capita income, population growth, growth in the value of building permits, growth in per capita income, whether or not the county is a HIDTA, and the level of suspicious activity report filings per billion dollars of deposits held in the county, including time and state fixed effects. Economic variables were adjusted for inflation (converted to constant 2015 dollars) using appropriate price indices. 
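To make the baseline specification concrete, the following sketch shows one way such a county-year logit could be estimated in Python with statsmodels. The DataFrame and column names (for example, lost_branch, share_over45_lag, sars_per_bil_deposits_lag, county_fips) are hypothetical placeholders rather than the actual analysis files, and the lagging, truncation, and inflation adjustments described above are assumed to have been done upstream.

```python
import pandas as pd
import statsmodels.formula.api as smf

def estimate_baseline(panel: pd.DataFrame):
    """Sketch of the baseline county-year logit with state and year fixed effects.

    `panel` is a hypothetical county-year DataFrame with no missing values;
    all column names are placeholders, not the variable names used in the report.
    """
    formula = (
        "lost_branch ~ share_over45_lag + C(rucc) + income_pc_lag"
        " + pop_growth_lag + permit_growth_lag + income_growth_lag"
        " + hidta + sars_per_bil_deposits_lag"
        " + C(state) + C(year)"  # state and year fixed effects
    )
    result = smf.logit(formula, data=panel).fit(
        cov_type="cluster",                          # cluster-robust standard errors
        cov_kwds={"groups": panel["county_fips"]},   # clustered at the county
        disp=False,
    )
    # Average marginal effects, comparable to those summarized in this appendix.
    print(result.get_margeff(at="overall").summary())
    return result
```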
We generally estimated models with cluster robust standard errors, clustering at the county. The logistic regression equation for our baseline model is below, where the c subscript represents the county and the t subscript represents the year:

Pr(closure_{c,t} = 1) = f(β₀ + β′X_{c,t-1} + state and year fixed effects),

where X_{c,t-1} contains the demographic, economic, and money laundering-related risk variables described above and f is the cumulative logistic function:

f(z) = e^z / (1 + e^z)

Full year SAR filings are only available for 2014–2016, which is generally the limiting factor on the time dimension of our panel. Because FinCEN changed reporting requirements as of April 2013, we were able to obtain an additional year of data by calculating SAR filings for 4 truncated years, which is April–December 2013, April–December 2014, April–December 2015, and April–December 2016. As we discussed earlier in the report, this variable is an important geographic measure of money laundering-related risk, based on a bank-reported measure of the extent of suspicious or money laundering-related activity associated with branches located in a particular county. After confirming that results were similar for full year and truncated year SARs, we continued estimation with truncated year SARs to benefit from the additional year of data. We report estimates from the version of our baseline model that includes truncated year SARs. Marginal effects for select coefficients (and associated p-values) are reported in table 20 below, along with the time period, sample size, and goodness of fit (pseudo r-squared). Generally speaking, across our baseline specifications and robustness tests, counties were more likely to lose branches, all else equal, if they were (1) urban, high income, and had a younger population (proportion under 45), or (2) designated HIFCA, HIDTA, or had higher SAR filings. Economic variables were generally not statistically significant. Below is a list of the robustness tests we performed, which changed how or which variables influenced branch closures in the model and over what time period. Unless specifically noted, the results described above were very similar (i.e., robust) in the models listed below:
As an alternative to total SARs as an indicator of money laundering-related risk, we estimated a model with only those SARs that were classified as money laundering or structuring. Total SARs include suspicious activity that may be unrelated to money laundering or structuring, including, for example, check fraud.
As an alternative to HIDTAs as a county risk designation, we estimated a model with HIFCA county designations. The impact of HIFCAs in the model was smaller in magnitude and less statistically significant.
We estimated a model interacting HIDTAs with SARs (the interaction suggests SARs have a larger impact on non-HIDTA counties).
We estimated models restricted to only rural counties or only urban counties. SARs and HIDTAs have larger effects in urban counties, and the impacts of the age profile and per capita income are not statistically significant in the model with only rural counties.
We estimated models with MSA fixed effects or state-year fixed effects, in addition to state and year fixed effects.
We estimated models that assumed that economic conditions from the previous 2 years were relevant or only economic conditions from 2 years prior. Our baseline model assumed only the prior year's economic conditions influenced branch closures.
We estimated a panel logit with random effects.
We estimated a panel logit with county fixed effects. None of the results discussed above are statistically significant when county fixed effects are introduced. 
This suggests that the model is identified primarily based on cross-sectional variation (differences between counties that persist over time) rather than time series variation in the relevant variables. The role of county fixed effects here may also indicate the presence of unobserved county characteristics that are omitted from our models, although it is generally not possible to simultaneously estimate the role of highly persistent factors that influence branch closures while including fixed effects.
We estimated models where we omitted small percentage changes in branches from our indicator dependent variable—for example, we estimated models with indicators equal to one only if branch losses were above 3 percent or 5 percent (omitting smaller branch losses from the dependent variable altogether). Generally speaking, demographic factors have less explanatory power for larger loss levels, although SARs remain statistically significant and at practically meaningful magnitudes. This suggests that higher SARs are relatively better at explaining larger branch losses while demographic factors are better at explaining smaller branch losses.
Despite the robustness of our results and our efforts to control for relevant factors, our results are subject to a number of standard caveats. The variables we use come from a number of datasets, and some of them have sampling error, relied on imputation, or are better thought of as proxy variables that measure underlying factors of interest with some degree of error. As such, our statistical measures, including standard errors, p-values, and goodness of fit measures such as pseudo r-squared, should be viewed as approximations. Some of the effects we measure based on these variables may reflect associational rather than causal relationships. Also, our regression models may be subject to omitted variable bias or specification bias—for example, it is unlikely that we have been able to quantify and include all relevant factors in bank branching decisions, and even where we have measured important drivers with sufficient precision, the functional form assumptions embedded in our choice of regression model (e.g., logistic regression) are unlikely to be precisely correct. Should omitted variables be correlated with variables that we include, the associated coefficient may be biased. We interpret our results, including our statistical measures and coefficient values, with appropriate caution.
Appendix IV: Comments from the Board of Governors of the Federal Reserve System
Appendix V: Comments from the Federal Deposit Insurance Corporation
Appendix VI: Comments from the Office of the Comptroller of the Currency
Appendix VII: GAO Contact and Staff Acknowledgments
GAO Contact:
Staff Acknowledgments
In addition to the individual named above, Stefanie Jonkman (Assistant Director), Christine Houle (Analyst in Charge), Carl Barden, Timothy Bober, Rebecca Gambler, Toni Gillich, Michael Hansen, Michael Hoffman, Jill Lacey, Patricia Moye, Erica Miles, Marc Molino, Steve Robblee, Tovah Rom, Jerry Sandau, Mona Sehgal, Tyler Spunaugle, and Verginie Tarpinian made key contributions to this report.
Why GAO Did This Study Some Southwest border residents and businesses have reported difficulties accessing banking services in the region. GAO was asked to review if Southwest border residents and businesses were losing access to banking services because of derisking and branch closures. This report (1) describes the types of heightened BSA/AML compliance risks that Southwest border banks may face and the BSA/AML compliance challenges they may experience; (2) determines the extent to which banks have terminated accounts and closed branches in the region and the reasons for any terminations and closures; and (3) evaluates how regulators have assessed and responded to concerns about derisking in the region and elsewhere, and how effective their efforts have been; among other objectives. GAO surveyed a nationally representative sample of 406 banks, which included the 115 banks that operate in the Southwest border region; analyzed Suspicious Activity Report filings; developed an econometric model on the drivers of branch closures; and interviewed banks that operate in the region. What GAO Found “Derisking” is the practice of banks limiting certain services or ending their relationships with customers to, among other things, avoid perceived regulatory concerns about facilitating money laundering. The Southwest border region is a high-risk area for money laundering activity, in part, because of a high volume of cash and cross-border transactions, according to bank representatives and others. These types of transactions may create challenges for Southwest border banks in complying with Bank Secrecy Act/anti-money laundering (BSA/AML) requirements because they can lead to more intensive account monitoring and investigation of suspicious activity. GAO found that, in 2016, bank branches in the Southwest border region filed 2-1/2 times as many reports identifying potential money laundering or other suspicious activity (Suspicious Activity Reports), on average, as bank branches in other high-risk counties outside the region (see figure). According to GAO's survey, an estimated 80 percent (+/- 11 percent margin of error) of Southwest border banks terminated accounts for BSA/AML risk reasons. Further, according to the survey, an estimated 80 percent (+/- 11) limited or did not offer accounts to customers that are considered high risk for money laundering because the customers drew heightened regulatory oversight—behavior that could indicate derisking. Counties in the Southwest border region have been losing bank branches since 2012, similar to national and regional trends. Nationally, GAO's econometric analysis generally found that counties that were urban, younger, had higher income or had higher money laundering-related risk were more likely to lose branches. Money laundering-related risks were likely to have been relatively more important drivers of branch closures in the Southwest border region. Regulators have not fully assessed the BSA/AML factors influencing banks to derisk. Executive orders and legislation task the Department of the Treasury's Financial Crimes Enforcement Network (FinCEN) and the federal banking regulators with reviewing existing regulations through retrospective reviews to determine whether they should be retained or amended, among other things. FinCEN and federal banking regulators have conducted retrospective reviews of parts of BSA/AML regulations. The reviews, however, have not evaluated how banks' BSA/AML regulatory concerns may influence them to derisk or close branches. 
GAO's findings indicate that banks do consider BSA/AML regulatory concerns in providing services. Without assessing the full range of BSA/AML factors that may be influencing banks to derisk or close branches, FinCEN, the federal banking regulators, and Congress do not have the information needed to determine if BSA/AML regulations and their implementation can be made more effective or less burdensome. What GAO Recommends GAO recommends that FinCEN and the federal banking regulators conduct a retrospective review of BSA regulations and their implementation for banks. The review should focus on how banks' regulatory concerns may be influencing their willingness to provide services. The federal banking regulators agreed to the recommendation. FinCEN did not provide written comments.
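To illustrate how a survey estimate and margin of error of the size reported above relate, the sketch below applies a generic normal-approximation formula for a proportion. GAO's actual estimator, weighting, and response counts are not described here, so the sample size and proportion used are assumptions for illustration only.

```python
# Illustrative only: a generic normal-approximation margin of error for an
# estimated proportion. GAO's actual survey estimator, weighting, and response
# counts are not described here, so the values below are assumptions.
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95 percent confidence half-width for an estimated proportion."""
    return z * math.sqrt(p_hat * (1.0 - p_hat) / n)

p_hat = 0.80  # example estimate: share of banks that terminated accounts
n = 50        # hypothetical number of responding Southwest border banks
print(f"{p_hat:.0%} +/- {margin_of_error(p_hat, n):.0%}")  # prints 80% +/- 11%
```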
gao_GAO-18-558
gao_GAO-18-558_0
Background Utilities Privatization Authorities and Intent Congress provided statutory authority in 1997 for the privatization of utility systems on military installations to address DOD’s need to supply reliable, safe, and efficient utility services to its installations. In defining a utility system, the authority includes systems for the generation and supply of electric power; the treatment or supply of water; the collection or treatment of wastewater; the generation or supply of steam, hot water, and chilled water; the supply of natural gas; and the transmission of telecommunications. Included in a utility system are the associated equipment, fixtures, structures, and other improvements, as well as real property, easements, and rights-of-way. The authority states that the Secretary of a military department may convey a utility system to a municipal, private, regional, district, or cooperative utility company or other entity. DOD’s policy permits the military departments to maintain ownership of utility systems and not privatize them for unique security reasons, such as installations with highly sensitive missions, or when privatization is uneconomical. Utilities Privatization Roles and Responsibilities ASD (EI&E) oversees DOD’s utilities privatization program, which is part of the department’s installation energy management portfolio. In this capacity, ASD (EI&E) is responsible for developing policies and overseeing the program. There are two main sources of guidance for utilities privatization— a DOD instruction on energy management at the installation level, DOD Instruction 4170.11, Installation Energy Management, and a series of memorandums specific to utilities privatization. The instruction and memorandums direct the military departments to attempt to privatize all utility systems, unless the Secretary of the military department determines that the system is exempt from privatization for security or economic reasons. Some of the memorandums were issued to provide the military departments with guidance to implement certain changes to the statutory authority related to utilities privatization. For example, the congressional authority was amended in 2006 to require the Secretary of Defense’s (or a designee’s) approval for utilities privatization contracts with terms longer than 10 years, but not to exceed 50 years. The subsequent guidance memo delegated the approval from the Secretary of Defense to the Secretaries of the military departments and the Director of the DLA. The military departments have the responsibility for program implementation, as the statutory authority to privatize utility systems is granted to the Secretaries of the military departments. As such, the military departments determine which systems will be privatized and which systems may be exempted from privatization due to economic or security reasons. According to military department officials, each military department considers utilities privatization as an option for the recapitalization of utility infrastructure. The Army views utilities privatization as the preferred option, while the Navy and the Air Force consider utilities privatization to be one option among others. Specifically, Army officials stated that they follow the statute and ASD EI&E program guidance documents. Those documents state that utilities privatization is the preferred method for recapitalizing utility infrastructure and officials stated that the Army plans to assess all of its utility systems for privatization. 
The Army prioritizes systems in the worst condition and systems with important missions for privatization. According to Army officials, in cases where the utility system is in poor condition and the installation performs important missions, the Army may privatize utility systems even if the costs in the contractor’s proposal exceed the costs in the government’s “should cost” estimate by as much as 15 percent. The Air Force’s utilities privatization policy states that the program’s goal is to permanently convey utility systems on Air Force active, reserve, and guard installations to private or public utility companies in conjunction with an award of a long-term utility services contract for the operation and maintenance of those systems. The purpose of privatizing a utility system is to restore utility infrastructure to industry standards for operations, maintenance, recapitalization, health, and safety while achieving a monetary savings over the cost of continued Air Force ownership. According to Navy officials, the Navy has not pursued utilities privatization in recent years but is currently in the process of assessing utility systems for potential conveyance. Any decisions to convey utility systems will be based on a business case analysis for total ownership cost and the ability to improve reliability, resilience, and efficiency for priority missions. Navy officials noted that the Navy follows DOD policy for utility conveyance authority. DLA works with the military departments to plan for utilities privatization and procures and administers 61 utilities privatization contracts for the Departments of the Army and Air Force from the pre-solicitation phase and into the post-award phase. According to DLA officials, the entire pre-award process takes approximately 915 days, based on the assumption that the solicitation receives 1 to 6 proposals from contractors. Once an award decision is made, privatization involves two transactions with the successful contractor—the conveyance of the utility system infrastructure and the acquisition of utility services for upgrades, operations, and maintenance under a long-term contract of up to 50 years. According to DLA officials, the contract term can be up to 50 years because it allows the military departments the opportunity to spread the high costs to repair and replace existing utility infrastructure over a long period of time. The Department of the Navy administers its own utilities privatization contracts for Navy and Marine Corps installations. DOD’s Privatization of Utility Systems Since 1997 As of January 2017, the military departments have privatized approximately 23 percent (601 of 2,574) of their utility systems. As shown in table 1, the Army has privatized the most systems (369), followed by the Air Force (174), and then the Navy (58). In addition, table 1 shows the number of utility systems the military departments have exempted for either economic or security reasons. As of January 2017, the military departments have 600 systems that have not been privatized or exempted from privatization. The Army and the Air Force have plans to privatize more systems in the coming years. Industrial Control System Vulnerabilities and Cybersecurity Policies and Guidance According to an ASD (EI&E) official, information residing on ICS associated with privatized utilities systems, and more broadly, information on any ICS, may be used by adversaries to gain insights into operations on installations or to conduct a cyberattack. According to U.S.
Cyber Command, DOD’s ICS are a potential target and an adversary could gain unauthorized access and attack DOD in a variety of ways, including removing data from an ICS, inserting false data to corrupt the monitoring and control of utility infrastructure through ICS, and physically destroying utility infrastructure controlled by an ICS. As such, DOD’s 2015 Cyber Strategy recognizes the need to protect DOD information regardless of where it resides—on DOD’s own information systems and ICS or on contractor-owned information systems and ICS— so that DOD capabilities are not exploited, misdirected, countered, or cloned. Figure 1 illustrates a potential cyberattack using false data in an ICS. In addition, there have been reports of successful attacks using ICS associated with infrastructure. Specifically, the Office of the Director of National Intelligence issued a report in 2017 describing several of these attacks. For example, the report noted that in 2010, Stuxnet was the first computer virus specifically targeting ICS, and it allowed attackers to take control of the systems and manipulate real-world equipment without the operators knowing. The attacker targeted certain equipment at the Natanz uranium enrichment plant in Iran, manipulated computer systems that control and monitor the speed of the centrifuges, and reportedly destroyed roughly one-fifth of Iran’s nuclear centrifuges by causing them to spin out of control. The attacker increased the pressure on spinning centrifuges while showing the control room that everything appeared normal by replaying recordings of the plant’s protection system values during the attack. In another example, the report noted that in 2012, a U.S. power utility’s ICS was infected with a virus when a third-party technician used an infected USB drive to upload software to the systems. The virus resulted in downtime for the systems and delayed plant restart by approximately 3 weeks. In recognition of these threats, DOD has developed cybersecurity policies and guidance for ICS that apply to both DOD-owned ICS and contractor- owned ICS. Specifically, For DOD-owned ICS, the department has issued several policies and guidance for the cybersecurity of ICS. For example, in 2016, in response to one of our prior recommendations that ASD (EI&E) address challenges the military services faced in implementing the risk management framework guidance, ASD (EI&E) directed the services to develop plans identifying the goals, milestones, and resources needed to identify, register and implement cybersecurity controls on DOD facility-related ICS. Further, DOD issued additional guidance that was intended to assist the military services in developing implementation plans to meet these requirements. In 2016, DOD issued guidance, in the form of Unified Facilities Criteria, which provides criteria for the inclusion of cybersecurity in the design of control systems in order to address appropriate security controls during design and subsequent construction. Also, in 2016, the U.S. Cyber Command and the Office of the Secretary of Defense issued guidance that identifies device anomalies that could indicate a cyber incident, specific detection procedures to assess the anomaly, and procedures to recover electronic devices, including removing and replacing the device. 
For contractor-owned ICS, including ICS owned by privatized utility system owners, DOD has a Defense Federal Acquisition Regulation Supplement clause to require that contractors take steps to ensure safeguards are put in place to protect covered defense information, which is defined as unclassified controlled technical information or other information that is processed, stored, or transmitted on the contractor’s information system or ICS. Controlled unclassified information is information that requires safeguarding or dissemination controls pursuant to and consistent with law, regulations, and government-wide policies. The clause also requires the contractor to report cyber incidents. Military Departments Have Some Types of Information on Their Privatized Utility Systems, but Have Not Tracked Contract Performance or Developed Measurable Performance Standards The military departments have information about utility systems that have been privatized, but they have not tracked utilities privatization contract performance or developed measurable performance standards for these contracts. Specifically, for the systems in our sample, the military departments have some information on the costs for utility infrastructure improvements and commodities, system reliability, and contractor performance evaluations. Costs for Utility Infrastructure Improvements: The military departments have information on the estimated cost avoidance at the time of contract award for utility infrastructure improvements; however, none of the military departments have determined whether the utilities privatization contracts are on track to achieve those cost avoidance estimates. For example, officials at Fort Bragg, North Carolina, estimated at the time of contract award that it would have cost the Army $61.4 million to provide natural gas utility services over the life of the utilities privatization contract, while the successful proposal from the contractor estimated a cost of $52.3 million for the same services. Therefore, the Army initially projected that it would avoid an estimated cost of $9.1 million for natural gas utility services at Fort Bragg over the life of the contract. However, the estimate at the time of contract award used by each military department does not account for changes in the cost of the contract over time. Moreover, none of the military departments measure actual cost avoidance over time, and some utilities privatization contracts have experienced cost increases. Specifically, we found that six of the nine utilities privatization contracts in our sample included modifications, which increased the original cost of the contract by more than 5 percent after adjusting for inflation. For example, the contract to privatize electric and water services at Tyndall Air Force Base, Florida, had 59 modifications, which have increased the total estimated contract value by 36 percent ($42 million) to $159 million since it was awarded in September 2010. In addition, the water and wastewater privatization contract at Fort Bragg had 219 modifications, which have increased the total estimated contract value by 96 percent ($552 million) to about $1.1 billion since it was awarded in September 2007. According to military department and DLA officials, there are limitations to using the information in the modifications to analyze changes in cost over time associated with the utilities privatization contracts because some cost changes may have occurred even if the government had retained ownership of the utility system.
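As a rough illustration of the two calculations cited above, the sketch below computes the cost avoidance projected at award (the government's "should cost" estimate minus the successful proposal) and the percent growth in total estimated contract value from modifications, using the Fort Bragg and Tyndall figures in the text; the function names are illustrative, not actual DOD or DLA tools.

```python
# Minimal sketch of the two calculations discussed above, using the figures
# cited in the text; the function names are illustrative, not DOD or DLA tools.
def cost_avoidance_at_award(government_should_cost, winning_proposal):
    """Cost avoidance projected at contract award (same units for both inputs)."""
    return government_should_cost - winning_proposal

def percent_growth(original_value, current_value):
    """Percent increase in total estimated contract value since award."""
    return (current_value - original_value) / original_value * 100

# Fort Bragg natural gas contract, in millions of dollars
print(round(cost_avoidance_at_award(61.4, 52.3), 1))   # 9.1

# Tyndall AFB electric/water contract: $42 million in modifications raised the
# estimated value to $159 million, from an original value of about $117 million
print(round(percent_growth(159 - 42, 159)))            # 36
```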
DLA officials stated that the modifications are made for a number of different reasons, including changes in mission requirements, changes to the utility service requirements, and capital upgrade projects on the installation. According to military department officials, cost changes associated with changes in the installation’s mission would likely have occurred had the military department retained ownership and would not be a cost increase due to privatization. Thus, it is difficult to determine the extent to which cost increases affect the cost avoidance estimated at the time of contract award. In 2006, we reported that cost growth in DOD’s utilities privatization contracts may become a concern because once a utility system is privatized, the government enters into a sole-source relationship with the privatized utilities system owner, which may put the government at a disadvantage when negotiating prices for utility system changes. To mitigate this disadvantage, DLA and Air Force officials stated that they use experts who review proposals from the privatized utility system owners to help ensure that costs are fair and reasonable. Costs for Utility Commodities: Military department officials stated that they have observed reduced usage of the commodity provided by the utility, such as water usage, and thus decreased commodity costs through utilities privatization; however, installation officials have not tracked the data and associated savings. Furthermore, the officials have not determined whether any savings were fully attributable to utilities privatization, recognizing that other factors may have affected commodity usage. For example, officials at Tyndall Air Force Base, Florida, stated that repairs to their privatized water system infrastructure have resulted in less water usage, and that there has been a decrease in the number of leaks. An Army official estimated commodity cost savings by comparing commodity costs prior to utilities privatization with commodity costs after utilities privatization. This approach was based on the assumption that any such savings were primarily due to utilities privatization. However, an Army official stated that the commodity cost savings the Army estimated could be attributed to other factors outside of utilities privatization, such as decreases in base population or execution of Energy Savings Performance Contracts. Air Force and Navy officials stated they did not estimate commodity cost savings. System Reliability: Military department officials stated that they have perceived improvements in utility system reliability since utilities privatization and have access to contractor-provided data to assess reliability; however, the military departments have not used the contractor-provided data to determine reliability trends over time. For example, Army officials at Arlington National Cemetery, Virginia, stated that they could not recall an unscheduled outage since the privatization of the electric system in 2015. In addition, officials at Tyndall Air Force Base, Florida, stated that there was a significant drop in outages after the electric system was privatized in 2010. However, we found that none of the military departments have formally measured improvements in reliability due to utilities privatization, because, according to military installation officials, they did not track reliability statistics prior to utilities privatization nor were they required to do so. 
In addition, we found that not all installations in our sample of cases have analyzed contractor-provided outage data, which includes information on the number of scheduled and unscheduled outages and the causes of the outages, to verify perceived reliability improvements. However, officials at Hill Air Force Base, Utah, stated that the system owner provides reports that track reliability over time and trends could be determined through this data collection. As we previously reported, there are benefits to collecting utility disruption information since it can be used to identify repairs and to prioritize funding for those repairs. Contractor Performance Evaluations: The military departments use the Contractor Performance Assessment Reporting System to subjectively evaluate each utility system owner’s performance across several categories, including management, schedule, and cost control, among others; however, based on our review of the evaluations associated with the nine contracts in our sample, we found that the evaluations were anecdotal and varied in frequency and quality. While we found that the assessing officials generally reported satisfactory system owner performance, the performance periods in the evaluations varied. For example, one evaluation for the water privatization contract at Naval Air Station Key West, Florida, covered 4 years, while the subsequent evaluation for the same contract covered 1 year. Another evaluation for the natural gas privatization contract at Fort Bragg, North Carolina, covered a performance period of 1 year and 4 months. Guidance for these contractor assessments indicates that agencies should conduct contractor performance evaluations on an interim annual basis and upon final completion of the contract. In addition, evaluation information supporting ratings varied. In one evaluation for the electric and water privatization contract at Tyndall Air Force Base, Florida, an assessing official cited multiple concerns in the supporting narrative for an evaluation area and rated it as “unsatisfactory,” while the subsequent evaluation for the same contract provided an “exceptional” rating for the same evaluation area with no explanation of how previous concerns were addressed. The military departments have not tracked utilities privatization contract performance and have not developed measurable performance standards because ASD (EI&E) has not issued guidance requiring the military departments to develop metrics and measurable performance standards. Standards for Internal Control in the Federal Government state that management should design control activities—such as the establishment of performance measures and indicators—to achieve objectives. In addition, our prior work has shown that an element of sound planning focuses on developing a set of metrics that will be applied to gauge progress toward attainment of the plan’s long-term goals. The metrics can be used to evaluate the plan through objective measurement and systematic analysis to determine the manner and extent to which privatized utility systems meet measurable performance standards. According to our prior work, performance measurement focuses on whether a program has achieved its objectives, expressed as measurable performance standards. Moreover, DOD’s guidebook for the acquisition of services states that services acquisition is about acquiring performance results that meet performance requirements needed to successfully execute an organization’s mission. 
Those performance requirements and how the government will assess the contractor’s performance must be determined before the contract is awarded. DOD has guidance that requires the military departments to conduct a post-conveyance review for each privatized utility system. That guidance states that the military departments shall compare utilities privatization costs after the contract award to projected costs to identify whether there is a problem with cost growth. The guidance does not require the development of metrics and associated measurable performance standards to report on the performance of utilities privatization contracts. ASD (EI&E) officials stated that performance metrics are needed to improve DOD’s oversight of utilities privatization efforts. According to Standards for Internal Control in the Federal Government, it is important for management to design performance metrics and standards because they help the entity achieve its goals. For example, ASD (EI&E) officials stated that they issued a data call to the military departments in January 2017 requesting information about the performance of utilities privatization contracts. Officials noted that they received different information from each military department and did not believe that the information would enable the department to determine whether the privatized utilities systems are improving reliability or achieving the cost savings originally estimated. For example, these officials stated that some installations provided contractor performance evaluation ratings, but these ratings were anecdotal and could not be used to determine improved reliability or estimated cost savings. Air Force officials also stated that they needed performance metrics to improve their management of utilities privatization. Officials explained that the information they receive from contracting officers and contracting officer representatives specific to privatized utilities is anecdotal and qualitative, and they have no metrics in place that allow the Air Force to track the performance of utilities privatization contracts over time or to identify trends and issues that would enable the Air Force to take steps to improve utilities privatization. However, Air Force officials stated that the Air Force is working on developing a standardized reporting template, called the Monthly System Performance Report, which will enable the Air Force to track reliability for its privatized utility systems and to identify reliability trends over time. DOD’s utilities privatization program has been in place for 21 years and some information, such as the contractor-provided reliability data, is available that could be used to track performance over time. Performance metrics and standards would help ASD (EI&E) track the outcomes of the utilities privatization program. In addition, the life of utilities privatization contracts can extend to 50 years, producing a long-term, one-to-one relationship between the utility system owner and the government. The ability of ASD (EI&E), DLA, and the military departments to track performance over the life of utilities privatization contracts may help mitigate the risks of being in a one-to-one relationship with the utility system owner. 
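To illustrate the kind of metric the contractor-provided outage reports discussed above could support, the sketch below tallies unscheduled outages by year from a hypothetical set of outage records; the data structure and values are assumptions, not an actual contractor reporting format.

```python
# Hypothetical sketch of one candidate reliability metric: unscheduled outages
# per year, tallied from contractor-provided outage reports. The records and
# field names are illustrative, not an actual DOD or contractor data format.
from collections import Counter

outage_reports = [
    {"year": 2015, "type": "unscheduled", "cause": "equipment failure"},
    {"year": 2015, "type": "scheduled",   "cause": "planned maintenance"},
    {"year": 2016, "type": "unscheduled", "cause": "storm damage"},
    {"year": 2017, "type": "scheduled",   "cause": "planned maintenance"},
]

unscheduled_by_year = Counter(
    report["year"] for report in outage_reports if report["type"] == "unscheduled"
)
for year in sorted(unscheduled_by_year):
    # A declining count over successive years would indicate improving reliability.
    print(year, unscheduled_by_year[year])
```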
Without issuing guidance that requires the military departments and DLA to develop and implement metrics and measurable performance standards to track contract performance for future utilities privatization contracts and to develop similar guidance for current utilities privatization contracts, the department will lack information on the performance of utilities privatization contracts. As a result, ASD (EI&E), the military departments, and DLA may not be able to perform effective program management and oversight for these long-term utilities privatization contracts. DOD Has Cybersecurity Requirements for Industrial Control Systems, but Has Not Begun to Implement Those Requirements for Utilities Privatization Contracts DOD Has Cybersecurity Requirements for Industrial Control Systems In November 2013, DOD issued guidance in the form of a Defense Federal Acquisition Regulation Supplement clause to establish minimum requirements for safeguarding covered defense information on a contractor’s ICS. The clause requires contractors to implement a minimum set of security controls on contractor information technology and ICS, to report cyber incidents, and to support DOD damage assessments as needed. According to DOD, the Defense Federal Acquisition Regulation Supplement clause for safeguarding covered defense information is required to be added to all new solicitations and contracts as of November 2013. The clause is not required to be incorporated retroactively into DOD contracts awarded prior to 2013, but that does not preclude a contracting officer from modifying existing contracts to incorporate the clause. To implement the clause for safeguarding covered defense information, the contractor must apply a minimum set of security controls on its ICS. For the contractor to know what the appropriate security controls are, DOD first must identify what, if any, covered defense information is provided to or developed by the contractor in performance of the contract. If the requiring activity determines that covered defense information is provided to or developed by the contractor, then the contracting officer notifies the contractor by documenting what information is considered covered defense information. Then, to secure DOD’s covered defense information, the contractor must apply adequate security to its ICS on which that information resides and document, in a system security plan, how the requirements were met or how the contractor plans to meet the requirements. When requested by the requiring activity, the system security plan should be submitted to demonstrate that adequate security has been implemented. Figure 2 shows the responsibilities for identifying, marking, and securing DOD’s covered defense information on contractor information and industrial control systems. DOD Has Not Begun to Implement Cybersecurity Requirements for Utilities Privatization Contracts DOD officials stated that while they have taken steps to incorporate the clause requiring the safeguarding of covered defense information into many of their utilities privatization contracts, they have not begun to implement the cybersecurity requirement in the clause to ensure that covered defense information is appropriately safeguarded for those contracts. DLA, Army, and Air Force officials stated that they have added cybersecurity requirements to some of the utilities privatization contracts they administer, but the Navy has not. 
Specifically: DLA: According to DLA officials, of the 61 privatized utility contracts DLA manages on behalf of the Army and Air Force, officials have incorporated the clause requiring the safeguarding of DOD covered defense information into 60 contracts, and are in the process of modifying one contract to incorporate the clause. According to DLA officials, beginning in June 2015, they determined that the utilities privatization contracts needed to be modified to incorporate the cybersecurity requirements to safeguard DOD covered defense information, for two reasons. First, DLA officials stated that they interpreted DLA contracting guidance issued in 2015 to direct them to incorporate the clause into all contracts. Second, DLA officials stated that the clause should be applied to all utilities privatization contracts so that there was consistency across the program. Since the issuance of the DLA contracting guidance in 2015, DLA officials stated that they have provided direction to the utilities privatization contracting officers on multiple occasions to incorporate the clause into all contracts and plan to ensure that the remaining contracts are modified to include the clause. DLA officials stated that most of the contract modifications to include this clause were completed in 2015 and 2016; however, some modifications occurred as late as 2017. Army: Army officials who manage the Army’s other utilities privatization contracts stated that the clause requiring the safeguarding of covered defense information has been added to some contracts, but could not state definitively that the clause was added to all of the utilities privatization contracts that the Army manages. Army officials stated that Army contracting guidance issued in 2015 did not specifically address utilities privatization; however, the guidance did require that the clause be added to several different types of contracts, including all contracts for programs where officials expect covered defense information to be furnished by the government or developed by the contractor, and contracts that were active in fiscal year 2016 and later, among other contracts, or provide a rationale for not including the clause. Army officials stated that they did not know if their utilities privatization contracts contained covered defense information. However, Army officials determined that the guidance required the clause to be added to utilities privatization contracts because these contracts fell into the category of contracts that were active in fiscal year 2016 and later. Another contracting officer for several Army privatization contracts stated that he does not recall how information about the guidance to incorporate the clause into utilities privatization contracts was shared. However, he stated that the issue was discussed at utilities privatization meetings, and he believed that it was implied at these meetings that the clause should be incorporated into existing utilities privatization contracts. Air Force: The Air Force official who manages the Air Force’s utilities privatization program stated that two of the nine contracts managed by the Air Force included the clause, and the clause was being added to two additional contracts at the time of our review. Further, the Air Force stated that it was planning on adding the clause to the remaining five contracts.
An Air Force official stated that it was not clear whether the clause was required to be incorporated into all existing utilities privatization contracts. However, since DLA added the clause across all of the utilities privatization contracts it managed, the Air Force official assumed that all non-DLA managed utilities privatization contracts should do the same. Navy: Navy officials stated that they have not taken steps to incorporate the requirement into any of their utilities privatization contracts. According to Navy officials, they have not added the cybersecurity clause to the Navy’s utilities privatization contracts because they are waiting for guidance from ASD (EI&E) regarding whether the clause is necessary for all utilities privatization contracts and, if so, additional guidance on how to implement the clause. DLA, Army, and Air Force officials stated that while they have taken steps to incorporate the clause requiring the safeguarding of covered defense information into many of their utilities privatization contracts, they have not begun to implement the cybersecurity requirement for those contracts. As previously discussed, DOD acquisition guidance states that the requiring activity, which in the case of utilities privatization contracts is the military departments, must identify what information is considered covered defense information and provide that information to the contractor. However, before officials can fully implement these requirements, they must first identify what information is considered covered defense information. According to an ASD (EI&E) official, information residing on ICS associated with privatized utility systems could be considered covered defense information because it could be used by adversaries to gain insights into operations on installations or to conduct a cyberattack. For example, information about energy or other commodity usage, water or gas pressure in pipes, or the amount of chemicals that need to be added during water treatment processes might be useful information to an adversary seeking to disrupt operations on a military installation. In one example of a cyber incident on an ICS associated with the operation of a dam in New York, a threat actor repeatedly obtained information on the status and operation of the dam, including information about the water levels, temperature, and status of the gate that controls water levels and flow rates. This access would allow the attacker to remotely operate and manipulate the dam’s gate. However, in this instance, the gate had been manually disconnected for maintenance at the time of the intrusion. In another example, threat actors obtained control-level access to a water treatment ICS and altered settings that controlled the amount of chemicals used to treat tap water and water flow rates, disrupting water distribution. The activity triggered an alert within the ICS, notifying the water treatment utility to quickly identify and reverse the chemical and flow changes, largely minimizing the impact on customers. Had the threat actors been more familiar with the flow control system, the attack could have been far more consequential. However, DLA officials stated that there are currently no procedures that state what, if any, information associated with utilities privatization contracts is considered covered defense information. 
DLA officials stated that they conferred with Army and Air Force officials, and DLA’s own policy division, and reached out to ASD (EI&E) to obtain a clear definition of what information associated with DOD’s utilities privatization contracts might be considered covered defense information. DLA’s efforts to obtain clarification from ASD (EI&E) on how to implement the clause for utilities privatization contracts began in 2016. For example, in 2016, DLA officials stated they met with ASD (EI&E) officials to discuss the issue of covered defense information specific to the utilities privatization program, discussing what, if any, information on ICS associated with privatized utilities should be identified as covered defense information. Further, DLA officials asked for procedures regarding what steps to take to evaluate a contractor’s compliance with the provision. In addition, DLA officials asked privatized utilities system owners to conduct a self-assessment of the cybersecurity controls they currently use for their ICS. DLA officials stated that they provided this information to ASD (EI&E) to aid decision making on how to approach cybersecurity for these systems. However, DLA officials stated that they did not receive a clear response from ASD (EI&E). DLA officials stated that because there are no procedures that definitively state which, if any, utilities privatization-related information should be categorized as covered defense information, they have been unable to provide clear procedures to the utilities privatization contractors who must implement the clause to safeguard any such information. Moreover, according to DLA officials, some of the utilities privatization contractors were reluctant to modify the contract to incorporate the clause for safeguarding DOD covered defense information because it was unclear how it was to be implemented. Also, Navy officials stated that they have not yet incorporated the clause into any of their utilities privatization contracts because they are waiting for procedures from ASD (EI&E). In addition, DLA and military department officials stated that the current costs associated with implementing the clause are unknown. Standards for Internal Control in the Federal Government require management to evaluate security threats to information technology, which can come from both internal and external sources, and periodically review policies and procedures for continued relevance and effectiveness in addressing related risks. Information technology refers to processes that are enabled by technology, including ICS, which are computer-controlled systems that monitor or operate physical utility infrastructure, among other things. DLA and military department officials stated they have not begun to implement the requirements in the clause because they are waiting for ASD (EI&E) officials to issue procedures concerning how the military departments are to determine what, if any, covered defense information associated with utilities privatization contracts is provided or developed by the contractor in performance of the contract. Such procedures are needed to help the military departments and DLA take the appropriate steps to implement the defense acquisition regulation clause for their utilities privatization contracts and safeguard covered defense information.
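As a minimal sketch, assuming procedures are eventually issued, the example below shows one way the clause-implementation steps described earlier (incorporating the clause, determining covered defense information, and obtaining a system security plan) could be tracked for each contract; the fields and helper are hypothetical, not an actual DOD or DLA system.

```python
# Hypothetical sketch of how the clause-implementation steps described earlier
# could be tracked per contract once procedures are issued; the fields and
# helper below are assumptions, not an actual DOD or DLA system.
from dataclasses import dataclass
from typing import List

@dataclass
class UtilitiesPrivatizationContract:
    installation: str
    clause_incorporated: bool            # safeguarding clause in the contract
    cdi_determination_made: bool         # covered defense information identified
    system_security_plan_on_file: bool   # contractor documented its controls

    def implementation_gaps(self) -> List[str]:
        gaps = []
        if not self.clause_incorporated:
            gaps.append("incorporate the safeguarding clause")
        if not self.cdi_determination_made:
            gaps.append("determine what information is covered defense information")
        if not self.system_security_plan_on_file:
            gaps.append("obtain the contractor's system security plan")
        return gaps

contract = UtilitiesPrivatizationContract("Example AFB", True, False, False)
print(contract.implementation_gaps())
```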
An ASD (EI&E) official acknowledged that specific procedures concerning how the military departments are to determine what, if any, information associated with utilities privatization contracts is considered covered defense information are lacking and the office plans to update the policies. However, at the time of our review, it was not clear what that guidance will require. In the absence of a clear understanding of how to implement the clause requiring the safeguarding of covered defense information, both installation officials and some system owners reported having taken various actions to address and enhance the cybersecurity of ICS associated with privatized utility systems. For example, An Air Force installation official stated that he and an employee of the privatized utility system worked closely with the installation’s office that handles cybersecurity and followed service guidance to try to ensure mitigation of risks to and the security of the ICS. For example, officials ensured that the ICS could not be accessed remotely and that authorized users are required to use strong passwords. The Air Force official stated that the privatized utility system owner may be required to apply additional cybersecurity measures in the future, depending on what decisions are made regarding the provision to safeguard covered defense information. A Navy installation official stated that he had no knowledge of what, if any, cybersecurity practices the privatized utility system owner had implemented for the ICS it uses to help operate an electrical distribution system. However, an official from the privatized utility system owner stated that the company has adopted some cybersecurity practices, which have been audited by an independent organization for 3 of the last 4 years, and the company plans to make this a standard part of business operations. Army officials stated that the installation relies on the privatized utility system owner to employ industry practices for cybersecurity efforts. Officials from the privatized utility system owner stated that the company has robust cybersecurity practices and the ability to continuously monitor the system to detect any unusual activities. While installation officials and some system owners reported having taken some steps to address and enhance the cybersecurity of ICS associated with privatized utility systems, the lack of procedures may result in uncertainty as to whether covered defense information across utilities privatization contracts is safeguarded by the military departments and DLA. As previously reported, vulnerabilities in ICS can be exploited by various methods, causing loss of data, denial of service, or the physical destruction of infrastructure. Without procedures concerning how the military departments are to determine what, if any, covered defense information is provided to or developed by the contractor in the performance of the utilities privatization contract, the military departments and DLA may not be able to take steps to adequately and consistently protect DOD’s information associated with utilities privatization contracts. Conclusions As of January 2017, the military departments have privatized over 600 utility systems, and the Army and the Air Force have plans to privatize more systems in the coming years. 
While the military departments have some types of information on their privatized utilities, they have not tracked utilities privatization contract performance or developed measurable performance standards, as called for in Standards for Internal Control in the Federal Government. In addition, while military department officials stated that they have perceived improvements in utility system reliability since utilities privatization, the military departments have not used contractor-provided data to determine reliability trends over time. Without issuing guidance that requires the military departments and DLA to develop and implement metrics and measurable performance standards to track contract performance for future utilities privatization contracts and to develop similar guidance for current utilities privatization contracts, the department will lack information on the performance of utilities privatization contracts. As a result, ASD (EI&E), the military departments, and DLA may not be able to perform effective program management and oversight for these long-term utilities privatization contracts. DOD officials stated that they have taken steps to incorporate the clause requiring the safeguarding of covered defense information into many of their utilities privatization contracts, but they have not begun to implement the cybersecurity requirement for those contracts. DLA, Army, and Air Force officials stated they have not begun to implement the cybersecurity requirement for those contracts that include the clause because ASD (EI&E) has not issued specific procedures regarding how the military departments are to determine whether covered defense information is provided to or developed by the contractor in the performance of the utilities privatization contract. The lack of procedures may result in uncertainty as to whether covered defense information across utilities privatization contracts is safeguarded by the military departments and DLA. As previously reported, vulnerabilities in ICS can be exploited by various methods, causing loss of data, denial of service, or the physical destruction of infrastructure. Without procedures concerning how the military departments are to determine what, if any, types of information are considered covered defense information and are provided to or developed by the contractor in the performance of the utilities privatization contract, the military departments and DLA will not be able to adequately and consistently protect DOD’s information associated with utilities privatization contracts. Recommendations for Executive Action We are making two recommendations to the Secretary of Defense. The Secretary of Defense should ensure that the Assistant Secretary of Defense for Energy, Installations, and Environment, in consultation with the military departments, issues guidance requiring the military departments and DLA to develop and implement performance metrics and measurable performance standards to track utilities privatization contract performance for future utilities privatization contracts, and develops similar guidance for current utilities privatization contracts.
(Recommendation 1) The Secretary of Defense should ensure that the Assistant Secretary of Defense for Energy, Installations, and Environment (a) issues procedures concerning how the military departments are to determine what constitutes covered defense information and what, if any, of this information is provided to or developed by the contractor in the performance of utilities privatization contracts, and (b) takes appropriate steps to protect such information. (Recommendation 2) Agency Comments and Our Evaluation We provided a draft of this report to DOD for review and comment. In written comments, DOD concurred with both of our recommendations. DOD’s comments are reprinted in their entirety in appendix II. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretaries of the military departments. In addition, the report is available at no charge on our website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact Brian Lepore at (202) 512-4523 or LeporeB@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Selected Characteristics of the Department of Defense’s (DOD) Utilities Privatization Contracts Included in GAO’s Review This appendix provides information on the nine utilities privatization contracts that we selected as case studies to review. Each of seven contracts privatized one utility system, and each of two contracts privatized two utility systems, for a total of 11 utility systems covered by the nine contracts. Table 2 lists selected characteristics of each contract. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Kristy Williams (Assistant Director), Michael Armes, John Bauckman, Emily Biskup, Vincent Buquicchio, Carolynn Cavanaugh, Desiree Cunningham, Michael Gilmore, Simon Hirschfeld, Gina Hoover, Kush Malhotra, Richard Powelson, and Jack Wang made key contributions to this report. Related GAO Products Defense Infrastructure: Actions Needed to Strengthen Utility Resilience Planning. GAO-17-27. Washington, D.C.: November 14, 2016. Defense Infrastructure: Improvements in DOD Reporting and Cybersecurity Implementation Needed to Enhance Utility Resilience Planning. GAO-15-749. Washington, D.C.: July 23, 2015. Defense Infrastructure: Actions Taken to Improve the Management of Utility Privatization, but Some Concerns Remain. GAO-06-914. Washington, D.C.: September 5, 2006. Defense Infrastructure: Management Issues Requiring Attention in Utility Privatization. GAO-05-433. Washington, D.C.: May 12, 2005.
Why GAO Did This Study Since Congress provided statutory authority in 1997 for the privatization of utility systems at military installations, the military departments have privatized over 600 utility systems. According to DOD officials, utilities privatization enables military installations to obtain safe, reliable, and technologically current utility systems at a relatively lower cost than they would under continued government ownership. The Senate report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2018 included a provision that GAO review DOD's utilities privatization program. This report assesses the extent to which DOD has (1) tracked utilities privatization contract performance and developed measurable performance standards, and (2) implemented cybersecurity guidance for industrial control systems associated with privatized utility systems. GAO reviewed relevant policies and internal control standards, analyzed a non-generalizable sample of utilities privatization contract documents, and interviewed DOD and selected military installation officials and privatized utility system owners. What GAO Found The military departments have some types of information about privatized utility systems, but they have not tracked contract performance or developed measurable performance standards for these contracts. Specifically: Costs for Utility Infrastructure Improvements: The military departments estimated the cost avoidance at the time of contract award; however, none of the military departments have determined whether the utilities privatization contracts are on track to achieve those estimates. Costs for Utility Commodities: Military department officials stated that they have observed reduced usage of commodity utilities, such as water usage, and thus decreased commodity costs, through utilities privatization; however, the officials have not tracked the data and any associated savings. Furthermore, the officials have not determined whether any savings were fully attributable to utilities privatization, recognizing that other factors may have affected commodity usage. System Reliability: Military department officials stated that they have perceived improvements in utility system reliability since utilities privatization and have access to contractor-provided data to assess reliability; however, the military departments have not used this data to determine reliability trends over time. Contractor Performance Evaluations: The military departments use the Contractor Performance Assessment Reporting System to evaluate each utility system owner's performance; however, based on GAO's review of the evaluations associated with the contracts in its sample, the evaluations were anecdotal and varied in frequency and quality. Department of Defense (DOD) guidance does not require the development of metrics and associated measurable performance standards to track utilities privatization contract performance. Without a requirement to develop these metrics and standards, DOD will lack information on the performance of utilities privatization contracts and thus may not be able to perform effective program management and oversight for these long-term contracts.
DOD has taken steps to add a cybersecurity clause to its utilities privatization contracts that requires contractors to take steps to ensure safeguards are put in place to protect covered defense information, which is defined as information that is processed, stored, or transmitted on the contractor's information system or industrial control systems. To implement the clause, DOD first must identify what, if any, covered defense information is provided to or developed by the contractor in performance of the contract. However, the Defense Logistics Agency (DLA) and military department officials stated that they have not begun to implement the clause because they need DOD to issue procedures concerning how the military departments are to determine what, if any, covered defense information associated with utilities privatization contracts is provided or developed by the contractor in performance of the contract. Without these procedures, the military departments and DLA will not have assurance that such information is being safeguarded. What GAO Recommends GAO recommends that DOD issue guidance requiring the military departments and DLA to develop metrics to track utilities privatization contract performance, and issue procedures concerning how the military departments are to determine what constitutes covered defense information as it relates to utilities privatization contracts. DOD concurred with both recommendations.
gao_GAO-18-515
gao_GAO-18-515_0
Background USPS undertakes capital-spending projects for a number of reasons, and more than one reason may apply to a single project. According to USPS documentation on its capital spending processes, these reasons include: to support USPS’s organizational objectives and strategic plan, to help sustain existing operations and meet USPS’s universal service obligation, to protect the health and safety of employees and customers or meet legal requirements, or to generate a positive return-on-investment—such as by increasing revenues or decreasing costs—thus improving USPS’s finances. USPS generally categorizes its capital spending in four broad categories: vehicles, facilities, information technology and other, and mail processing equipment, as shown in figure 1. USPS has processes for setting an annual capital-spending budget and approving specific capital projects. USPS prepares an annual capital-spending budget as part of its annual organization-wide budget. According to USPS documentation on its capital spending process and USPS officials, the process includes the following steps: In advance of each fiscal year, USPS’s Finance and Planning Department reviews estimated revenues and expenses to determine an appropriate total capital-spending budget. Next, USPS’s Executive Leadership Team and the leadership of relevant departments develop requests for each department’s estimated capital-spending needs for the upcoming year, including a ranking of desired projects. These lists provide information on projects’ purposes, estimated capital and operating expenses, potential return-on-investment, and relationship to USPS’s strategic initiatives. The Finance and Planning Department then reviews these lists in light of the previously determined total capital-spending budget and sets a capital spending budget for each of the broad categories of capital spending. The Executive Leadership Team votes on this preliminary capital spending budget, which, if approved, is included in the organization-wide budget that is subject to approval by USPS’s Board of Governors. The budget approved by the Board of Governors includes the total and categorical capital-spending budget, but does not include approvals for specific projects. According to USPS officials, USPS also uses these capital-spending requests, along with other information, such as historical capital-spending data and information on already identified specific future capital-spending projects, to annually update a 10-year projection of capital spending. USPS uses this 10-year projection to estimate USPS’s potential future capital spending and requirements for capital project cash outlays. USPS also has processes for approving specific capital projects. Project sponsors—those departments that wish to undertake a capital-spending project—must obtain approval from different groups within USPS to initiate capital projects. According to USPS documentation, the level of approval required depends on the estimated total cost of the project: Total costs over $5 million: The project sponsor must submit a DAR to USPS’s Investment Review Committee for review. DARs contain estimated project cost, return-on-investment, and other information used to justify the project. If the committee approves, it makes a recommendation to the Postmaster General for final approval.
USPS’s Office of Inspector General also reviews and assesses the adequacy and the depth of the information in the DAR, assesses whether the project is in USPS’s best business interest, and may provide input to the Investment Review Committee, which may take that information into consideration when reviewing projects. Total costs from $1 million to $5 million: The project sponsor must submit a DAR to USPS’s Technical Review Committee for review and approval. Total costs under $1 million: The project is reviewed by USPS’s Finance and Planning department, and approval is subject to the level of budgetary resources available. USPS does not require a DAR for these projects, although the process involves other documents, such as a one-page “Justification of Expense” that is required for many of the projects. USPS faces organization-wide uncertainty that may affect its capital spending. We define “organizational uncertainty” as those uncertainties— such as business, budgetary, legislative or regulatory, or other conditions—that may affect USPS’s ability to remain competitive and achieve its mission. For example, in the absence of adequate revenues that would cover all of USPS’ expenses, these uncertainties may affect the extent to which USPS can undertake its identified capital-spending plans. According to USPS, organizational uncertainties include the following: Business uncertainty includes potential changes to USPS’s business and the market for its products and services. Such uncertainty may be affected by changing customer preferences—such as continuing diversion of First Class Mail to electronic alternatives (e.g., e-mail or online banking)—and increased competition for package shipments. Budgetary uncertainty includes potential uncertainty and changes to revenues and expenses that affect USPS’s finances. Legislative or regulatory uncertainty includes potential actions intended to address some of USPS’s financial challenges. For example, postal reform legislation has been introduced that, if enacted, could improve USPS’s financial position. Both H.R. 756 and S. 2629 propose to relieve USPS of some of its retiree health and pension obligations and provide a reinstatement of a partial rate surcharge. Similarly, the Postal Regulatory Commission—an independent establishment of the executive branch that regulates USPS— is considering providing USPS with additional flexibility on pricing, which could also improve USPS’s finances. According to USPS documentation on capital-spending processes as well as DARs for individual capital-spending projects, capital-spending projects also can face project-specific risks, such as the following: Technological risks, which include complexity, quality, and security concerns: For example, capital projects deploying new technology intended to increase operational efficiency may face the risk that the new technology could become obsolete given future technological advances. Operational risks, which include maintenance and performance of projects: For example, equipment purchased as part of a capital project could involve the risk that it may not perform as expected. Integration risks, which include network and system integration and user acceptance of projects: For example, a project involving new retail technology may face the risk that USPS’s customers will not accept the new technology, and, as a result, the project does not meet its target for customer use. 
USPS Plans to Increase Capital Spending, but Business Uncertainties Will Likely Involve Prioritization against Other Business Needs and among Capital Projects According to USPS, the organization has critical capital-spending needs after years of reduced capital spending. Starting in fiscal year 2009, USPS sharply decreased its capital spending for several years, in response to decreased volume and revenues; however, USPS now plans to increase its spending. Specifically, USPS projects average annual capital-spending cash outlays of $2.4 billion from fiscal years 2018 through 2028—about 70 percent more than the average of $1.4 billion from fiscal years 2007 through 2017. (See fig. 2.) While this projected spending is largely driven by plans to acquire a new fleet of delivery vehicles, USPS also projects increased spending in the other categories of facilities, information technology, and mail-processing equipment. In addition, while some of USPS’s planned capital spending is intended specifically to generate a return-on-investment—such as by increasing revenues or decreasing costs—much of USPS’s planned capital spending is to help sustain operations. Specifically, according to our analysis of USPS data, roughly 80 percent of USPS’s projected capital spending for fiscal year 2018 is for projects intended to help sustain operations. Vehicles: Spending Planned to Replace Aging Fleet In its latest projection of capital spending, covering fiscal years 2018 through 2028, USPS projects an annual average of roughly $821 million on capital spending for vehicles, primarily driven by a multi-year acquisition of new delivery vehicles starting in fiscal year 2019. According to USPS officials, USPS decided a number of years ago to defer purchasing new delivery vehicles and instead continued using and maintaining its existing fleet. Because USPS started acquiring most of its existing delivery fleet in 1987, the majority of its delivery vehicles are several decades old. USPS officials said these vehicles incur high maintenance costs, averaging about $4,500 per vehicle annually. In acquiring new vehicles, USPS plans to take a number of steps to ensure that the vehicles best meet the organization’s needs. According to USPS officials, it will spread the acquisition over multiple years to avoid a large cash outlay in any given year and to enable USPS to modify the vehicle purchases over time to take advantage of any technological changes, such as advances in alternative fuel technologies. Officials added that USPS is considering vehicles that will encourage operational efficiencies. For example, USPS is considering taller vehicles that will better allow carriers to handle trays of mail and packages. The officials also noted that USPS may consider different vehicle designs for different market needs. The officials said that USPS is currently testing various vehicle prototypes and has not decided on any one vehicle design at this time. In total, USPS projects that its acquisition of new delivery vehicles will require about $5.7 billion in capital-spending cash outlays distributed over a number of years. In addition to its planned future acquisition of delivery vehicles, USPS has also conducted smaller acquisitions of vehicles in recent years. 
According to USPS officials, in the past few years USPS has been replacing most of its non-delivery vehicles and will have done so by 2019, while also purchasing a small number of delivery vehicles to replace ones that have exceeded their useful life or to serve route growth. For example, in April 2017 USPS approved a capital spending project to purchase more than 2,000 cargo vans used to transport large volumes of mail from postal plants to post offices and other facilities, and about 375 spotter vehicles used to move trailers among docks at processing facilities. In May 2017 USPS approved a capital spending project to purchase approximately 8,000 off-the-shelf delivery vehicles needed to serve route growth and replace existing high-maintenance-cost vehicles. (See fig. 3.) Facilities: Spending Primarily Intended for Repairs of Existing Facilities USPS projects an annual average of about $607 million in capital spending for facilities from fiscal years 2018 through 2028. According to USPS officials, USPS faces little need for capital spending on new facility construction given changes to USPS’s business, such as decreasing mail volumes. As a result, most of USPS’s projected capital spending is for rehabilitation and repair of existing facilities, such as the replacement of roofs or heating, ventilation, and air-conditioning systems needed to sustain operations. For example, in December 2016, USPS approved a capital spending project to replace the roof at a mail processing facility in Tulsa, Oklahoma. USPS had concluded that the roof was in a state of failure, and there were no economically feasible repair options. In addition, in 2017 USPS approved a capital spending project to repair facilities in the U.S. Virgin Islands damaged during Hurricane Maria. Although most facilities spending is related to rehabilitation and repair, some USPS capital spending is on new facilities. According to USPS officials, new facilities projects are generally approved because of the need to completely replace an existing facility that is beyond repair or to construct a new facility that will replace multiple existing facilities. For example, in May 2017 USPS approved a capital spending project to construct a mail-processing facility in Nashville, Tennessee. The facility is intended to replace and close four existing facilities, which will eliminate space deficiencies, reduce transportation costs, and improve operating efficiencies. In addition, according to USPS officials, USPS may need to make capital spending investments in facilities to accommodate growth in package volume, should that growth continue. Information Technology and Other: Spending Intended to Support USPS’s Network and Cybersecurity Efforts USPS projects an annual average of about $541 million in capital spending for information technology and other capital projects, such as customer support equipment, from fiscal years 2018 through 2028. Information technology spending, which makes up an average of 98 percent of the projected spending in this category from fiscal years 2018 through 2028, is intended to maintain the infrastructure used to support USPS and provide security from cyber-threats, among other things. According to USPS officials, while it is difficult to project capital spending on information technology because future needs are uncertain, they can more accurately predict some future needs, such as hardware replacement.
For example, there is a baseline of projected costs to replace servers because USPS knows the length of the technologies’ useful lives and when they will need to be replaced. According to USPS officials, while much of its capital spending on information technology is intended to replace outdated servers and other hardware, some spending is for developing new information technology systems. For example, in March 2017 USPS approved a capital spending project to purchase 67 video conferencing systems intended to increase productivity and encourage collaboration among USPS offices. In addition, USPS officials told us that in recent years USPS has undertaken more capital spending than expected on cybersecurity, a trend that will likely continue for the next few years. According to a DAR for cybersecurity investments, USPS is undertaking such investments to proactively identify and respond to security threats that have the potential to cause financial or other damage to the organization’s assets or employees, including threats that could disrupt or destroy information. Capital spending on information technology can also support USPS’s strategic goals and provide a positive return-on-investment. For example, in January 2017 USPS approved additional capital spending to support development of its Informed Visibility program, which is a system that provides tracking and reporting of mail shipments for commercial mailers. According to the Informed Visibility DAR, these capabilities will provide users with access to valuable business information, helping improve operational efficiencies and marketing, among other things. According to the DAR, Informed Visibility will also provide a positive return-on-investment by eliminating some redundant costs and programs. Mail-Processing Equipment: Projected Spending Intended to Increase Automation and Efficiency USPS projects an annual average of about $464 million on capital spending for mail-processing equipment from fiscal years 2018 through 2028. USPS intends to maintain or replace existing aging equipment used to process mail and purchase new equipment that USPS expects will increase efficiency and provide other business benefits. According to USPS officials, equipment projects can also generate a positive return-on-investment in a number of ways, such as by increasing automation to reduce costs or by improving customer service. For example, in August 2017 USPS approved a capital spending project to provide new control systems for about 1,000 bar code sorter machines that USPS expects will decrease mail-processing costs. Some of USPS’s mail-processing equipment investments may also specifically address the growing market for package shipments. For example, in July 2017 USPS approved a capital spending project for upgrades to automated package-processing machines—upgrades intended to reduce package-handling costs and improve collection of data on when and where packages are processed. USPS first deployed these machines in 2004. According to the DAR, by 2017, the machines were nearing the end of their useful life, resulting in reduced reliability. USPS’s Projected Capital Spending Will Likely Involve Prioritization Decisions Although USPS is projecting increased capital spending over the next 10 years, it has reported that it faces uncertainties, such as the level of future revenues, that could affect its ability to undertake planned and projected spending.
USPS faces continuing declines in First Class Mail volume, and while it has experienced increased volume in packages, future increases in package volume are uncertain. Specifically, according to USPS, some of its major shipping customers are now building their own delivery capability that may enable them to divert some package shipments away from USPS. USPS has also stated that it faces challenges in ensuring that future operations generate sufficient revenues to support planned capital spending and that it is constrained in its ability to reduce costs. We have previously testified that USPS continues to face a serious financial situation with insufficient revenues to cover its expenses. This uncertain financial outlook may result in USPS changing its current capital-spending plans, including setting new priorities across its planned projects and other business needs. These prioritization decisions can involve tradeoffs among projects and between capital and operations spending. USPS has already faced these types of tradeoffs, as in fiscal year 2017, when it did not make $6.9 billion in required prefunding payments for retiree health and pension benefits, stating that it lacked sufficient cash to make those payments while ensuring it could continue to provide service, and stating that it required sufficient cash reserves for capital spending. While USPS officials noted that USPS must always make prioritization decisions regarding capital spending, its financial future may make such decisions more critical given its currently projected increased capital spending. For example, unless USPS increases its revenues or decreases other expenses, such prioritization decisions may involve USPS undertaking less future capital spending than it currently projects over the next 10 years. Further, even if USPS’s financial situation were to dramatically improve, USPS may not necessarily undertake more capital spending than currently projected, because of significant other business needs, such as funding operating expenses. Should USPS have more resources than expected in the coming years, though, USPS may be able to make fewer tradeoffs regarding capital spending. Various Processes Support USPS’s Ability to Address Uncertainties and Risks That Affect Capital Spending USPS Has Processes Designed to Identify Uncertainties and Risks That Affect Capital Spending USPS has processes that can help it to identify uncertainties and risks that could affect its capital spending and adjust its spending to changing circumstances. USPS has adopted the Committee of Sponsoring Organizations of the Treadway Commission’s (COSO) internal control framework, which includes how organizations should address uncertainties and risks. Specifically, this framework states that organizations should identify uncertainties and risks to the achievement of their objectives and analyze these uncertainties and risks to determine how they should be managed. Additionally, COSO’s internal control framework asserts that organizations should not only identify and analyze uncertainties and risks but also assess any changes in conditions that could affect the organization including its capital spending. Identifying and Analyzing Organizational Uncertainties USPS has processes for identifying and analyzing organizational uncertainties, such as business and budgetary uncertainties, which can affect capital spending. These processes align with aspects of COSO’s internal control framework. 
For example, according to USPS documentation on its strategic-planning process, USPS conducts a business environment assessment and an enterprise risk assessment every 3 years to identify its organizational uncertainties, such as the effect of changes in the number of delivery points or mail volume. Additionally, USPS has processes to analyze the effects of its organizational uncertainties. For example, some department managers analyze the potential effects of organizational uncertainty by modeling different scenarios to help inform their department’s capital-spending decisions. For instance, USPS officials stated that the vehicles department models the interactions among key variables—such as stabilizing or declining mail volume, route structures, and vehicle cargo sizes—as it considers various vehicle acquisition options. In addition, USPS facilities department officials told us that they plan to develop a model to consider how key variables, such as mail volume, affect USPS’s facility needs. In addition to identifying and analyzing the potential effects of organizational uncertainties, USPS also has processes for assessing changes in these organizational uncertainties. For example, USPS documentation shows that USPS leadership holds a monthly business review meeting in which officials discuss any changes in internal conditions, such as labor costs, or external conditions, such as mail volume, that could affect the organization and, when applicable, how these conditions could affect capital spending. Officials told us that USPS also distributes a survey every 18 months to internal and external stakeholders to obtain perspectives on changes, if any, in some of the conditions addressed by USPS’s strategic plan. The survey also covers other conditions such as uncertainty about the extent to which USPS will have funds to maintain, repair, and replace infrastructure. Identifying and Analyzing Project Risks Individual capital projects face inherent risks—such as technological, operational, and integration risks. We found that USPS’s capital-spending processes align with aspects of COSO’s internal control framework by incorporating processes to identify and analyze project-specific risks through the use of DARs. As discussed earlier, USPS’s capital spending processes require DARs to justify proposed capital projects with total costs of $1 million or more. Specifically, internal USPS guidelines state that DARs should identify the technological, operational, and integration risks that could affect capital projects and any tradeoffs related to potential alternatives to the proposed capital project. For example, we reviewed one DAR for mail-processing equipment that explained that the project has a low level of operational risk, noting that the new equipment will not require training for operators, thus avoiding potential costs and delays associated with training. Another DAR we reviewed for a project intending to improve the customer experience and reduce costs through more efficient staffing at retail locations identified integration risks and noted that the project's proposed deployment schedule might not allow time for delays. USPS leadership may also request additional analyses to verify, or support, information in a DAR before deciding whether to approve a project.
For example, according to documentation we reviewed, USPS leadership recently requested that its Finance and Planning division review economic data, such as population growth rates, to confirm the economic growth projections used in support of a DAR for a new facility in Bismarck, North Dakota. USPS Has Processes Designed to Respond to Identified Uncertainties and Risks That Affect Capital Spending We found that USPS has processes that are designed to help it respond to identified organizational uncertainties, specifically future budgetary uncertainty. According to OMB’s Capital Programming Guide, capital spending “...should be consistent with the level of future budgetary resources that will be available.” USPS officials said USPS seeks to minimize the budgetary uncertainty that capital spending will outpace available resources by developing its annual capital-spending budget as part of USPS’s overall annual budget. As a result, USPS can determine an annual capital spending budget based on the most recent conditions, including the most recent revenue forecasts, and consider possible tradeoffs—such as those between capital spending and other spending needs such as operating expenses. Further, while the creation of a capital-spending budget establishes capital-spending levels, the process does not commit capital spending on any particular project. Instead, USPS reviews and approves new capital projects throughout the fiscal year, allowing USPS to make capital-spending decisions based on its most current financial condition, which may have evolved during the fiscal year. After USPS has set the annual capital spending budget, USPS’s capital-spending process also allows the organization to respond to any changes in its financial outlook, business environment, or other organizational uncertainties that might occur during the fiscal year. As stated previously, USPS’s capital spending budget establishes capital spending levels for the fiscal year and does not include approvals for specific projects. Project sponsors must obtain approval from different groups within USPS to initiate capital projects. USPS may approve less capital spending for capital projects than budgeted for at the start of the year. Our analysis of capital-spending cash outlays from fiscal years 2007 through 2017 shows that on average, USPS spent about 18 percent less than was budgeted for at the start of each year. According to USPS officials, capital spending can be below budgeted levels for a variety of reasons. USPS may shift strategic priorities based on business conditions and cancel or delay some planned projects that it determines are no longer aligned with its priorities. For example, USPS canceled a previously approved centralized distribution facility project in Brooklyn, New York, and decided to look for less costly alternatives to support the area’s increased package processing needs. Also, officials stated that projects could come in below budget because of a reduction in project scope or because a multi-year project falls behind schedule and has lower cash outlays in a given year than were planned. In other instances, USPS’s capital-spending approval process provides flexibility to re-allocate capital funds as USPS identifies and assesses changing conditions that affect the organization, or when contingencies or emergencies arise. For example, according to USPS officials, as USPS monitors the economic indicators that affect its business, the indicators may signal an increase in package volume.
USPS might respond by allocating more capital toward additional purchases of package-sorting equipment. According to USPS officials, USPS’s capital-spending process also allows USPS to respond to contingencies. In fiscal year 2017, USPS approved capital spending to repair facilities in the U.S. Virgin Islands damaged during Hurricane Maria. (See fig. 4.) In the event that such unplanned projects arise to repair damages or are required for safety, project sponsors can expedite the capital spending approval process, such as by submitting an advance funding request to USPS. In addition to having processes to respond to organizational uncertainties, we also found that USPS has processes for responding to the risks affecting individual capital projects. According to USPS documentation, capital projects with total costs of over $5 million are reviewed at certain stages in their implementation to assess any changes, including changes in the return-on-investment, timeline, and performance of the projects. USPS may alter project specifications or time frames to respond to these changes. During the implementation stage of some major capital projects, such as the installation of mail-processing equipment, departments may initially test a limited number of units with the option to request the purchase of additional units if the tests are successful. Additionally, some major capital projects, such as the replacement of USPS’s delivery vehicles, require acquisitions over multiple years, which, USPS officials told us, can be used to limit risk. As mentioned earlier, USPS is planning to replace its fleet by purchasing vehicles over a number of years, potentially allowing it to capitalize on technological advances that may develop over the time period. After a capital project is complete, USPS has a process for reviewing the results as a way to inform and improve future capital-spending decisions, including better addressing project risks. USPS’s capital-spending process requires USPS to evaluate capital projects with total costs over $25 million after project completion, reviewing the cost, schedule, and performance results of these projects. For example, in November 2017, USPS discussed the results of two package processing and sorting projects that experienced delays associated with accommodating new equipment at the facilities due to design issues. As a result, USPS recommended that project sponsors conduct more research about any site-specific risks before submitting a DAR for future package processing and sorting projects. In addition, USPS’s Office of Inspector General prepares an annual capital-project-compliance report that evaluates the soundness of USPS’s capital spending. According to USPS officials, the organization considers the results of these reports and seeks to address any resulting recommendations. For example, we reviewed documentation explaining that, in response to one recent Office of Inspector General recommendation, USPS stated it would revise its capital spending guidance to define review and approval procedures, validation, and compliance report requirements for all investments. Agency Comments We provided a draft of this report to USPS for review and comment. USPS provided a written letter (see appendix II) in which USPS provided no comments. Via email, USPS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to interested congressional committees and the Postmaster General. 
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or rectanusl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology Our objectives for this report were to (1) describe the U.S. Postal Service’s (USPS) projected capital spending over the next 10 years and (2) assess whether USPS’s processes support its ability to address uncertainties and risks that affect its capital spending. For our second objective, our scope was limited to assessing whether USPS had designed processes; that is, we did not assess the quality of any analyses that USPS conducted regarding risks or any determinations that USPS made regarding capital-spending projects, as this was beyond the scope of our review. Such assessments are routinely conducted by the USPS Office of Inspector General. To address USPS’s planned capital spending over the next 10 years, we reviewed USPS data on capital spending from fiscal years 2007 through 2017 and USPS documentation on projected capital spending from fiscal years 2018 through 2028. In both cases, we focused on a fiscal year’s actual or projected capital-spending cash outlays—or the amount of cash spent on capital projects—as opposed to capital-spending commitments made in that fiscal year. For historical data, we used data from USPS’s annual budgets, known as Integrated Financial Plans, for fiscal years 2008 through 2018. Each annual budget contains data on actual capital spending levels from prior fiscal years. The annual budgets generally report capital spending in four broad categories: vehicles, facilities, information technology and other, and mail-processing equipment. Because the categories used in past annual budgets were not consistent, we recategorized some years’ spending to be consistent. Specifically, we considered “mail-processing equipment” or “equipment” as part of “mail-processing equipment.” We considered “infrastructure and support,” “information technology and other,” and “customer service and support equipment” as part of the “information technology and other” category. The past budgets consistently used “facilities” and “vehicles” categories. We obtained input from USPS officials on our recategorizations. To determine the reliability of these data, we reviewed the data for any obvious errors, reviewed relevant documentation, and interviewed officials. We determined that these data were sufficiently reliable for the purposes of reporting on USPS’s past capital spending. For information on USPS’s projected capital spending from fiscal years 2018 through 2028, we reviewed USPS’s 10-year capital-spending forecast for those years, which USPS created in 2017. This 10-year forecast is a projection of capital spending, but is not a commitment for any level of investment. The 10-year forecast categorizes capital spending projects into the following categories: construction and building purchases, building improvements, mail processing equipment, vehicles, capitalized software, customer service equipment, and postal support equipment.
For our analysis, we combined “postal support equipment,” “information technology,” and “customer service equipment” into one overall “information technology and other” category, and “construction and building purchases” and “building improvements” into one overall “facilities” category. USPS officials agreed with this approach. To determine the reliability of these data, we interviewed USPS officials, reviewed data for any obvious errors, and reviewed relevant documentation. We determined that these data were sufficiently reliable for the purposes of providing information on USPS’s projected capital spending. In addition, we interviewed four USPS vice presidents in charge of the departments that correspond with the four broad categories of capital-spending investments about historic, ongoing, and projected capital spending. We also selected and reviewed a non-generalizable sample of 14 Decision Analysis Reports (DAR)—internal USPS documents used to justify and obtain approval for some proposed capital-spending projects— of the 66 approved by USPS for fiscal year 2017 and part of fiscal year 2018. USPS requires DARs for all proposed capital spending projects with a total project cost of at least $1 million. The DARs contain information on, among other things, project specifications, purpose, risks and tradeoffs, and timeframes. We reviewed the DARs for this and other information; we did not review the quality of the analyses contained in the DARs. We obtained a list of all approved DARs for fiscal years 2017 and 2018 and selected DARs of the two largest and two smallest capital projects by total value in each of the four categories (i.e., vehicles, facilities, information technology and other, and mail processing equipment). Because the vehicles category had only two approved DARs at the time we received the list of approved DARs from USPS, we reviewed 14 DARs instead of 16. While the information from our reviews cannot be generalized to all DARs, the information provides insights into USPS’s reasons for undertaking capital spending projects. To assess whether USPS has processes that support its ability to address uncertainties and risks that affect its capital spending, we reviewed USPS documentation, including USPS’s policies and procedures for capital spending, internal guidance documents, and others related to processes that affect its capital spending. We identified criteria for addressing uncertainties and risks, including those specific to capital spending. Specifically, we identified criteria from the Committee of Sponsoring Organizations of the Treadway Commission’s (COSO) Internal Control-Integrated Framework (the internal control standards adopted by USPS) and the Office of Management and Budget’s Capital Programming Guide. COSO Principle 7 states, “The organization identifies risks to the achievement of its objectives across the entity and analyzes risks as a basis for determining how the risks should be managed.” Further, COSO Principle 9 states, “The organization identifies and assesses changes that could significantly affect the system of control.” The Office of Management and Budget’s Capital Programming Guide element I.1.1 states, “The plan should also be consistent with the level of future budgetary resources that will be available.” We evaluated USPS’s processes that affect capital spending against these criteria to determine whether USPS had designed processes to address uncertainties and risks related to capital spending. 
We did not review the capital spending projects USPS has undertaken to determine, for example, if USPS made appropriate decisions regarding selected projects. We also interviewed USPS officials regarding USPS’s capital- spending processes. Specifically, we interviewed officials with USPS’s Capital Investment and Business Analysis Department; Finance and Planning Department; Technical Analysis, Accounting, and Finance Department; and the four vice presidents mentioned above about how they address uncertainties and risks related to capital spending within their departments. We conducted this performance audit from September 2017 to June 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the U.S. Postal Service Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact above, Kyle Browning and Faye Morrison (Assistant Directors); Matthew Rosenberg (Analyst in Charge); Amy Abramowitz; Sara Ann Moessbauer; Josh Ormond; Joshua Parr; Amy Rosewarne; and Crystal Wesco made key contributions to this report. Also contributing to this report were Carol Henn, Sabine Paul, and Carolyn Voltz.
Why GAO Did This Study USPS faces significant financial challenges as it continues to experience declining mail volumes and revenues. Capital spending is needed to support USPS's operations, but can be affected by various uncertainties and risks, such as those related to future business activities and revenues. In the past, USPS has reduced its capital spending in response to declining revenues. GAO was asked to review USPS's capital-spending plans and examine how its capital-spending processes address uncertainties and risks. This report: (1) describes USPS's projected capital spending over the next 10 years and (2) assesses whether USPS's processes support its ability to address uncertainties and risks that affect its capital spending. GAO reviewed USPS data and information on actual capital spending from fiscal years 2007 to 2017 and projected capital spending for fiscal years 2018 through 2028. GAO also reviewed USPS reports on 14 approved capital projects in fiscal years 2017 and 2018, selected to provide a mix of project type and value; examined documentation related to USPS's processes that affect capital spending and compared USPS's processes to internal control standards adopted by USPS; and interviewed USPS officials. On a draft of this report, USPS provided technical comments, which GAO incorporated as appropriate. What GAO Found The United States Postal Service (USPS) projects increased capital spending over the next 10 years. According to USPS, this spending will support its mission and improve its financial position. USPS projects average annual capital cash outlays of $2.4 billion from fiscal years 2018–2028—about 70 percent more than the $1.4 billion average from fiscal years 2007–2017 (see figure). For example, USPS plans to acquire a new fleet of delivery vehicles starting in 2019 to replace its aging existing fleet and plans to purchase new mail-processing equipment to increase efficiency. However, USPS faces a serious financial situation with insufficient revenues to cover expenses. This uncertainty may result in USPS's making capital-spending prioritization decisions that can lead to tradeoffs across planned capital projects and potentially between capital spending and other organizational needs such as operational expenses. Such prioritization could lead to USPS's undertaking less capital spending than currently projected in the absence of increased revenues or decreased expenses. USPS has processes that help it identify the uncertainties and risks that may affect its capital spending and adjusts its capital spending accordingly, in line with internal control standards adopted by USPS. For example, USPS identifies organizational uncertainties, such as mail volumes and revenues, as part of its strategic planning process and considers them when creating its capital spending budget. It also identifies individual project risks through a project review process, and considers tradeoffs inherent in different project scenarios. USPS's processes also allow it to respond to these uncertainties and risks. Specifically, USPS sets a capital-spending budget in its overall financial plan, to help ensure that spending is in line with expected resources. USPS's process also allows it to shift funds if needed, such as to repair a facility damaged during a natural disaster. USPS also reviews individual capital projects during implementation and can change specifications or time frames based on changing circumstances.
Background Stakeholders Involved in the Federal Restitution Process Victims of federal crimes may be compensated for their losses through criminal proceedings when a federal court orders restitution pursuant to statute. Restitution is part of the sentencing process for federal offenders and there are four key groups of stakeholders involved: Judges and court officials. The federal judiciary consists of a system of courts that has the critical responsibility of ensuring the fair and swift administration of justice in the United States and handles all federal civil, criminal, and bankruptcy cases and reviews of federal administrative agency cases throughout the country. The federal courts have various responsibilities in the restitution process. Federal judges are responsible for ordering the proper amount and type of restitution, including payment schedules and modifications. Federal probation officers are responsible for the presentence report, which must include information for the court to use in fashioning a restitution order, including, among other things, a complete accounting of the losses to each victim and information about the economic circumstances of each defendant. Following a defendant’s sentencing and the imposition of restitution, probation officers supervise offenders to ensure compliance with orders for restitution, including conducting ongoing financial investigations, supporting U.S. Attorneys’ Offices in the collection and enforcement of restitution orders, notifying the federal court of an offender’s failure to pay outstanding restitution, and making recommendations to amend orders based on changes in an offender’s ability to pay. Clerks of federal courts are responsible for receipting and disbursing restitution payments and notifying DOJ of such. In addition, the Director of the Administrative Office of the U.S. Courts has statutory responsibilities related to the restitution process, including establishing procedures and mechanisms within the judicial branch for processing fines, restitution, forfeitures of bail bonds or collateral, and assessments. The Judicial Conference of the United States, a body of 27 judges over which the Chief Justice of the United States presides, is the judiciary’s principal policymaking body and operates through a network of committees created to address and advise on a wide variety of subjects such as budget, criminal law, and court administration. Given the role of the judiciary in the restitution process, the Judicial Conference has taken policy positions on various restitution-related issues and has supported various legislative proposals to improve the process. Prosecutors and DOJ officials. DOJ officials are responsible for prosecuting federal offenses, identifying victims of crime and informing them of restitution to which they may be entitled, identifying victim losses and harms that are subject to restitution after consulting with victims and providing that information to probation officers, demonstrating the amount of loss sustained by the victim, enforcing orders of restitution, and collecting criminal debt, including unpaid restitution. Various entities and officials within DOJ are responsible for these activities, including federal prosecutors in the Criminal Division and the U.S. Attorneys’ Offices, and their respective Financial Litigation Units. Victims. A federal crime victim is a person directly and proximately harmed as the result of a federal offense. 
Federal crime victims are entitled to full and timely restitution as provided in law. Victims may provide information to prosecutors, probation officers, and to the court about their losses and have a right to be heard at sentencing, but are not required to participate in any phase of the restitution proceedings. Defendants and their counsel. Defendants who commit federal crimes where an identifiable victim suffered a physical injury or monetary loss are generally required to pay restitution. Defendants are required to submit information about their financial resources and the financial needs and earning ability of their dependents to the court and have the burden of demonstrating these resources and needs in any restitution proceedings. Defendants are generally represented by counsel in criminal proceedings and according to the judiciary approximately 90 percent qualify for court-appointed counsel under the Criminal Justice Act because they are financially unable to retain counsel in federal criminal proceedings. Court-appointed counsel is provided by Federal Defender Organizations and panel attorneys. A defendant may be convicted pursuant to plea agreement with the government or after a trial; more than 90 percent of defendants plead guilty rather than go to trial. A defendant is referred to as an offender following conviction of an offense. Compensation in the Restitution Process Restitution is only available to victims and for harms as statutorily authorized. Congress passed the MVRA in 1996, which substantially revised the restitution process. The legislative history reflects the balancing of competing interests—including ensuring that the loss to crime victims is recognized, that they receive the restitution that they are due, and also that the offender realizes the damages caused by the offense and pays the debt owed to the victim as well to society. As provided in the legislative history, one of the ways that the law sought to balance the application of mandatory restitution was by limiting it to the instances where a named identifiable victim suffered a physical injury or monetary loss as a direct and proximate result of the defendant’s offense of conviction. This means that the victim would not have been harmed “but for” the conduct underlying the offense of conviction and also that the harm was proximately caused by the conduct. Proximately caused means that the causal nexus between the conduct and the loss is not too attenuated either factually or temporally. As such, a defendant is not held liable for downstream effects of an act where there were additional, intervening causes not sufficiently related to the offense. For example, a rapist would not be held responsible for the death of a hospitalized rape victim who died in a hospital fire. In addition, the loss caused by the defendant’s conduct underlying the offense of conviction establishes the outer limits of the restitution order. This means that harms caused by the defendant’s conduct that were related to, but outside the scope of, the crime of conviction cannot be compensated through restitution. For example, a defendant who was convicted of illegally possessing a firearm but acquitted of using the firearm to shoot someone would not be liable for restitution for medical costs for the shooting victim. The restitution statutes specify the types of harm that may be compensated, and federal case law interpreting these statutes provides guidance to courts when ordering restitution. 
For example, when a crime results in bodily injury to a victim, compensable expenses include the costs of medical and other services related to physical, psychiatric, and psychological care; costs for necessary physical and occupational therapy and rehabilitation; and reimbursement for lost income as a result of the offense, among other enumerated losses. Courts have also ordered restitution for a victim’s actual losses that were proximately caused by the defendant’s conduct even when not explicitly listed in statute. When restitution is ordered by the court, it is to be in the full amount of each victim’s losses without consideration of the economic circumstances of the defendant. During a defendant’s sentencing, additional hearings may be held to examine losses to victims for restitution and prosecutors are responsible for proving and litigating issues related to victims’ losses. Restitution may be imposed by the federal courts up to 90 days after sentencing if additional time is needed by the court to locate victims and calculate losses. In some cases, courts can decline to order restitution, such as when the court determines that fashioning an order of restitution would complicate or prolong the sentencing process so much that the need to provide restitution is outweighed by the burden on the sentencing process. Separate from criminal restitution, victims may seek compensation from the offender by pursuing litigation at their own expense through a civil proceeding in a federal, state, or local court. In the United States, criminal and civil proceedings are separate legal systems subject to different laws, standards, and rules of procedure. The types of harms for which a victim may receive compensation differ in a civil proceeding. For example, federal criminal courts may order restitution to reimburse a victim for medical expenses, but cannot order compensation for pain and suffering caused by a crime. However, a victim may seek compensation for pain and suffering by filing a civil action against the defendant, as well as for other types of damages that are not available through restitution. Other types of civil remedies not compensable as criminal restitution include intended harm, punitive damages, breach of contract, and disgorgement of ill-gotten gains, among others. Figure 1 below outlines the steps taken for compensation of victims of federal crimes through federal criminal restitution and civil proceedings. Total Restitution Ordered Since 1996 According to USSC data for fiscal years 1996 through 2016, the percentage of offenders ordered to pay restitution by federal courts has remained fairly steady. From fiscal years 1996 through 2016, an average of 15 percent of individual offenders and 32 percent of organizational offenders annually were ordered to pay restitution by the federal courts (see fig. 2). For more information on the restitution imposed by the federal courts from fiscal years 1996 through 2016, see appendix II. Collection of Restitution We have previously reported on issues related to the collection of federal restitution and currently have ongoing work on DOJ’s collection of restitution pursuant to the Justice for All Reauthorization Act of 2016. According to data DOJ provided to us, the total outstanding restitution debt owed in federal cases as of the end of fiscal year 2016 was $110.2 billion. DOJ, through its U.S. Attorneys’ Offices’ Financial Litigation Units, is responsible for collecting restitution debt from offenders. 
This collection typically begins after offenders are sentenced and ordered to pay restitution and includes enforcement actions such as filing garnishments and liens. We noted in our 2001 and 2004 reports that collection of outstanding restitution debt is inherently difficult due to a number of factors, such as offenders who may be incarcerated or have minimal earning capacity and the MVRA requirement that the assessment of restitution be based on actual loss and not on an offender’s ability to pay. In 2005, we reported that court-ordered restitution amounts far exceed likely collections for crime victims in selected financial fraud cases. Specifically, we found that these offenders, who had either been high-ranking officials of companies or operated their own businesses, pleaded guilty to crimes for which the courts ordered restitution totaling about $568 million to victims. As of the completion of our fieldwork, which was up to 8 years after the offenders’ sentencing, court records showed that amounts collected for the victims in these cases totaled only about $40 million, or about 7 percent of the ordered restitution. Stakeholders Identified Various Factors Related to the Potential Expansion of Federal Courts’ Authority to Order Restitution Stakeholders we interviewed identified various factors related to the potential expansion of federal courts’ authority to order restitution in the four areas listed in the Justice for All Reauthorization Act of 2016: (1) to apply to victims who have suffered harm, injury, or loss that would not have occurred but for the defendant’s related conduct; (2) in the case of an offense resulting in the victim’s death, to allow the court to use its discretion to award the income lost by the victim’s surviving family members or estate as a result of the victim’s death; (3) to require that the defendant pay to the victim an amount determined by the court to restore the victim to the position he or she would have been in had the defendant not committed the offense; and (4) to require the defendant compensate the victim for any injury, harm, or loss, including emotional distress, that occurred as a result of the offense. Stakeholders also identified additional factors to consider, beyond the ones identified for the four provisions above, for potential broadening of courts’ authority to order restitution generally. For a summary of the provisions and factors cited by stakeholders, see appendix I. Factors to Consider in a Potential Expansion of Restitution Authority if it were to Include a Defendant’s Related Conduct and Eliminate the Proximate Cause Requirement Related Conduct Federal courts have the authority to order defendants to pay restitution for a victim’s losses that resulted from the defendant’s conduct underlying the offense of conviction. However, at times, a defendant’s related conduct can be broader than the offense of conviction and can include criminal conduct for which the defendant’s guilt was not established either by trial or plea agreement with the government. For example, in a case before the Fourth Circuit where restitution was not allowed for conduct that was broader than the offense of conviction, the government asserted that the defendant was the ringleader of a nationwide pickpocketing ring and submitted a list of victims for restitution that included five financial institutions and four individuals who had suffered losses. 
However, because the defendant had pleaded guilty to, and was convicted for, fraudulent use of a credit card related to one individual on one date—and the defendant’s offense did not involve as an element a scheme, conspiracy, or pattern—the court determined that restitution was not proper for the additional victims because they were not harmed by the conduct underlying the offense of conviction. On the other hand, when a defendant has been convicted of an offense that involves as an element a scheme, conspiracy, or pattern, the court may order restitution for direct harm caused by that scheme, conspiracy, or pattern. For example, in another case involving credit card fraud, because the defendant pleaded guilty to and was convicted of conspiracy to traffic in counterfeit credit cards—in contrast to the previous case where the defendant was convicted of only one fraudulent use—the Eleventh Circuit held that the sentencing court could order a defendant to pay restitution for losses from additional credit card fraud that were to advance the conspiracy. Stakeholders we interviewed identified the following factors to consider if federal courts’ authority were to be broadened to allow a defendant’s related conduct to be included in an order for restitution: Constitutional issues. Eight of 10 stakeholders we spoke with identified potential constitutional issues if the federal courts could order restitution for a defendant’s related conduct. For example, two stakeholders representing defendants and an association representing federal prosecutors told us that including a defendant’s related conduct in orders for restitution could result in potential violations of a defendant’s rights under the Fifth Amendment’s Due Process Clause, which provides that no person shall be deprived of life, liberty, or property without due process of law. This was also a concern noted in the legislative history of the MVRA, and an individual knowledgeable about restitution we interviewed noted that the Supreme Court has also suggested that due process could be a concern if the court were to order federal criminal restitution beyond the conduct underlying the conviction. Increased complexity to determine losses. Four of 10 stakeholders we spoke with stated that if the authority of federal courts to order restitution were broadened to allow inclusion of harms for a defendant’s related conduct, there would be increased complexity to determine losses for restitution. For example, DOJ officials told us that inclusion of a defendant’s related conduct would allow restitution to be open to a larger pool of potential victims, and identifying and calculating losses for all victims with a nexus to the offense of conviction could become an impossible task. An association representing federal prosecutors stated that this increased complexity could have the effect of federal courts ordering less restitution through the exception for complex cases, which would negatively impact victims. DOJ’s practices for plea bargaining. In contrast, two individuals we spoke with who represent victims stated that prosecutors could more consistently follow DOJ’s guidelines to include a defendant’s related conduct in plea agreements without expanding federal courts’ authority to order restitution. 
DOJ guidelines, which are based on statutory direction, provide that prosecutors must consider requesting full restitution to all victims for all charges contained in the indictment, without regard to the counts to which the defendant actually pleaded guilty. In other words, when DOJ and the defendant agree to certain terms as part of a plea agreement in which the defendant pleads guilty to one or more charged offenses, or lesser or related offenses, prosecutors must consider requesting the defendant pay restitution for all of the charges, not just the ones to which the defendant is pleading guilty. As a result, federal courts may order restitution pursuant to the plea agreement for losses sustained by crime victims for related conduct or criminal conduct that is not part of the offense of conviction. Proximate Cause Federal courts currently have the authority to order an offender to pay restitution to victims who have suffered harms as a direct and proximate consequence of the crime of conviction. This means that the harm must have been not only caused by the offense, as a matter of fact, but also that it was reasonably foreseeable as a result of the offense. For example, courts have allowed damage caused by the escape from a robbery—such as damage to police cars hit during a car chase—to be compensable as restitution because the flight was causally related to the bank robbery. Although 5 of 10 stakeholders stated that the proximate harm requirement does not generally present challenges related to federal courts’ authority to order restitution, stakeholders identified additional factors to consider if the federal courts’ authority were to be expanded to eliminate the proximate cause requirement: Additional sentencing-related hearings and litigation. Three of 10 stakeholders we interviewed stated that eliminating the proximate harm requirement could result in prolonged sentencing for defendants due to additional sentencing-related hearings and litigation. For example, judiciary officials and an association representing federal prosecutors stated that if federal courts’ authority to order restitution were expanded to eliminate the proximate harm requirement, more litigation would be required during sentencing to examine harms to victims and to determine how losses related to the offense of conviction. Plea bargaining could be affected. Two of 10 stakeholders we interviewed stated that eliminating the proximate harm requirement would impact plea bargaining between defendants and DOJ. For example, an individual knowledgeable about federal restitution stated that eliminating proximate harm would hinder plea bargaining, as during plea agreement negotiations a defendant would no longer have a sense of how much federal criminal restitution could be ordered. At the time the MVRA was passed, Congress also recognized the central role of plea bargaining in the federal criminal justice system, with the legislative history of the MVRA noting the intent that the legislation not impair the role of plea bargaining. Factors to Consider in the Potential Expansion of Restitution to Include Income Lost by Deceased Victims and Their Family Members In cases involving the death of a crime victim, federal courts may order restitution for losses to be paid to a deceased crime victim’s surviving family members or estate, including for funeral expenses, as applicable.
Further, according to 6 of 10 stakeholders we interviewed, federal courts currently have the authority to order compensation for future lost income of a deceased crime victim’s family member or estate due to precedent established in case law. For example, the Tenth Circuit held that restitution for the future lost income of a three-month-old victim of voluntary manslaughter was not precluded by the MVRA; thus a court may exercise its discretion in declining to grant an award, or, as it did in this case, undertake such proceedings. In a Ninth Circuit case, the court held that “restitution for future lost income may be ordered under the MVRA so long as it is not based upon speculation, but is reasonably calculable,” and returned the case to the district court to redetermine the amount of restitution to be awarded. Stakeholders we interviewed also identified the following factors to consider if federal courts’ authority were to be expanded to include compensation for the future lost income of a deceased crime victim and to compensate the deceased victim’s surviving family members for their lost income as a result of the victim’s death: Increased victim compensation awards. Four of 10 stakeholders we interviewed stated that expanding federal courts’ authority to include compensation for future lost income of a deceased crime victim could result in more compensation awarded through restitution. For example, three stakeholders representing victims stated if this provision were specified or made explicit in statute, it would be more likely that federal courts would order compensation for the future lost income of a deceased crime victim. One of the stakeholders added that having such loss specified and enumerated in restitution statutes would ensure it is considered during the restitution process and is less likely to be challenged during appeal. Further, another stakeholder representing victims stated that including the surviving family members’ lost income in a restitution order could allow for compensation of those family members who lost income prior to a victim’s death, such as in cases where those family members provided care to a victim prior to the victim’s death. Complexity of calculation and need for experts. Three of 10 stakeholders we interviewed stated that determining a deceased crime victim’s future and family members’ lost income would add complexity to the restitution process. For example, an association representing federal prosecutors stated that it would be difficult for federal probation officers and prosecutors to determine the amount of future lost income for deceased victims as that area of specialization is currently in civil law. In addition, DOJ officials stated that the complexity and need for experts to make these specialized calculations could increase the cost of prosecution given the government’s burden to prove victim losses. Suitability of criminal versus civil proceedings. Three of 10 stakeholders we interviewed stated that compensation for a deceased crime victim’s future and family members’ lost income is more appropriate for litigation through civil proceedings rather than combining or merging such litigation in a federal criminal proceeding. For example, an association representing defendants stated that federal criminal proceedings are not suitable venues to fairly vet and litigate this type of victim issue. 
This stakeholder further stated that issues of this type are routinely litigated vigorously in civil proceedings and involve extensive discovery practices, such as taking of depositions, exchanges of documents, and assessments by competing experts. An association representing federal prosecutors additionally noted that federal prosecutors are not well positioned to handle typical civil losses in criminal trials. Sentencing of defendants could be prolonged. Two of 10 stakeholders we interviewed stated that including a deceased crime victim’s future and family members’ lost income in an order for restitution could result in a defendant’s sentencing being prolonged. For example, judiciary officials stated that the sentencing of defendants could take more time due to the need for multiple hearings to examine losses and calculate a deceased crime victim’s future lost income. Collectability of debt due to these offenders’ ability to pay. Two of 10 stakeholders stated that the potential collectability of restitution from offenders for a deceased crime victim’s future and family members’ lost income should be considered. For example, an association representing federal prosecutors stated that these offenders are most likely to be incarcerated with the least ability to pay. As a result, the amount of resources needed to order restitution compared to collectability of the debt for a deceased crime victim’s future lost income should be considered. Further, according to that stakeholder, resources—such as prosecutorial expertise, money to hire experts, judicial resources like probation officers—need to be weighed against the collectability of the debt. This issue was also described in the legislative history of the MVRA: “[A] significant number of defendants required to pay restitution…will be indigent…many…may also be sentenced to prison terms as well, making it unlikely that they will be able to make significant payments… At the same time, these factors do not obviate the victim’s right to restitution or the need that defendants be ordered to pay restitution.” Factors to Consider in the Potential Expansion of Restitution to Restore the Victim to His or Her Position Had the Offense Not Been Committed According to 6 of 10 stakeholders we interviewed, the provision “to require that the defendant pay to the victim an amount determined by the court to restore the victim to the position he or she would have been in had the defendant not committed the offense” is already the goal of federal restitution. These stakeholders stated that this is established in case law and is not an expansion of federal courts’ current authority. Other stakeholders we interviewed identified the following factor to consider if federal courts’ authority were to be expanded to include the provision “to require that the defendant pay to the victim an amount determined by the court to restore the victim to the position he or she would have been in had the defendant not committed the offense”: Expansion of authority to include general restitution. Three of 10 stakeholders we interviewed stated that the provision would expand federal courts’ authority to order restitution by allowing general restitution, meaning courts would have more discretion to determine awards for all harms that victims suffered in order to restore the victim to his or her pre-offense condition. Further, an association representing victims stated that federal courts’ authority to order restitution is listed as elements or categories of losses. 
For example, losses such as the cost of necessary physical and occupational therapy and rehabilitation, and necessary funeral and related services, among others. This association explained that by including a provision for general restitution, the courts would be able to order restitution to victims for any losses outside of those categories, which would function as a catchall for all victim harm. Factors to Consider in the Potential Expansion of Restitution to Include Any Injury, Harm, or Loss, Including Emotional Distress That Occurred as a Result of an Offense Federal courts may order restitution for actual losses—in other words, these must be tangible or “out-of-pocket” losses, and they must be supported by the record. This includes, for example, reimbursement of medical expenses for bodily injuries resulting from the victimizing offense. However, federal courts are not authorized to order restitution for losses such as pain and suffering and emotional distress to crime victims. For example, in a case where the defendant was convicted of committing a brutal hate crime against the victim, leaving him with severe physical injuries and depression, among other harms, the sentencing court acknowledged that it did not have authority to award restitution for pain and suffering and noted that the victim would be allowed to pursue civil remedies. Stakeholders we interviewed identified the following factors to consider if federal courts’ authority were expanded to allow any injury, harm, or loss, including emotional distress, to be included in an order for restitution: Suitability of criminal versus civil proceedings. Five of 10 stakeholders we interviewed stated that including compensation to victims for any injury, harm, or loss, including emotional distress, in restitution orders raises issues related to the types of harms that should be compensated in civil versus criminal proceedings. For example, a stakeholder representing defendants stated that federal criminal law is not suited to determine injuries such as emotional distress and pain and suffering, whereas the civil system is set up to handle these kinds of issues and losses. Further, an association representing federal prosecutors stated that federal prosecutors and federal probation officers in criminal cases lack the specialized skills to determine losses for cases involving compensation for pain, suffering, and emotional distress. This was an issue that was considered during passage of the MVRA as well, as the report accompanying the MVRA provides, “It is the committee’s intent that courts order full restitution to all identifiable victims of covered offenses, while guaranteeing that the sentencing phase of criminal trials do not become fora for the determination of facts and issues better suited to civil proceedings.” Increased complexity to determine losses. Four of 10 stakeholders we interviewed stated that determining losses such as emotional distress and pain and suffering would add complexity to the restitution process. For example, DOJ officials stated that pain and suffering and emotional distress are not easily quantified and restitution hearings to examine such losses would involve experts trying to prove these kinds of losses. 
Stakeholders Identified Additional Factors to Consider Related to the Potential Expansion of Courts’ Authority to Award Restitution Stakeholders we interviewed identified the following factors to consider related to the potential broadening of courts’ authority to order restitution generally, in addition to the factors discussed above associated with particular expansions of federal courts’ authority to order restitution: Increased restitution debt and collection challenges. Seven of 10 stakeholders we interviewed told us that increased restitution debt and collectability challenges should be considered in the potential broadening of federal courts’ authority to order restitution. For example, two stakeholders representing defendants stated that offenders often lack the financial resources to pay restitution. Under the MVRA, federal courts must order mandatory restitution without consideration of a defendant’s financial resources, which has resulted in large amounts of uncollected federal restitution debt. These two stakeholders stated that by broadening federal courts’ authority to order restitution, the amount of uncollected restitution debt owed to victims would continue to increase. One of these stakeholders further suggested that the addition of secondary restitution (i.e., additional victims entitled to compensation) could have the effect of reducing the amount paid to the primary victims because all classes of victims will be forced to compete for payment on restitution awards that will often far exceed an offender’s ability to pay. According to judiciary officials, adding to the uncollected restitution debt would lead to further collection challenges, including the additional DOJ efforts needed to collect more restitution debt and additional supervision of offender restitution payments by probation officers. These issues were also observed during the passage of the MVRA. The report accompanying the law states that the Chair of the Criminal Law Committee of the Judicial Conference of the United States had testified before the Senate Judiciary Committee that 85 percent of all federal defendants are indigent at the time of sentencing and mandatory restitution would not lead to an appreciable increase in victim compensation; however, the report noted the Committee’s view of the benefits of even nominal payments to victims as well as the potential penological benefits of requiring the offender to be accountable for the harm caused to the victim. Suitability of criminal versus civil proceedings. Seven of 10 stakeholders we interviewed told us that the suitability of criminal versus civil proceedings should be considered in the potential broadening of federal courts’ authority to order restitution. According to judiciary officials, a system has been developed with rules to litigate damages in civil proceedings which are not included within criminal trials. Further, an association representing defendants told us that attorneys who directly represent alleged victims in civil proceedings are more appropriate parties to pursue this type of litigation. The association said this is because the prosecutor represents the public at large instead of an individual client, whereas a private attorney has an obligation to obtain a maximum recovery for the client. 
Comparing the process for the compensation of victims through restitution and civil proceedings, a stakeholder knowledgeable about federal restitution told us that the restitution process to compensate victims is more efficient for victims compared to civil proceedings, which can last longer and result in victims incurring costs for litigation. Further, this individual stated that through the federal criminal restitution process, in contrast to civil proceedings, victims receive help collecting funds through the federal courts, prisons, and probation officers during offender supervision. Other stakeholders did not consider the civil forum to be a suitable alternative for victims. One stakeholder representing victims stated that civil proceedings are inadequate for compensating victims and should not be considered. Additionally, another stakeholder representing victims stated that victims may lack access to evidence to pursue civil litigation against an offender in cases where the conduct was not part of an offense of conviction or listed in a plea agreement. The legislative history of the MVRA also acknowledged the need for a balance, providing, as noted above, the intent that courts order full restitution but also that sentencing not become a forum for issues better suited to civil proceedings; to that end, the MVRA restricted mandatory restitution requirements to the specified set of crimes. Impacts to federal resources. Five of 10 stakeholders we interviewed told us that impacts to judiciary and DOJ resources—including increased workloads, additional legal services, and the need for more experts—should be considered in the potential broadening of federal courts’ authority to order restitution. According to judiciary officials, broadening federal courts’ authority to order restitution could result in increased workloads by probation officers who could have to conduct more investigations to support additional restitution orders. As discussed above, federal probation officers could also be required to track and supervise more restitution payments. Officials from Federal and Community Defenders told us that if federal courts’ authority to order restitution were broadened, additional legal services would need to be provided to offenders. For example, these officials stated that larger restitution orders could require increased workloads for federal defenders to work on behalf of offenders to modify payment schedules and their level of supervision by probation officers. Likewise, an association representing defendants stated that increased collection efforts could be required by U.S. Attorneys’ Offices’ Financial Litigation Units if the number of victims eligible for restitution increased. According to DOJ officials, prosecutors could experience increased workloads as they would be identifying more victims, thereby having to spend more time investigating and determining losses. Moreover, an association representing defendants told us that additional federal experts could be needed as sentencing courts and probation officers lack the resources and expertise to examine the harms that would result from broadening restitution authority. Attention to the costs to the justice system for mandatory restitution was considered in 1995, with the legislative history of the MVRA noting the attempt to reduce costs by limiting mandatory restitution to offenses in which an identifiable victim suffered a physical injury or monetary loss. Concerns about offenders’ reentry into society. 
Two of 10 stakeholders we interviewed told us that offenders’ reentry into society should be considered in the potential broadening of federal courts’ authority to order restitution. These two stakeholders, an association that represents defendants and officials from Federal and Community Defenders, told us that if the authority of the federal courts to order restitution were broadened to include non-monetary harms, offenders would be further burdened in their ability to reenter society due to excessive monetary sanctions from restitution orders. Further, these two stakeholders stated that offenders with large restitution orders face challenges obtaining employment, securing housing, and satisfying other financial obligations, which could increase their risk for recidivism and reduce their ability to pay any restitution. Agency Comments We provided a draft of this report for review and comment to DOJ, USSC, the Administrative Office of the U.S. Courts, and the Federal Judicial Center. The Administrative Office of the U.S. Courts provided technical comments that we incorporated as appropriate. We are sending copies of this report to the Attorney General, the Judicial Conference of the United States, the Directors of the Administrative Office of the U.S. Courts and the Federal Judicial Center, the Staff Director of USSC, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you and your staff have any questions about this report, please contact me at (202) 512-8777 or goodwing@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions are listed in appendix III. Appendix I: Overview of Restitution Provisions for Potential Expansion in the Justice for All Reauthorization Act of 2016 Table 2 summarizes and describes the provisions included in the Justice for All Reauthorization Act of 2016 for potential expansion of federal courts’ authority to order restitution and the factors cited by stakeholders that Congress should consider in broadening existing restitution statutes. Appendix II: Restitution Imposed by the Federal Courts from Fiscal Years 1996 through 2016 Tables 3 and 4 summarize restitution imposed by the federal courts from fiscal years 1996 through 2016 for individual and organizational offenders. Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Dawn Locke (Assistant Director); Carl Potenzieri; Janet Temko-Blinder; David Alexander; Sasan J. “Jon” Najmi; Amber Edwards; Kathleen Donovan; and Emily Hutz made key contributions to this report.
Why GAO Did This Study Victims of federal crimes may be compensated for their losses through criminal proceedings when federal courts order restitution during a defendant's sentencing. Federal law dictates the crimes for which restitution is mandatory versus discretionary and what types of losses may be compensated. Federal prosecutors and Department of Justice officials are responsible for proving and litigating issues related to victims' losses for restitution orders, enforcing orders of restitution, and collecting criminal debt, including unpaid restitution. The Justice for All Reauthorization Act of 2016, Pub. L. No. 114-324, contains a provision for GAO to conduct a review on the factors that should be considered when broadening restitution provisions. This report describes factors stakeholders believe should be considered for a potential expansion of federal courts' authority to award restitution. To gather information on factors, GAO interviewed a non-generalizable group of stakeholders knowledgeable about the restitution process, including individuals and entities representing federal judges and court officials, federal prosecutors and Department of Justice officials, victims, and defendants and their counsel. GAO also reviewed relevant federal laws, legal cases, agency documentation, summary data on orders for restitution from fiscal years 1996 through 2016, and the amount of outstanding restitution debt owed in federal cases as of the end of fiscal year 2016. What GAO Found Federal courts have authority to award restitution for authorized losses to eligible victims. Generally, victims are those directly and proximately harmed as a result of a defendant's offense of conviction and they may be awarded compensation for their actual or “out-of-pocket” losses. Provisions for the potential expansion of restitution contained in the Justice for All Reauthorization Act of 2016 that GAO reviewed could allow for courts to award restitution to additional victims and for a greater scope of losses. Stakeholders GAO interviewed identified various factors to consider related to these potential expansion provisions, for example: Restitution for related conduct and no proximate cause requirement . A factor stakeholders stated should be considered in potentially allowing restitution for conduct that is broader than the offense of conviction was that it could be a violation of a defendant's constitutional right to due process because restitution could be awarded for conduct for which the defendant's guilt was not established. In addition, they said it could lead to increased complexity to determine victim losses, which could create challenges for federal prosecutors and could result in less restitution being awarded. For a potential expansion of restitution to compensate harm that was not proximately caused by the defendant (i.e., harm that was not reasonably foreseeable as a result of the offense) stakeholders said factors that should be considered include that the current proximate harm requirement does not present challenges and that such an expansion could lead to additional sentencing-related hearings and litigation. Restitution to restore victims to their position had the offense not been committed . Stakeholders said this provision is already a goal of federal restitution, but that a potential expansion could allow judges more discretion to order restitution for victim losses not specified by statute, which could help restore the victim to his or her pre-offense condition. 
Restitution for any injury, harm, or loss, including emotional distress . A factor stakeholders identified in potentially expanding restitution to cover intangible losses, including emotional distress, included that it could increase the complexity of the restitution process because these are not easily quantified losses. Relatedly, stakeholders said that the suitability of criminal versus civil proceedings should be considered because the civil system, through which crime victims may seek compensation at their own expense, is set up to handle these issues and losses, whereas officials involved in criminal cases lack the specialized skills to determine these kinds of losses. Stakeholders GAO interviewed identified additional factors related to the potential broadening of courts' authority to order restitution generally; for example, they told GAO that increased restitution debt and collectability challenges should be considered. According to the Department of Justice, the amount of outstanding restitution debt owed in federal cases as of the end of fiscal year 2016 was $110.2 billion. Stakeholders stated that defendants often lack the financial resources to pay restitution and adding to the uncollected restitution debt through a potential expansion of authority could lead to further collection challenges.
Background According to the President’s budget, the federal government plans to invest more than $96 billion for IT in fiscal year 2018—the largest amount ever. However, as we have previously reported, investments in federal IT too often result in failed projects that incur cost overruns and schedule slippages, while contributing little to the desired mission-related outcomes. For example: The Department of Veterans Affairs’ Scheduling Replacement Project was terminated in September 2009 after spending an estimated $127 million over 9 years. The tri-agency National Polar-orbiting Operational Environmental Satellite System was halted in February 2010 by the White House’s Office of Science and Technology Policy after the program spent 16 years and almost $5 billion. The Department of Homeland Security’s Secure Border Initiative Network program was ended in January 2011, after the department obligated more than $1 billion for the program. The Office of Personnel Management’s Retirement Systems Modernization program was canceled in February 2011, after the agency had spent approximately $231 million on its third attempt to automate the processing of federal employee retirement claims. The Department of Veterans Affairs’ Financial and Logistics Integrated Technology Enterprise program was intended to be delivered by 2014 at a total estimated cost of $609 million, but was terminated in October 2011. The Department of Defense’s Expeditionary Combat Support System was canceled in December 2012 after spending more than a billion dollars and failing to deploy within 5 years of initially obligating funds. Our past work found that these and other failed IT projects often suffered from a lack of disciplined and effective management, such as project planning, requirements definition, and program oversight and governance. In many instances, agencies had not consistently applied best practices that are critical to successfully acquiring IT. Such projects have also failed due to a lack of oversight and governance. Executive-level governance and oversight across the government has often been ineffective, specifically from chief information officers (CIO). For example, we have reported that some CIOs’ roles were limited because they did not have the authority to review and approve the entire agency IT portfolio. Implementing FITARA Can Improve Agencies’ Management of IT FITARA was intended to improve agencies’ acquisitions of IT and enable Congress to monitor agencies’ progress and hold them accountable for reducing duplication and achieving cost savings. The law includes specific requirements related to seven areas. Federal data center consolidation initiative (FDCCI). Agencies are required to provide OMB with a data center inventory, a strategy for consolidating and optimizing their data centers (to include planned cost savings), and quarterly updates on progress made. The law also requires OMB to develop a goal for how much is to be saved through this initiative, and provide annual reports on cost savings achieved. Enhanced transparency and improved risk management. OMB and covered agencies are to make detailed information on federal IT investments publicly available, and agency CIOs are to categorize their investments by level of risk. Additionally, in the case of major IT investments rated as high risk for 4 consecutive quarters, the law requires that the agency CIO and the investment’s program manager conduct a review aimed at identifying and addressing the causes of the risk. 
Agency CIO authority enhancements. CIOs at covered agencies are required to (1) approve the IT budget requests of their respective agencies, (2) certify that OMB’s incremental development guidance is being adequately implemented for IT investments, (3) review and approve contracts for IT, and (4) approve the appointment of other agency employees with the title of CIO. Portfolio review. Agencies are to annually review IT investment portfolios in order to, among other things, increase efficiency and effectiveness and identify potential waste and duplication. In establishing the process associated with such portfolio reviews, the law requires OMB to develop standardized performance metrics, to include cost savings, and to submit quarterly reports to Congress on cost savings. Expansion of training and use of IT acquisition cadres. Agencies are to update their acquisition human capital plans to address supporting the timely and effective acquisition of IT. In doing so, the law calls for agencies to consider, among other things, establishing IT acquisition cadres or developing agreements with other agencies that have such cadres. Government-wide software purchasing program. The General Services Administration is to develop a strategic sourcing initiative to enhance government-wide acquisition and management of software. In doing so, the law requires that, to the maximum extent practicable, the General Services Administration should allow for the purchase of a software license agreement that is available for use by all executive branch agencies as a single user. Maximizing the benefit of the Federal Strategic Sourcing Initiative. Federal agencies are required to compare their purchases of services and supplies to what is offered under the Federal Strategic Sourcing Initiative. OMB is also required to issue regulations related to the initiative. In June 2015, OMB released guidance describing how agencies are to implement FITARA. This guidance is intended to, among other things: assist agencies in aligning their IT resources with statutory requirements; establish government-wide IT management controls that will meet the law’s requirements, while providing agencies with flexibility to adapt to unique agency processes and requirements; clarify the CIO’s role and strengthen the relationship between agency CIOs and bureau CIOs; and strengthen CIO accountability for IT costs, schedules, performance, and security. The guidance identified several actions that agencies were to take to establish a basic set of roles and responsibilities (referred to as the common baseline) for CIOs and other senior agency officials, which were needed to implement the authorities described in the law. For example, agencies were required to conduct a self-assessment and submit a plan describing the changes they intended to make to ensure that common baseline responsibilities were implemented. Agencies were to submit their plans to OMB’s Office of E-Government and Information Technology by August 15, 2015, and make portions of the plans publicly available on agency websites no later than 30 days after OMB approval. As of November 2016, all agencies had made their plans publicly available. In addition, in August 2016, OMB released guidance intended to, among other things, define a framework for achieving the data center consolidation and optimization requirements of FITARA. 
The guidance includes requirements for agencies to: maintain complete inventories of all data center facilities owned, operated, or maintained by or on behalf of the agency; develop cost savings targets for fiscal years 2016 through 2018 and report any actual realized cost savings; and measure progress toward meeting optimization metrics on a quarterly basis. The guidance also directs agencies to develop a data center consolidation and optimization strategic plan that defines the agency’s data center strategy for fiscal years 2016, 2017, and 2018. This strategy is to include, among other things, a statement from the agency CIO indicating whether the agency has complied with all data center reporting requirements in FITARA. Further, the guidance indicates that OMB is to maintain a public dashboard that will display consolidation-related costs savings and optimization performance information for the agencies. IT Acquisitions and Operations Identified by GAO as a High-Risk Area In February 2015, we introduced a new government-wide high-risk area, Improving the Management of IT Acquisitions and Operations. This area highlighted several critical IT initiatives in need of additional congressional oversight, including (1) reviews of troubled projects; (2) efforts to increase the use of incremental development; (3) efforts to provide transparency relative to the cost, schedule, and risk levels for major IT investments; (4) reviews of agencies’ operational investments; (5) data center consolidation; and (6) efforts to streamline agencies’ portfolios of IT investments. We noted that implementation of these initiatives was inconsistent and more work remained to demonstrate progress in achieving IT acquisition and operation outcomes. Further, our February 2015 high-risk report stated that, beyond implementing FITARA, OMB and agencies needed to continue to implement our prior recommendations in order to improve their ability to effectively and efficiently invest in IT. Specifically, from fiscal years 2010 through 2015, we made 803 recommendations to OMB and federal agencies to address shortcomings in IT acquisitions and operations. These recommendations included many to improve the implementation of the aforementioned six critical IT initiatives and other government-wide, cross-cutting efforts. We stressed that OMB and agencies should demonstrate government-wide progress in the management of IT investments by, among other things, implementing at least 80 percent of our recommendations related to managing IT acquisitions and operations within 4 years. In February 2017, we issued an update to our high-risk series and reported that, while progress had been made in improving the management of IT acquisitions and operations, significant work still remained to be completed. For example, as of November 2017, OMB and agencies had fully implemented 452 (or about 56 percent) of the 803 recommendations. This was an increase of about 284 recommendations compared to the number of recommendations we reported as being fully implemented in 2015. Figure 1 summarizes the progress that OMB and agencies have made in addressing our recommendations as compared to the 80 percent target, as of November 2017. In addition, in fiscal year 2016, we made 202 new recommendations, thus further reinforcing the need for OMB and agencies to address the shortcomings in IT acquisitions and operations. 
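The recommendation counts cited above reduce to simple arithmetic. The following is a minimal Python sketch, for illustration only and not a GAO or OMB tool, that computes the implementation rate and the additional recommendation closures needed to reach the 80 percent target; the variable names are ours.

```python
# Minimal, illustrative sketch (not a GAO or OMB tool): progress on prior GAO
# recommendations against the 80 percent implementation target noted above.
TARGET_RATE = 0.80    # target share of fiscal year 2010-2015 recommendations
TOTAL_RECS = 803      # recommendations made, fiscal years 2010 through 2015
IMPLEMENTED = 452     # reported as fully implemented as of November 2017

rate = IMPLEMENTED / TOTAL_RECS
needed_for_target = max(0, round(TARGET_RATE * TOTAL_RECS) - IMPLEMENTED)

print(f"Implemented: {IMPLEMENTED} of {TOTAL_RECS} ({rate:.0%})")
print(f"Additional closures needed to reach {TARGET_RATE:.0%}: {needed_for_target}")
# Prints an implementation rate of about 56 percent and roughly 190 additional
# recommendation closures needed to reach the 80 percent target.
```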
Also, beyond addressing our prior recommendations, our 2017 high-risk update noted the importance of OMB and federal agencies continuing to expeditiously implement the requirements of FITARA. To further explore the challenges and opportunities to improve federal IT acquisitions and operations, we convened a forum on September 14, 2016, to explore challenges and opportunities for CIOs to improve federal IT acquisitions and operations—with the goal of better informing policymakers and government leadership. Forum participants, which included 13 current and former federal agency CIOs, members of Congress, and private sector IT executives, identified key actions related to seven topics: (1) strengthening FITARA, (2) improving CIO authorities, (3) budget formulation, (4) governance, (5) workforce, (6) operations, and (7) transition planning. A summary of the key actions, by topic area, identified during the forum is provided in figure 2. In addition, in January 2017, the Federal CIO Council concluded that differing levels of authority over IT-related investments and spending have led to inconsistencies in how IT is executed from agency to agency. According to the Council, for those agencies where the CIO has broad authority to manage all IT investments, great progress has been made to streamline and modernize the federal agency’s footprint. For the others, where agency CIOs are only able to control pieces of the total IT footprint, it has been harder to achieve improvements. Current Administration Has Undertaken Efforts to Improve Federal IT The current administration has initiated additional efforts aimed at improving federal IT, including digital services. Specifically, in March 2017, the administration established the Office of American Innovation, which has a mission to, among other things, make recommendations to the President on policies and plans aimed at improving federal government operations and services and on modernizing federal IT. In doing so, the office is to consult with both OMB and the Office of Science and Technology Policy on policies and plans intended to improve government operations and services, improve the quality of life for Americans, and spur job creation. In May 2017, the administration also established the American Technology Council, which has a goal of helping to transform and modernize federal agency IT and how the federal government uses and delivers digital services. The President is the chairman of this council, and the Federal CIO and the United States Digital Service administrator are members. Congress Has Taken Action to Continue Selected FITARA Provisions and Modernize Federal IT Congress has recognized the importance of agencies’ continued implementation of FITARA provisions, and has taken legislative action to extend selected provisions beyond their original dates of expiration. For example, Congress has passed legislation to: remove the expiration date for enhanced transparency and improved risk management provisions, which were set to expire in 2019; remove the expiration date for portfolio review, which was set to expire in 2019; and extend the expiration date for FDCCI from 2018 to 2020. In addition, Congress is considering legislation to ensure the availability of funding to help further agencies’ efforts to modernize IT. Specifically, recently proposed legislation calls for agencies to establish working capital funds for use in transitioning from legacy systems, as well as for addressing evolving threats to information security. 
The legislation also proposes the creation of a technology modernization fund within the Department of the Treasury, from which agencies could borrow money to retire and replace legacy systems as well as acquire or develop systems. Agencies Have Taken Steps to Implement FITARA, but Additional Actions are Needed to Address Related Recommendations Agencies have taken steps to improve the management of IT acquisitions and operations by implementing key FITARA initiatives. However, agencies would be better positioned to fully implement the law and, thus, realize billions in cost savings and additional management improvements, if they addressed the numerous recommendations we have made aimed at improving data center consolidation, increasing transparency via OMB’s IT Dashboard, implementing incremental development, and managing software licenses. Agencies Have Made Progress in Consolidating Data Centers, but Need to Take Action to Achieve Planned Cost Savings One of the key initiatives to implement FITARA is data center consolidation. OMB established FDCCI in February 2010 to improve the efficiency, performance, and environmental footprint of federal data center activities, and the enactment of FITARA reinforced the initiative. However, in a series of reports that we issued from July 2011 through August 2017, we noted that, while data center consolidation could potentially save the federal government billions of dollars, weaknesses existed in several areas, including agencies’ data center consolidation plans, data center optimization, and OMB’s tracking and reporting on related cost savings. In these reports, we made a matter for Congressional consideration, and a total of 160 recommendations to OMB and 24 agencies to improve the execution and oversight of the initiative. Most agencies and OMB agreed with our recommendations or had no comments. As of November 2017, 84 of these recommendations remained open. For example, in May 2017, we reported that the 24 agencies participating in FDCCI collectively had made progress on their data center closure efforts. Specifically, as of August 2016, these agencies had identified a total of 9,995 data centers, of which they reported having closed 4,388, and having plans to close a total of 5,597 data centers through fiscal year 2019. Notably, the Departments of Agriculture, Defense, the Interior, and the Treasury accounted for 84 percent of the completed closures. In addition, that report noted that 18 of the 24 agencies had reported achieving about $2.3 billion collectively in cost savings and avoidances from their data center consolidation and optimization efforts from fiscal year 2012 through August 2016. The Departments of Commerce, Defense, Homeland Security, and the Treasury accounted for approximately $2.0 billion (or 87 percent) of the total. Further, 23 agencies reported about $656 million collectively in planned savings for fiscal years 2016 through 2018. This is about $3.3 billion less than the estimated $4.0 billion in planned savings for fiscal years 2016 through 2018 that agencies reported to us in November 2015. Figure 3 presents a comparison of the amounts of cost savings and avoidances reported by agencies to OMB and the amounts the agencies reported to us. As mentioned previously, FITARA required agencies to submit multi-year strategies to achieve the consolidation and optimization of their data centers no later than the end of fiscal year 2016. 
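The change in planned data center savings described above can be reproduced with simple arithmetic. The sketch below is illustrative only and uses the rounded dollar figures cited in this discussion; it is not drawn from OMB's or GAO's data center reporting tools.

```python
# Illustrative arithmetic only, using the rounded figures cited above; not
# drawn from OMB's or GAO's data center reporting tools.
achieved_fy2012_aug2016 = 2.3e9        # reported savings and avoidances
planned_fy2016_2018_nov2015 = 4.0e9    # planned savings reported to GAO, Nov 2015
planned_fy2016_2018_later = 656e6      # planned savings per the May 2017 report

reduction = planned_fy2016_2018_nov2015 - planned_fy2016_2018_later
print(f"Achieved through August 2016: ${achieved_fy2012_aug2016 / 1e9:.1f} billion")
print(f"Reduction in planned FY2016-2018 savings: ${reduction / 1e9:.1f} billion")
# The reduction works out to roughly $3.3 billion, matching the text above.
```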
Among other things, this strategy was to include such information as data center consolidation and optimization metrics, and year-by-year calculations of investments and cost savings through October 1, 2018. Further, OMB’s August 2016 guidance on data center optimization contained additional information for how agencies are to implement the strategic plan requirements of FITARA, and stated that agencies were required to publicly post their strategic plans to their agency-owned digital strategy websites by September 30, 2016. As of April 2017, only 7 of the 23 agencies that submitted their strategic plans—the Departments of Agriculture, Education, Homeland Security, and Housing and Urban Development; the General Services Administration; the National Science Foundation; and the Office of Personnel Management—had addressed all five elements required by the OMB memorandum implementing FITARA. The remaining 16 agencies either partially met or did not meet the requirements. For example, most agencies partially met or did not meet the requirements to provide information related to data center closures and cost savings metrics. The Department of Defense did not submit a plan and was rated as not meeting any of the requirements. To better ensure that federal data center consolidation and optimization efforts improve governmental efficiency and achieve cost savings, in our May 2017 report, we recommended that 11 of the 24 agencies take action to ensure that the amounts of achieved data center cost savings and avoidances are consistent across all reporting mechanisms. We also recommended that 17 of the 24 agencies each take action to complete missing elements in their strategic plans and submit their plans to OMB in order to optimize their data centers and achieve cost savings. Twelve agencies agreed with our recommendations, 2 did not agree, and 10 agencies and OMB did not state whether they agreed or disagreed. More recently, in August 2017, we reported that agencies needed to address challenges in optimizing their data centers in order to achieve cost savings. Specifically, we noted that, according to the 24 agencies’ data center consolidation initiative strategic plans as of April 2017, most agencies were not planning to meet OMB’s optimization targets by the end of fiscal year 2018. Further, of the 24 agencies, 5—the Department of Commerce and the Environmental Protection Agency, National Science Foundation, Small Business Administration, and U.S. Agency for International Development—reported plans to fully meet their applicable targets by the end of fiscal year 2018; 13 reported plans to meet some, but not all, of the targets; 4 reported that they did not plan to meet any targets; and 2 did not have a basis to report planned optimization milestones because they do not report having any agency-owned data centers. Figure 4 summarizes agencies’ progress in meeting OMB’s optimization targets as of February 2017, and planned progress to be achieved by September 2017 and September 2018, as of April 2017. Figure 4: Agency-Reported Plans to Meet or Exceed the Office of Management and Budget’s (OMB) Data Center Optimization Targets FITARA required OMB to establish a data center optimization metric specific to measuring server efficiency, and required agencies to report on progress in meeting this metric. 
To effectively measure progress against this metric, OMB directed agencies to replace the manual collection and reporting of systems, software, and hardware inventory housed within agency-owned data centers with automated monitoring tools and to complete this effort no later than the end of fiscal year 2018. Agencies were required to report progress in implementing automated monitoring tools and server utilization averages at each data center as part of their quarterly data center inventory reporting to OMB. As of February 2017, 4 of the 22 agencies reporting agency-owned data centers in their inventory— the National Aeronautics and Space Administration, National Science Foundation, Social Security Administration, and U.S. Agency for International Development—reported that they had implemented automated monitoring tools at all of their data centers. Further, 10 reported that they had implemented automated monitoring tools at between 1 and 57 percent of their centers, and 8 had not yet begun to report the implementation of these tools. In total, the 22 agencies reported that automated tools were implemented at 123 (or about 3 percent) of the 4,528 total agency-owned data centers, while the remaining 4,405 (or about 97 percent) of these data centers were not reported as having these tools implemented. Figure 5 summarizes the number of agency-reported data centers with automated monitoring tools implemented, including the number of tiered and non-tiered centers. To address challenges in optimizing federal data centers, in our August 2017 report, we made recommendations to 18 agencies and OMB. Ten agencies agreed with our recommendations, three agencies partially agreed, and six (including OMB) did not state whether they agreed or disagreed. Risks Need to Be Fully Considered When Agencies Rate Their Major Investments on OMB’s IT Dashboard To facilitate transparency across the government in acquiring and managing IT investments, OMB established a public website—the IT Dashboard—to provide detailed information on major investments at 26 agencies, including ratings of their performance against cost and schedule targets. Among other things, agencies are to submit ratings from their CIOs, which, according to OMB’s instructions, should reflect the level of risk facing an investment relative to that investment’s ability to accomplish its goals. In this regard, FITARA includes a requirement for CIOs to categorize their major IT investment risks in accordance with OMB guidance. Over the past 6 years, we have issued a series of reports about the Dashboard that noted both significant steps OMB has taken to enhance the oversight, transparency, and accountability of federal IT investments by creating its Dashboard, as well as concerns about the accuracy and reliability of the data. In total, we have made 47 recommendations to OMB and federal agencies to help improve the accuracy and reliability of the information on the Dashboard and to increase its availability. Most agencies agreed with our recommendations or had no comments. As of November 2017, 25 recommendations remained open. In June 2016, we determined that 13 of the 15 agencies selected for in- depth review had not fully considered risks when rating their major investments on the Dashboard. Specifically, our assessments of risk for 95 investments at the 15 selected agencies matched the CIO ratings posted on the Dashboard 22 times, showed more risk 60 times, and showed less risk 13 times. 
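A comparison like the one just described, between an independent risk assessment and the CIO ratings posted on the Dashboard, amounts to a simple tally. The following sketch is hypothetical; the record format, function name, and sample values are ours and do not reflect the Dashboard's actual data.

```python
# Hypothetical sketch: the record format, function name, and sample values are
# ours and do not reflect the IT Dashboard's actual schema.
from collections import Counter

def compare_ratings(assessed, reported):
    """Tally how assessed risk compares with CIO-reported risk.

    Both arguments map an investment identifier to an integer rating in which
    a higher number means higher risk.
    """
    tally = Counter()
    for investment, assessment in assessed.items():
        cio_rating = reported[investment]
        if assessment == cio_rating:
            tally["match"] += 1
        elif assessment > cio_rating:
            tally["more risk"] += 1
        else:
            tally["less risk"] += 1
    return tally

assessed_sample = {"inv-1": 3, "inv-2": 2, "inv-3": 1}
reported_sample = {"inv-1": 2, "inv-2": 2, "inv-3": 2}
print(dict(compare_ratings(assessed_sample, reported_sample)))
# {'more risk': 1, 'match': 1, 'less risk': 1}
# Applied to the 95 investments reviewed in June 2016, such a tally produced
# 22 matches, 60 showing more risk, and 13 showing less risk.
```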
Figure 6 summarizes how our assessments compared to the selected investments’ CIO ratings. Aside from the inherently judgmental nature of risk ratings, we identified three factors that contributed to differences between our assessments and the CIO ratings: Forty of the 95 CIO ratings were not updated during April 2015 (the month we conducted our review), which led to differences between our assessments and the CIOs’ ratings. This underscores the importance of frequent rating updates, which help to ensure that the information on the Dashboard is timely and accurately reflects recent changes to investment status. Three agencies’ rating processes spanned longer than 1 month. Longer processes mean that CIO ratings are based on older data, and may not reflect the current level of investment risk. Seven agencies’ rating processes did not focus on active risks. According to OMB’s guidance, CIO ratings should reflect the CIO’s assessment of the risk and the investment’s ability to accomplish its goals. CIO ratings that do not incorporate active risks increase the chance that ratings overstate the likelihood of investment success. As a result, we concluded that the associated risk rating processes used by the 15 agencies were generally understating the level of an investment’s risk, raising the likelihood that critical federal investments in IT are not receiving the appropriate levels of oversight. To better ensure that the Dashboard ratings more accurately reflect risk, we made 25 recommendations to 15 agencies to improve the quality and frequency of their CIO ratings. Twelve agencies generally agreed with or did not comment on the recommendations, and three agencies disagreed, stating that their CIO ratings were adequate. However, we noted that weaknesses in these three agencies’ processes still existed and that we continued to believe our recommendations were appropriate. Agencies Need to Increase Their Use of Incremental Development Practices OMB has emphasized the need to deliver investments in smaller parts, or increments, in order to reduce risk, deliver capabilities more quickly, and facilitate the adoption of emerging technologies. In 2010, it called for agencies’ major investments to deliver functionality every 12 months and, since 2012, every 6 months. Subsequently, FITARA codified a requirement that agency CIOs certify that IT investments are adequately implementing incremental development, as defined in the capital planning guidance issued by OMB. Further, subsequent OMB guidance on the law’s implementation, issued in June 2015, directed agency CIOs to define processes and policies for their agencies which ensure that they certify that IT resources are adequately implementing incremental development. However, in May 2014, we reported that 66 of 89 selected investments at five major agencies did not plan to deliver capabilities in 6-month cycles, and less than half of these investments planned to deliver functionality in 12-month cycles. We also reported that only one of the five agencies had complete incremental development policies. Accordingly, we recommended that OMB clarify its guidance on incremental development and that the selected agencies update their associated policies to comply with OMB’s revised guidance (once made available), and consider the factors identified in our report when doing so. Four of the six agencies agreed with our recommendations or had no comments, one agency partially agreed, and the remaining agency disagreed with the recommendations. 
The agency that disagreed did not believe that its recommendations should be dependent upon OMB taking action to update guidance. In response, we noted that only one of the recommendations to that agency depended upon OMB action, and we maintained that the action was warranted and could be implemented. Subsequently, in August 2016, we reported that agencies had not fully implemented incremental development practices for their software development projects. Specifically, we noted that, as of August 31, 2015, 22 federal agencies had reported on the Dashboard that 300 of 469 active software development projects (approximately 64 percent) were planning to deliver usable functionality every 6 months for fiscal year 2016, as required by OMB guidance. For the remaining 169 projects (or 36 percent) that were reported as not planning to deliver functionality every 6 months, agencies provided a variety of explanations for not achieving that goal. These included project complexity, the lack of an established project release schedule, or that the project was not a software development project. Further, in conducting an in-depth review of seven selected agencies’ software development projects, we determined that 45 percent of the projects delivered functionality every 6 months for fiscal year 2015 and 55 percent planned to do so in fiscal year 2016. However, significant differences existed between the delivery rates that the agencies reported to us and what they reported on the Dashboard. For example, for four agencies (the Departments of Commerce, Education, Health and Human Services, and Treasury), the percentage of delivery reported to us was at least 10 percentage points lower than what was reported on the Dashboard. These differences were due to (1) our identification of fewer software development projects than agencies reported on the Dashboard and (2) the fact that information reported to us was generally more current than the information reported on the Dashboard. We concluded that, by not having up-to-date information on the Dashboard about whether the project is a software development project and about the extent to which projects are delivering functionality, these seven agencies were at risk that OMB and key stakeholders may make decisions regarding the agencies’ investments without the most current and accurate information. As such, we recommended that the seven selected agencies review major IT investment project data reported on the Dashboard and update the information as appropriate, ensuring that these data are consistent across all reporting channels. Finally, while OMB has issued guidance requiring agency CIOs to certify that each major IT investment’s plan for the current year adequately implements incremental development, only three agencies (the Departments of Commerce, Homeland Security, and Transportation) had defined processes and policies intended to ensure that the CIOs certify that major IT investments are adequately implementing incremental development. Accordingly, we recommended that the remaining four agencies—the Departments of Defense, Education, Health and Human Services, and the Treasury—establish policies and processes for certifying that major IT investments adequately use incremental development. The Departments of Education and Health and Human Services agreed with our recommendation, while the Department of Defense disagreed and stated that its existing policies address the use of incremental development. 
However, we noted that the department’s policies did not comply with OMB’s guidance and that we continued to believe our recommendation was appropriate. The Department of the Treasury did not comment on its recommendation. More recently, in November 2017, we reported that agencies needed to improve their certification of incremental development. Specifically, agencies reported that 62 percent of major IT software development investments were certified by the agency CIO for implementing adequate incremental development in fiscal year 2017, as required by FITARA as of August 2016. Table 1 identifies the number of federal agency major IT software development investments certified for adequate incremental development, as reported on the IT Dashboard for fiscal year 2017. Officials from 21 of the 24 agencies in our review reported that challenges hindered their ability to implement incremental development, which included: (1) inefficient governance processes; (2) procurement delays; and (3) organizational changes associated with transitioning from a traditional software methodology that takes years to deliver a product, to incremental development, which delivers products in shorter time frames. Nevertheless, 21 agencies reported that the certification process was beneficial because they used the information from the process to assist with identifying investments that could more effectively use an incremental approach, and used lessons learned to improve the agencies’ incremental processes. In addition, as of August 2017, only 4 of the 24 agencies had clearly defined CIO incremental development certification policies and processes that contained descriptions of the role of the CIO in the process and how the CIO’s certification will be documented; and included definitions of incremental development and time frames for delivering functionality consistent with OMB guidance. Figure 7 summarizes our analysis of agencies’ policies for CIO certification of the adequate use of incremental development in IT investments. Lastly, we reported that OMB’s capital planning guidance for fiscal year 2018 (issued in June 2016) lacked clarity regarding how agencies were to address the requirement for certifying adequate incremental development. While the 2018 guidance stated that agency CIOs are to provide the certifications needed to demonstrate compliance with FITARA, the guidance did not include a specific reference to the provision requiring CIO certification of adequate incremental development. We noted that, as a result of this change, OMB placed the burden on agencies to know and understand how to demonstrate compliance with FITARA’s incremental development provision. Further, because of the lack of clarity in the guidance as to what agencies were to provide, OMB could not demonstrate how the fiscal year 2018 guidance ensured that agencies provided the certifications specifically called for in the law. Accordingly, in August 2017, OMB issued its fiscal year 2019 guidance, which addressed the weaknesses we identified in the previous fiscal year’s guidance. Specifically, the revised guidance requires agency CIOs to make an explicit statement regarding the extent to which the CIO is able to certify the use of incremental development, and to include a copy of that statement in the agency’s public congressional budget justification materials. As part of the statement, an agency CIO must also identify which specific bureaus or offices are using incremental development on all of their investments. 
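The incremental development percentages cited above, such as 300 of 469 active software projects planning 6-month delivery, reduce to a simple rate calculation. The sketch below is illustrative only; the record format is hypothetical and is not the Dashboard's schema.

```python
# Illustrative sketch only; the record format is hypothetical and is not the
# IT Dashboard's schema.
def six_month_delivery_rate(projects):
    """Share of projects planning to deliver usable functionality every 6 months."""
    projects = list(projects)
    if not projects:
        return 0.0
    planning = sum(1 for p in projects if p["delivers_every_6_months"])
    return planning / len(projects)

# Using the August 2015 Dashboard figures cited earlier (300 of 469 active
# software development projects), the rate is roughly 64 percent.
sample = ([{"delivers_every_6_months": True}] * 300
          + [{"delivers_every_6_months": False}] * 169)
print(f"{six_month_delivery_rate(sample):.0%}")   # 64%
```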
In our November 2017 report, we made 19 recommendations to 17 agencies to improve reporting and certification of incremental development. Eleven agencies agreed with our recommendations, 1 partially agreed, and 5 did not state whether they agreed or disagreed. OMB disagreed with several of our conclusions, which we continued to believe were valid. In total, from May 2014 through November 2017, we have made 42 recommendations to OMB and agencies to improve their implementation of incremental development. As of November 2017, 34 of our recommendations remained open. Agencies Need to Better Manage Software Licenses to Achieve Savings Federal agencies engage in thousands of software licensing agreements annually. The objective of software license management is to manage, control, and protect an organization’s software assets. Effective management of these licenses can help avoid purchasing too many licenses, which can result in unused software, as well as too few licenses, which can result in noncompliance with license terms and cause the imposition of additional fees. As part of its PortfolioStat initiative, OMB has developed policy that addresses software licenses. This policy requires agencies to conduct an annual, agency-wide IT portfolio review to, among other things, reduce commodity IT spending. Such areas of spending could include software licenses. In May 2014, we reported on federal agencies’ management of software licenses and determined that better management was needed to achieve significant savings government-wide. In particular, 22 of the 24 major agencies did not have comprehensive license policies and only 2 had comprehensive license inventories. In addition, we identified five leading software license management practices, and the agencies’ implementation of these practices varied. As a result of agencies’ mixed management of software licensing, agencies’ oversight of software license spending was limited or lacking, thus potentially leading to missed savings. However, the potential savings could be significant considering that, in fiscal year 2012, 1 major federal agency reported saving approximately $181 million by consolidating its enterprise license agreements, even when its oversight process was ad hoc. Accordingly, we recommended that OMB issue needed guidance to agencies; we also made 135 recommendations to the 24 agencies to improve their policies and practices for managing licenses. Among other things, we recommended that the agencies regularly track and maintain a comprehensive inventory of software licenses and analyze the inventory to identify opportunities to reduce costs and better inform investment decision making. Most agencies generally agreed with the recommendations or had no comments. As of November 2017, 112 of the recommendations had not been implemented. Table 2 reflects the extent to which agencies implemented recommendations in these areas. In conclusion, with the enactment of FITARA, the federal government has an opportunity to save billions of dollars; improve the transparency and management of IT acquisitions and operations; and to strengthen the authority of CIOs to provide needed direction and oversight. The forum we held also recommended that CIOs be given more authority, and noted the important role played by the Federal CIO. 
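As context for the software license management practices discussed above, the sketch below illustrates the basic reconciliation of licenses purchased against licenses deployed, the step that flags both unused seats and compliance gaps. The inventory shown is hypothetical and is not drawn from any agency's data.

```python
# Hypothetical inventory, for illustration only; not any agency's actual data.
purchased = {"OfficeSuite": 500, "CADTool": 40, "DBMS": 25}   # seats bought
deployed = {"OfficeSuite": 430, "CADTool": 55, "DBMS": 25}    # seats in use

for title, bought in purchased.items():
    in_use = deployed.get(title, 0)
    if in_use > bought:
        print(f"{title}: {in_use - bought} seats over license (compliance risk)")
    elif in_use < bought:
        print(f"{title}: {bought - in_use} unused seats (potential savings)")
    else:
        print(f"{title}: purchases and deployments match")
```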
Most agencies have taken steps to improve the management of IT acquisitions and operations by implementing key FITARA initiatives, including data center consolidation, efforts to increase transparency via OMB's IT Dashboard, incremental development, and management of software licenses; and they have continued to address recommendations we have made over the past several years. However, additional improvements are needed, and further efforts by OMB and federal agencies to implement our previous recommendations would better position them to fully implement FITARA. To help ensure that these efforts succeed, OMB's and agencies' continued implementation of FITARA is essential. In addition, we will continue to monitor agencies' implementation of our previous recommendations.

Chairmen Meadows and Hurd, Ranking Members Connolly and Kelly, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

GAO Contacts and Staff Acknowledgments

If you or your staff have any questions about this testimony, please contact Dave Powner, Director, Information Technology at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Kevin Walsh (Assistant Director), Chris Businsky, Rebecca Eyler, Meredith Raymond, and Bradley Roach (Analyst in Charge).

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

The federal government plans to invest almost $96 billion on IT in fiscal year 2018. Historically, these investments have too often failed, incurred cost overruns and schedule slippages, or contributed little to mission-related outcomes. Accordingly, in December 2014, Congress enacted FITARA, aimed at improving agencies' acquisitions of IT. Further, in February 2015, GAO added improving the management of IT acquisitions and operations to its high-risk list. This statement summarizes agencies' progress in improving the management of IT acquisitions and operations. It is based on GAO's prior and recently published reports on (1) data center consolidation, (2) risk levels of major investments as reported on OMB's IT Dashboard, (3) implementation of incremental development practices, and (4) management of software licenses.

What GAO Found

The Office of Management and Budget (OMB) and federal agencies have taken steps to improve the management of information technology (IT) acquisitions and operations through a series of initiatives, and as of November 2017, had fully implemented about 56 percent of the approximately 800 related GAO recommendations made from fiscal years 2010 through 2015. However, important additional actions are needed.

Consolidating data centers. OMB launched an initiative in 2010 to reduce data centers, which was reinforced by the Federal Information Technology Acquisition Reform Act (FITARA) in 2014. However, in a series of reports that GAO issued over the past 6 years, it noted that, while data center consolidation could potentially save the federal government billions of dollars, weaknesses existed in several areas, including agencies' data center consolidation plans, data center optimization, and OMB's tracking and reporting on related cost savings. These reports contained a matter for Congressional consideration, and a total of 160 recommendations to OMB and 24 agencies, to improve the execution and oversight of the initiative. Most agencies and OMB agreed with the recommendations or had no comments. As of November 2017, 84 of the recommendations remained open.

Enhancing transparency. OMB's IT Dashboard provides information on major investments at federal agencies, including ratings from Chief Information Officers that should reflect the level of risk facing an investment. Over the past 6 years, GAO has issued a series of reports about the Dashboard that noted both significant steps OMB has taken to enhance the oversight, transparency, and accountability of federal IT investments by creating its Dashboard, as well as concerns about the accuracy and reliability of the data. In total, GAO has made 47 recommendations to OMB and federal agencies to help improve the accuracy and reliability of the information on the Dashboard and to increase its availability. Most agencies agreed with the recommendations or had no comments. As of November 2017, 25 of these recommendations remained open.

Implementing incremental development. OMB has emphasized the need for agencies to deliver investments in smaller parts, or increments, in order to reduce risk and deliver capabilities more quickly. Since 2012, OMB has required investments to deliver functionality every 6 months. Further, GAO has issued reports highlighting additional actions needed by OMB and agencies to improve their implementation of incremental development. In these reports, GAO made 42 recommendations. Most agencies agreed or did not comment on the recommendations.
As of November 2017, 34 of the recommendations remained open.

Managing software licenses. Effective management of software licenses can help avoid purchasing too many licenses that result in unused software. In May 2014, GAO reported that better management of licenses was needed to achieve savings, and made 136 recommendations to improve such management. Most agencies generally agreed with the recommendations or had no comments. As of November 2017, 112 of the recommendations remained open.

What GAO Recommends

From fiscal years 2010 through 2015, GAO made about 800 recommendations to OMB and federal agencies to address shortcomings in IT acquisitions and operations, including recommendations to improve the oversight and execution of the data center consolidation initiative, the accuracy and reliability of the Dashboard, incremental development policies, and software license management. Most agencies agreed with GAO's recommendations and had taken some actions or had no comments. In addition, from fiscal year 2016 to present, GAO has made more than 200 new recommendations in this area. GAO will continue to monitor agencies' implementation of these recommendations.
Background

Authentication Provides IRS Reasonable Assurance That It Is Interacting with Legitimate Taxpayers

IRS authenticates taxpayers to provide the agency with reasonable assurance that it is interacting with the legitimate taxpayer. IRS verifies that it is interacting with the legitimate taxpayer through identity proofing and authentication. Identity proofing is the process of first establishing that people are actually who they claim to be. Authentication is the process of verifying that returning users are who they say they are by requiring the use of one or more authenticators—such as a password, a cryptographic key, or a fingerprint—before allowing them access to sensitive data or a resource. In this report, we refer to both steps collectively as "authentication." For high-risk interactions, such as access to prior year tax information, authentication can help IRS avoid improperly disclosing PII or issuing a fraudulent refund.

Authentication is particularly important for combatting IDT refund fraud, which occurs when a fraudster obtains an individual's SSN, date of birth, or other PII and uses it to file a fraudulent tax return seeking a refund. IDT refund fraud can also affect businesses. Specifically, fraudsters can use business information to file a fraudulent corporate return requesting a refund. According to IRS officials, fraudsters can file false employer Form W-2, Wage and Tax Statements (W-2) to support fraudulent individual returns seeking refunds. We have previously reported that when IRS suspects that a tax return is fraudulent, it will stop the return from further processing, and attempt to notify and authenticate the taxpayer before issuing the refund.

Authentication can be accomplished using different methods depending on the risk of the interaction.

Single-factor authentication: Useful when someone wants to access a low-risk system or service, this method may require only a user name and password.

Multi-factor authentication: For high-risk interactions such as access to systems that include PII or financial information, this method requires at least two of the following: "something you know" (e.g., a user name and password); "something you have" (e.g., a mobile phone or cryptographic key); or "something you are" (e.g., a fingerprint or other biometric data).

Designing authentication programs involves a balancing act—IRS needs to prevent fraudsters from passing authentication using stolen taxpayer information, but it must balance that against the burden on legitimate taxpayers who must also authenticate. If IRS makes the authentication process too stringent, legitimate taxpayers may not be able to successfully authenticate to, for example, access their prior year tax information or have IRS release a frozen refund. Conversely, if the process is too easy, fraudsters will likely be able to authenticate as easily as legitimate taxpayers. Industry representatives told us that identity proofing and authentication are becoming more difficult with the wide availability of PII. Further, according to NIST, it is challenging for organizations to authenticate users remotely via a web application because the processes and technologies to establish and use digital identities offer multiple opportunities for impersonation or other attacks. These interactions may become even more difficult and risky for organizations like IRS, who may interact with a taxpayer only once a year.
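To illustrate the difference between the single- and multi-factor methods described above, the sketch below checks a password ("something you know") and a one-time code delivered to a registered device ("something you have"). It is a minimal illustration only; the record layout, function names, and six-digit code format are assumptions made for this example and do not reflect IRS systems.

```python
import hashlib
import hmac
import secrets

# Illustrative user record; a real system would store salted password hashes
# and verified device identifiers, never plaintext secrets.
USER_RECORD = {
    "password_hash": hashlib.sha256(b"correct horse battery staple").hexdigest(),
    "registered_phone": "+15555550100",  # hypothetical number
}

def check_knowledge_factor(password: str) -> bool:
    """'Something you know': compare a hash of the supplied password."""
    supplied = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(supplied, USER_RECORD["password_hash"])

def issue_possession_challenge() -> str:
    """'Something you have': generate a six-digit code to send to the
    registered phone (delivery itself is out of scope for this sketch)."""
    return f"{secrets.randbelow(10**6):06d}"

def authenticate(password: str, code_entered: str, code_sent: str) -> bool:
    """Multi-factor check: both factors must pass; a single-factor system
    would stop after the password check."""
    knows = check_knowledge_factor(password)
    has = hmac.compare_digest(code_entered, code_sent)
    return knows and has

if __name__ == "__main__":
    code = issue_possession_challenge()
    print("Single factor only:", check_knowledge_factor("correct horse battery staple"))
    print("Both factors:", authenticate("correct horse battery staple", code, code))
```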
As shown by the data breaches discussed at the beginning of this report, fraudsters are persistent in their efforts to exploit weaknesses in online systems and, in the context of IRS, access sensitive taxpayer information. For example, IRS reported that, between January and March 2017, fraudsters were able to use PII to access information from 100,000 taxpayer accounts through IRS’s Data Retrieval Tool. According to the Treasury Inspector General for Tax Administration, identity thieves may have used PII obtained outside the tax system to start the Free Application for Federal Student Aid (FAFSA) application process and access tax information through the Data Retrieval Tool. Further, we have previously reported that fraudsters can use PII obtained in a data breach to more easily create fraudulent returns that resemble authentic tax returns, making it more difficult for IRS to detect potential fraud. Even as IRS has adapted its IDT defenses, fraudsters have developed more complex and sophisticated methods to bypass those defenses and commit fraud undetected. IDT refund fraud affects IRS, state revenue offices, tax preparers, tax software companies, and financial institutions. According to industry representatives, as these entities improve security in one area prone to fraud, fraudsters’ methods evolve to target a weaker area. For example, in March 2016, IRS alerted payroll and human resource professionals of a phishing e-mail scheme in which fraudsters posed as company executives and requested personal information on employees via e-mail, including W-2s. With this information, fraudsters can imitate the legitimate taxpayer and file fraudulent tax returns seeking refunds. In January 2018, IRS reported that the agency received about 100 reports of W-2 phishing schemes in 2016 and about 900 reports in 2017. IRS also reported that more than 200 employers, affecting hundreds of thousands of employees, were victimized by W-2 phishing schemes in 2017. IRS Has Broad Efforts Underway to Address IDT and Authentication Challenges IRS is working to address these challenges, in part, by collaborating with industry—including tax software companies, the tax preparer community, and financial institutions—as well as state partners. In March 2015, the former IRS Commissioner convened a Security Summit with industry and states to improve information sharing and fraud detection and to address common challenges. The Summit led to the creation of seven workgroups to combat IDT refund fraud across multiple platforms. Each workgroup is led by three co-leads—one each from IRS, state departments of revenue or state associations, and industry partners. These workgroups collaborate on initiatives to improve IDT refund fraud prevention and detection, including authentication. In 2015, IRS also established the Identity Assurance Office (IAO) to increase insight into authentication and fraud detection needs agency- wide, including authentication services delivered via four channels: telephone, online, in-person, and correspondence (i.e., postal mail— hereafter referred to as mail—or fax). Among other responsibilities, IAO works with stakeholders across IRS to review the agency’s various authentication programs, including assessing risks of current and planned authentication efforts across the four channels and identifying ways to mitigate these risks. 
In December 2016, IAO released its IRS Identity Assurance Strategy and Roadmap (Roadmap) for developing a modern and secure authentication environment for all taxpayers, regardless of how they interact with IRS.

NIST Established New Requirements for Digital Authentication

Among other things, the National Institute of Standards and Technology (NIST) develops and maintains standards, guidelines, recommendations, and research on the security and privacy of information and information systems. In June 2017, NIST released guidance on digital authentication to help agencies improve the security of their identity-proofing and authentication programs. In its new guidance, NIST breaks down the digital identity environment into three separate components of assurance:

1. Identity proofing: establishing that the person is actually who they claim to be;

2. Authentication: establishing that the person attempting to access a service is in control of one or more valid authenticators associated with that person's identity; and

3. Federation: the concept that one set of user credentials can be used to access multiple systems.

The guidance directs agencies to assess the risk for each component of identity assurance, rather than conducting a single risk assessment for the entire process. According to NIST officials, this new approach provides flexibility in choosing identity proofing and authentication solutions; aligns with existing, standards-based market offerings; is modular and cost-effective; and enhances individual privacy. In addition to NIST's new requirements for authentication, recent technology advances and private-sector innovation are providing new options for identity proofing and authenticating users, including in cases where, for example, IRS interacts with taxpayers once a year. Some examples of these technologies include physical biometrics, such as facial recognition, as well as behavioral biometrics, such as voice patterns, computer keystroke or mouse use patterns, swipe patterns, and gait analysis.

IRS Incorporates Risk and Other Factors to Guide Authentication Decisions for Taxpayer Interactions

IRS Identifies Interactions that Require Authentication and Estimates Risk to Determine Authentication Approach

According to IRS documents and discussions with officials, the agency considers risks to both the taxpayer and IRS when making decisions about how to approach authentication, which is consistent with federal guidelines. In making these decisions, IRS considers how individuals would be affected by the unauthorized release of sensitive information. IRS also considers the impact on the agency, including the potential for financial loss or harm to IRS programs or services, and loss of public trust. In 2016, IRS identified over 100 interactions between the agency and taxpayers that require authentication. The interactions range in risk level and IRS categorized them based on the potential for incorrect payment of refunds, disclosure of taxpayer information, and critical impacts on IRS operations. High-risk interactions include when an individual taxpayer establishes an online account with IRS, which provides access to prior year tax information and other PII, or when a taxpayer is asked to confirm his or her identity before IRS processes what the agency considers to be a potentially fraudulent tax return. Lower-risk interactions include paying a tax bill online.
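One way to picture this risk-based approach is a lookup that maps an interaction's assessed risk to the minimum identity proofing and number of authentication factors required. The sketch below is illustrative only; the risk categories follow the examples above, but the specific control requirements are assumptions rather than IRS or NIST rules.

```python
from enum import Enum

class Risk(Enum):
    LOW = 1       # e.g., paying a tax bill online
    MODERATE = 2
    HIGH = 3      # e.g., opening an online account or confirming identity for a held return

# Illustrative mapping from assessed risk to minimum authentication controls.
REQUIREMENTS = {
    Risk.LOW:      {"identity_proofing": "basic",                         "factors_required": 1},
    Risk.MODERATE: {"identity_proofing": "remote, documented",            "factors_required": 2},
    Risk.HIGH:     {"identity_proofing": "remote or in-person, verified", "factors_required": 2},
}

def required_controls(interaction: str, assessed_risk: Risk) -> dict:
    """Return the minimum controls for an interaction given its assessed risk."""
    controls = dict(REQUIREMENTS[assessed_risk])
    controls["interaction"] = interaction
    return controls

if __name__ == "__main__":
    for name, risk in [("pay a tax bill", Risk.LOW),
                       ("view prior-year transcript", Risk.HIGH)]:
        print(required_controls(name, risk))
```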
According to IRS, as the risk level of taxpayer interactions increases—for example, interactions that involve sensitive financial information—the authentication process becomes more rigorous. This enhanced security helps reduce the possibility that a fraudster can successfully authenticate. Further, if tax professionals want to conduct business with IRS online, such as when working on behalf of a client to file a return or request a prior year’s tax transcript, they must establish an account and authenticate their identity. IRS Can Authenticate Taxpayers through One or More Channels According to IRS, the agency determines the means by which a taxpayer or tax professional can authenticate his or her identity and what data are required during the authentication process to appropriately minimize risk to the agency. IRS officials told us that the agency works to balance potential risks against its resources and mission to provide all taxpayers access to IRS services and support. IRS performs authentication through the following channels. Telephone. Taxpayers can authenticate via telephone with a customer service representative (CSR) for selected higher-risk interactions with IRS, such as in cases of suspected IDT refund fraud. Telephone authentication can require taxpayers to respond to knowledge-based questions that a fraudster would not likely know. For example, for high- risk interactions, taxpayers must answer additional tax return-related questions. Taxpayers who fail to respond correctly to these questions are then required to authenticate in person at a Taxpayer Assistance Center. For certain lower-risk interactions, taxpayers can authenticate through an automated telephone system. In-person. For some interactions with IRS, taxpayers can authenticate their identity directly with an IRS employee at 1 of IRS’s approximately 400 Taxpayer Assistance Centers located throughout the country. Taxpayers may need to present one or more government-issued forms of identification and other documents, such as a utility statement, depending on the level of authentication required for the specific interaction. Online. IRS authenticates taxpayers online for both high-risk and lower- risk interactions. For high-risk interactions such as requesting a tax transcript or looking up an Identity Protection Personal Identification Number (IP PIN), taxpayers must pass a multi-factor authentication process using IRS’s Secure Access platform. IRS launched Secure Access in June 2016 following the Get Transcript data breach and, as of April 2018, was using it for 11 applications including authentication for Get Transcript, IP PIN, and the online account. Officials told us they plan to implement Secure Access for other IRS applications in 2018. Taxpayers authenticating through Secure Access establish an account by providing IRS with a valid e-mail address, basic personal information, and personal financial information. Taxpayers then provide IRS a mobile phone number and IRS sends the phone an activation code that the taxpayer enters online. This step validates that the taxpayer possesses the mobile phone. IRS authenticates returning users via a security code. For lower-risk interactions, taxpayers may authenticate online by answering several knowledge-based questions, such as questions about their current return to learn the status of their refund. Correspondence. 
In some cases, taxpayers can submit documents or request tax information via correspondence, which are then reviewed by IRS and authenticated by matching against information in IRS’s systems. This method can require that IRS send the requested documents (such as a tax transcript) only to the taxpayer’s address of record, or require the taxpayer to include a photocopy of identification. For example, in some instances, taxpayers who cannot authenticate via telephone and cannot travel to a Taxpayer Assistance Center in person may be able to authenticate by mail. Each authentication channel requires different IRS resources. These resources include IRS staff and overhead; contracts with vendors that provide identity verification services; and costs inherent to the specific channel, such as mailing costs. Figure 1 summarizes IRS’s authentication channels and illustrates a number of the interactions that taxpayers or tax professionals can accomplish through one, or several, channels. It also illustrates the differences in costs per transaction. According to IRS data, in-person authentication at a Taxpayer Assistance Center is the most expensive way to authenticate taxpayers (about $89 per interaction), followed by telephone (about $54 per interaction). Online authentication costs the least, at less than $1 per interaction. According to the National Taxpayer Advocate, while requiring the appropriate level of authentication is necessary to protect IRS against fraudsters, the agency also needs to offer taxpayers a range of options for interacting with IRS. IRS’s Authentication Programs and Services Are Designed to Reduce Fraud In this report, we focus on four key IRS programs and services that require authentication: Taxpayer Protection Program (TPP). Through TPP, IRS reviews tax returns that are flagged by IRS’s IDT filters as potentially fraudulent, such as when a return includes characteristics of known fraud schemes. IRS sends a letter notifying taxpayers that they must authenticate their identity before IRS will process the return or issue a refund. According to IRS, in fiscal year 2017, more than 1.9 million taxpayers received such a notification, and IRS authenticated about 1.17 million of them. These taxpayers could verify their identity via telephone, in-person, and correspondence. In August 2016, IRS suspended its TPP online authentication service because of potential system security weaknesses. In mid-March 2018, IRS relaunched the first phase of a more secure TPP online authentication service, which is discussed later in this report. Get Transcript. This service allows individual taxpayers to request and receive a copy of their prior years’ tax information. The transcript contains information from the taxpayer’s tax filing history, such as information from Form 1040, U.S. Individual Income Tax Return, that can be used, for example, when applying for a mortgage or student loan, or to electronically file (e-file) an upcoming tax return. Taxpayers can request the transcript online or in-person (to be delivered online or via mail); over the telephone (to be delivered via correspondence); or by correspondence (to be delivered via mail). Taxpayers must provide authentication information before IRS will process their request. According to IRS, in fiscal year 2017, IRS delivered about 26.4 million transcripts, with about 59 percent of transcripts delivered online. IP PIN. IRS assigns each victim of IDT a single-use identification number to be used to file a future electronic or paper tax return. 
IRS also offers taxpayers in Florida, Georgia, and the District of Columbia the option to request an IP PIN to help prevent IDT in these high tax- related IDT locations. IRS automatically rejects e-filed returns if they do not include the IP PIN and will delay paper returns for extra examination when taxpayers file without the IP PIN. According to IRS, the agency mailed 3.5 million IP PINs to be used during the 2017 filing season. IRS’s Online Services. IRS has developed a number of online services that require taxpayers and tax professionals to authenticate before accessing information online. For example, taxpayers who have established a verified online account can set up an online payment plan. Taxpayers can also check the status of their refund, as well as update their address of record. Taxpayers can also use IRS’s mobile application for some of these actions, such as checking the status of a refund or making a payment to IRS. Similarly, through IRS’s e-Services, tax professionals who have been vetted and approved by IRS can manage their e-file accounts, file tax returns on behalf of clients, and view their clients’ tax return information. As noted in figure 2, the volume of taxpayers authenticated for each IRS program or service varies by channel. Further, although TPP costs IRS more than Get Transcript and affects far fewer taxpayers, IRS reported that TPP helped prevent $5.3 billion in lost tax revenue in calendar year 2016. IRS Has Made Progress on Its Authentication Efforts, but Has Not Prioritized Authentication Improvements and Is Not Sufficiently Assessing and Monitoring Risks for All Channels IRS Has Begun to Implement Its Authentication Strategy, but Has Not Articulated Priorities and Resource Needs IRS has identified high-level strategic campaigns, or efforts to enhance identity assurance, in its Identity Assurance Strategy and Roadmap (Roadmap) and has established a business process to support these efforts. However, IRS has not articulated relative priorities for the foundational initiatives supporting its strategic efforts or the resources it will require to complete them. As discussed earlier, IRS’s 2016 Roadmap is the agency’s plan for developing a modern and secure authentication environment for all taxpayers regardless of how they interact with IRS. The Roadmap outlines six core authentication objectives, followed by 10 high-level strategic efforts, and 14 foundational initiatives to help IRS address its authentication challenges and identify opportunities for future investment. (See appendix II.) Further, IRS has identified about 90 activities to support its foundational initiatives and the responsible organizations and general duration to complete them. These initiatives include, for example, implementing a risk assessment framework that can be applied across all authentication channels and services; developing a framework of identity proofing and authentication requirements for third parties accessing and using IRS data and services; and improving taxpayer assurance by sending automated electronic alerts to taxpayers, such as when they file a return. To support implementation of these initiatives, IRS established a 12- member executive governance board. Board members are senior executives from business units across IRS, including the Identity Assurance Office (IAO), IT Applications Development, IT Cyber Security, and Wage and Investment. 
The board helps to monitor progress, risks, and challenges associated with implementing its Roadmap, and has generally met monthly since January 2017. Our prior work on government performance has identified several leading practices for planning at the program or initiative level. Among other things, these practices call for strategic plans to contain the goals and objectives of a program and the human, financial, and information resources required to complete them. Leading practices also call for agencies to develop estimates of benefits and costs to help prioritize new investments. Following these practices can help agencies establish priorities in a complex environment. IRS has made progress on some of the strategic efforts identified in its Roadmap. For example, consistent with its core objectives, IRS has taken steps to enhance fraud detection by improving telephone authentication procedures and expanding its online authentication services. In October 2016, IRS implemented a new process for high-risk telephone authentication, which includes generating questions for the taxpayer using data from internal IRS systems instead of from third-party data or credit reporting agencies. In addition, in March 2018, IRS launched the first phase of its improved online authentication service for TPP, called ID Verify. According to IRS officials, the first phase of the service will be available to taxpayers who did not file the return in question and appear to be victims of IDT refund fraud. The second phase, which IRS plans to implement later in 2018, will expand the service to all taxpayers selected for TPP. While IRS’s Roadmap demonstrates the breadth of the agency’s strategic vision and core objectives, it does not articulate the resources IRS needs to implement any of its 14 foundational initiatives and their supporting activities. For example, one of IRS’s foundational initiatives is to send event-driven notifications to taxpayers, such as when they file a return or request a tax transcript. Such notifications could help IRS detect potentially fraudulent activity at the earliest stage and improve authentication of tax returns. The Roadmap identifies seven supporting activities for this foundational initiative. One is to provide taxpayers with greater control over their online accounts. Another supporting activity is to determine methods for sending notifications to taxpayers about activity on their account. However, IRS has not identified the resources required to complete these activities, and the Roadmap notes that six of the seven activities will take between 6 months to 3 years to complete. In December 2017, IRS officials stated that they had developed business requirements for the foundational initiative to give taxpayers greater control over their online accounts. However, IRS has not identified funding for the initiative’s other supporting activities—such as developing requirements to send push notifications to taxpayers—and implementing them will depend on the availability of future resources. Further, while IRS has developed a business process that would help the agency prioritize initiatives, the process has not been fully implemented. In 2015, we recommended that IRS estimate and document the costs, benefits, and risks of possible options for taxpayer authentication, in accordance with OMB and NIST guidance. Consistent with our recommendation and its Roadmap, IRS developed a process to assess the costs, benefits, and risks of current and potential authentication tools. 
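The report does not describe the internal mechanics of this process, but cost-benefit-risk comparisons of this kind are often structured as a weighted scoring of candidate options, as in the sketch below. The criteria, weights, option names, and scores are invented for illustration and are not IRS's actual business decision model.

```python
# Illustrative weighted scoring of candidate authentication tools.
# Criterion weights sum to 1.0; scores run from 1 (poor) to 5 (strong).
WEIGHTS = {"fraud_reduction": 0.4, "taxpayer_burden": 0.3, "cost": 0.2, "time_to_deploy": 0.1}

CANDIDATES = {
    # Hypothetical options and scores, not results of IRS's analysis.
    "text-to-voice code on landline": {"fraud_reduction": 3, "taxpayer_burden": 4, "cost": 4, "time_to_deploy": 4},
    "third-party identity proofing":  {"fraud_reduction": 4, "taxpayer_burden": 3, "cost": 2, "time_to_deploy": 2},
}

def weighted_score(scores: dict) -> float:
    """Combine criterion scores into a single weighted total."""
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

if __name__ == "__main__":
    ranked = sorted(CANDIDATES.items(), key=lambda item: -weighted_score(item[1]))
    for option, scores in ranked:
        print(f"{option}: {weighted_score(scores):.2f}")
```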
In May 2017, IRS implemented its business decision model to analyze and improve its online taxpayer authentication services and provided us with results from an analysis for implementing a text-to-voice functionality for IRS’s Secure Access online authentication platform. This function would allow taxpayers the option of receiving an automated voice code for authentication on a verified landline (instead of a text message on a mobile phone). As a result of this analysis, IRS approved the proposal to implement this tool. However, in December 2017, IRS officials stated that the text-to-voice tool is not moving forward because of other competing IT improvements and funding constraints. Further, IAO has not yet applied the business decision model to other potential authentication initiatives, such as those identified in its Roadmap. In December 2017, IRS officials stated that each of the strategic efforts and foundational initiatives identified in the Roadmap are a high priority, and they are working to address them concurrently while balancing the availability of resources against the greatest threats to the tax environment. We recognize that a strategy is necessarily high-level and that IRS must remain flexible and use necessary resources to respond to unexpected threats. At the same time, clearly identifying resources and prioritizing its initiatives and activities will help clarify the relationships between IRS’s authentication efforts and resource needs relative to expected benefits. Further, such efforts may also help IRS establish clearer timelines and better respond to unexpected events. IRS Has Not Established a Policy to Assess Risks for Telephone, In-Person, and Correspondence Authentication While IRS has generally performed regular risk assessments on its online authentication applications, it does not perform comparable assessments to identify, assess, and mitigate risks for its telephone, in-person, and correspondence authentication channels. Federal guidance directs agencies to regularly assess and address the risks of government IT systems. Specifically, OMB requires agencies to conduct annual risk assessments on IT systems performing remote authentication. The assessments should also be conducted when the agency plans to modify its business processes or technology. This includes reviewing new and existing electronic transactions to ensure that authentication processes provide the appropriate level of assurance outlined in NIST guidance. While federal guidelines broadly require agencies to identify and manage risks and establish specific requirements for programs using online authentication, no corresponding federal guidelines exist for telephone, in-person, and correspondence authentication, although we have previously reported that federal guidance and standards are applicable to IRS’s phone authentication. Similarly, our Framework for Managing Fraud Risks in Federal Programs directs agencies to conduct fraud risk assessments at regular intervals and when there are changes to the program operating environment, as assessing fraud risks is an iterative process. Previously, such risk assessments have helped IRS identify security weaknesses and, in some cases, have led the agency to take an authentication service offline. For example, in response to a recommendation we made in May 2016, IRS performed an updated risk assessment on TPP’s online authentication service, a key defense against IDT refund fraud. 
Based on the results of this assessment, IRS disabled its online authentication service until it could appropriately address the security weaknesses that it identified. Consistent with federal guidance, IRS has identified and analyzed risks associated with services and programs requiring online authentication, including TPP, Get Transcript, and IP PIN, among others. Further, IRS has made recent progress in updating risk assessments and improving security for its online authentication applications. Specifically, between June 2017 and April 2018, IRS reassessed authentication risk levels for some online applications, mitigated risks by moving additional applications behind its Secure Access authentication platform, and identified other compensating controls to appropriately protect its systems. In December 2017, IRS officials stated that they were working to bring remaining authentication applications in line with their most recent risk assessment. They expected to complete this work by the last quarter of fiscal year 2018. IRS has efforts underway to identify risks for telephone, in-person, and correspondence authentication, but has made limited progress implementing its process for assessing risks for all taxpayer authentication channels. As previously discussed, in 2016, IRS identified over 100 interactions that require taxpayer authentication and categorized these into three high-level risk outcomes. According to IRS’s risk assessment process, the next step is for IRS business units to assess the effects of incorrect authentication for each interaction or program, identify gaps in existing processes, and develop options to address the gaps. IRS officials stated that this process involves conducting scenario-based workshops with subject matter experts. However, as of March 2018, this process has only been applied to TPP and one other IRS business practice. In early 2017, IRS conducted a 2- day, internal, scenario-based workshop to assess risks and impacts and to identify gaps for TPP authentication. Workshop participants identified 45 short-, medium-, and long-term potential enhancements to TPP’s authentication processes. However, IRS had not performed similar risk impact assessments for other programs that rely on telephone, in-person, and correspondence authentication—including Get Transcript and IP PIN—and officials do not have a plan or timeline for conducting these assessments. Further, IRS has not developed a plan with time frames to address the deficiencies it identified for TPP. In December 2017, IRS officials stated they are reviewing the 45 TPP enhancements identified by workshop participants, but have no clear plans to implement them because of resource constraints. IRS has made limited overall progress on this front because it does not have a policy that requires regular assessments and timely mitigation of identified issues for telephone, in-person, and correspondence authentication, as is required for online authentication programs and services. IRS also does not have guidelines for mitigating authentication risks to these channels in a timely manner. In late November 2017, the Director of IAO stated that IAO alone does not have the authority to create and implement a policy that compels other IRS business units to use its risk assessment process or mitigate issues in a timely manner. Officials from other IRS business units stated that they continually assess risks to telephone, in-person and correspondence authentication, even without a policy to do so. 
However, IRS could not provide evidence of such prior risk assessments or risk mitigation plans. IRS’s Roadmap states that it will implement a secure authentication platform for taxpayers regardless of how they interact with IRS—online, via telephone, in- person, or correspondence—to help ensure that information is secure and that the agency is interacting with a legitimate taxpayer. Without a policy for conducting risk assessments for these channels and addressing deficiencies in a timely manner, IRS may underestimate known risks and overlook emerging threats to the tax environment. As a result, these channels may be more vulnerable to fraudulent activity, including unauthorized attempts to access taxpayer information. IRS Lacks Internal Controls to Effectively Monitor Telephone, In- Person, and Correspondence Authentication IRS has established internal controls including procedures and mechanisms to monitor performance of online authentication, but does not have similar controls in place to monitor the performance of telephone, in-person, and correspondence authentication. Federal standards for internal control call for agencies to design their information systems in a way that meets operational needs and allows the agency to respond to risks. Further, agencies are to collect and use quality information to make informed decisions. Quality information is appropriate, current, complete, accurate, accessible, and provided on a timely basis. Further, to have an effective internal control system, agencies should also establish procedures to monitor and evaluate the performance of programs and systems as part of the normal course of operations. To this end, monitoring should be performed on an ongoing basis, and any deficiencies the agency has identified should be addressed in a timely manner. Monitoring activities are even more critical in an environment where the risk of fraud is high because such efforts allow an agency to quickly respond to emerging risks to minimize the impact of fraud. Further, IRS’s Strategic Plan calls for its organizations to use analytics and research to improve program effectiveness and foster a timely, data-driven decision-making environment. According to IRS documentation and discussions with officials, the Secure Access online authentication platform allows IRS to conduct near real-time monitoring of taxpayer authentication outcomes. Specifically, for each online service using Secure Access, IRS is able to monitor on a daily basis how many taxpayers registered for an account; rates of successful and unsuccessful identity proofing and verification; and suspicious user patterns, such as multiple login attempts. IRS is also able to monitor system error codes for specific steps in the authentication process, such as when the secure messaging process fails. IRS officials stated that this enhanced performance monitoring of online authentication began in June 2016, and it is helping IRS determine where in the authentication process taxpayers may be having difficulties and potential causes of the problem. However, IRS does not have comparable procedures and mechanisms to monitor authentication outcomes for telephone, in-person, and correspondence authentication, particularly for TPP, one of IRS’s key defenses against IDT refund fraud. Further, since August 2016, taxpayers have been able to authenticate using only these channels. 
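For comparison, ongoing monitoring of the kind described for Secure Access could, in principle, be applied to any channel by tallying each day's outcomes and flagging unusual spikes in failures. The log format, channel names, and threshold in the sketch below are assumptions for illustration, not IRS data or procedures.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Illustrative daily log of authentication attempts: (date, channel, outcome).
LOG = [
    ("2018-03-01", "telephone", "pass"), ("2018-03-01", "telephone", "fail"),
    ("2018-03-02", "telephone", "pass"), ("2018-03-02", "telephone", "pass"),
    ("2018-03-03", "telephone", "fail"), ("2018-03-03", "telephone", "fail"),
]

def daily_failure_rates(log, channel):
    """Compute the share of failed authentications per day for one channel."""
    totals, fails = defaultdict(int), defaultdict(int)
    for date, chan, outcome in log:
        if chan != channel:
            continue
        totals[date] += 1
        fails[date] += outcome == "fail"
    return {date: fails[date] / totals[date] for date in sorted(totals)}

def flag_unusual_days(rates, sigmas=1.0):
    """Flag days whose failure rate sits well above the series average."""
    values = list(rates.values())
    average, spread = mean(values), pstdev(values)
    return [day for day, rate in rates.items()
            if spread and rate > average + sigmas * spread]

if __name__ == "__main__":
    rates = daily_failure_rates(LOG, "telephone")
    print(rates)                     # {'2018-03-01': 0.5, '2018-03-02': 0.0, '2018-03-03': 1.0}
    print(flag_unusual_days(rates))  # ['2018-03-03']
```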
IRS currently uses its Account Management Services (AMS) to capture telephone and in-person authentication outcomes for TPP; however, as discussed below, this is not an effective mechanism for monitoring authentication outcomes. AMS is IRS’s primary application for recording, storing, and retrieving information on all types of taxpayer interactions over time. IRS’s customer service representatives (CSR) use AMS to, among other things, record information related to taxpayer authentication performed over the phone or in person for TPP. According to IRS documentation, AMS includes a field where the CSR is to enter the authentication outcome and also an area where the CSR enters notes on the details of the taxpayer interaction. In the context of TPP, IRS officials stated that CSRs use the notes field to record, for example, the reason why the taxpayer failed the authentication process, and other information important for other CSRs to know. IRS also relies on another application to review the status of TPP cases, such as if a case is open or closed. To better understand how CSRs are implementing procedures to capture TPP authentication outcomes in AMS, we analyzed data in AMS from January through October 2017. The result of our analysis and related discussions with IRS officials indicate three primary internal controls issues. First, IRS does not have a reliable, direct mechanism to collect data on the number of taxpayers who pass and fail telephone, in-person, and correspondence authentication. Second, data quality issues make it difficult for IRS to understand why taxpayers may be failing these authentication processes. Third, the IRS organizations responsible for monitoring these channels do not have access to complete AMS data, making it difficult for IRS to identify potential authentication issues and develop solutions to address them. No mechanism to collect reliable, direct data on authentication passes and failures. As previously discussed, when a taxpayer calls IRS or visits a Taxpayer Assistance Center in regard to a TPP letter, the CSR is to enter the result of the authentication (i.e., pass or fail) into AMS with one of nine codes that accurately reflects the authentication outcome. However, AMS does not have a separate, discrete field where the CSR is to enter this information. The field available to capture authentication information is shared with 68 other issue codes, increasing the likelihood that the CSR may select a more generic issue, such as “identity theft” instead of one of the nine codes designated for TPP. Further, one of the TPP outcome codes, called “other issue,” may be too broad for useful analysis. Of the data we reviewed, we found that about one-third of TPP authentication cases were categorized as “other issue,” which provides no information on the authentication outcome. According to IRS’s procedures, this category is to be used in various scenarios, including when IRS does not have enough information to generate questions for authenticating the taxpayer, and in other cases when a taxpayer fails telephone authentication and must go to a Taxpayer Assistance Center. However, by combining all of these issues into one broad category, IRS has limited insight into the size of each particular problem and may be underestimating the number of taxpayers who fail TPP authentication. Further, IRS does not directly capture the results of correspondence- based authentication in AMS and is therefore unable to monitor pass and failure rates for this channel. 
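By contrast, a dedicated, discrete outcome field would let an agency compute pass and failure rates for each channel directly and track how often a catch-all category is used. The sketch below illustrates the idea with invented outcome codes; it is not based on AMS's actual fields or codes.

```python
from collections import Counter

# Illustrative records with a dedicated authentication-outcome field.
# Codes are invented: P = passed, F = failed, O = other issue (catch-all).
RECORDS = [
    {"channel": "telephone", "outcome": "P"},
    {"channel": "telephone", "outcome": "F"},
    {"channel": "telephone", "outcome": "O"},
    {"channel": "in_person", "outcome": "P"},
    {"channel": "in_person", "outcome": "P"},
    {"channel": "correspondence", "outcome": "F"},
]

def summarize(records):
    """Per-channel counts of each outcome code plus pass, fail, and catch-all shares."""
    by_channel = {}
    for record in records:
        by_channel.setdefault(record["channel"], Counter())[record["outcome"]] += 1
    summary = {}
    for channel, counts in by_channel.items():
        total = sum(counts.values())
        summary[channel] = {
            "total": total,
            "pass_rate": counts["P"] / total,
            "fail_rate": counts["F"] / total,
            "other_share": counts["O"] / total,  # a large share signals data that cannot be analyzed
        }
    return summary

if __name__ == "__main__":
    for channel, stats in summarize(RECORDS).items():
        print(channel, stats)
```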
Issues with data quality. We selected a generalizable random sample of AMS cases identified as TPP authentication failures for January through October 2017 and identified several data quality issues based on our analysis. First, we found that an estimated 19 percent of cases were categorized as an authentication failure, but the content of the CSR notes indicated otherwise. Further, we could not determine a clear match between the TPP authentication outcome and the CSR notes in an additional estimated 18 percent of cases. For example, in these instances, the CSRs’ notes provided no information on why the taxpayer failed authentication, or the notes were clearly unrelated to TPP. Second, we found that CSRs do not consistently enter useful information in the notes explaining why a taxpayer failed authentication, which could provide IRS management with valuable feedback on characteristics of potential fraud or problem areas for legitimate taxpayers. Specifically, our analysis showed that in an estimated 63 percent of cases, CSRs’ notes contained information that was useful or somewhat useful for helping IRS understand why a taxpayer failed authentication. In the estimated 37 percent of cases where we determined that the notes were not useful, CSRs generally documented the outcome (i.e., authentication failure) but not the details on why the taxpayer failed. We recognize that a portion of the TPP authentication failures may represent fraudsters trying to authenticate as a legitimate taxpayer. However, given that IRS’s fraud detection systems have a history of high false positive rates, these failures may also represent legitimate taxpayers who may be having trouble authenticating. Further, while the CSR notes could provide IRS potentially valuable information on why taxpayers may be failing authentication, further data analysis may prove difficult. This is because this information is captured in a free-text notes field, rather than in a drop-down list or other standardized way to record data that can then be analyzed. Further, during our analysis of AMS data, we found variation in the way CSRs enter notes, particularly in their use of abbreviations and shorthand on why a taxpayer failed authentication. Such variation makes systematic data analysis difficult. According to IRS officials and documents we reviewed, there may be several causes for the data quality issues. For example, as noted earlier, CSRs may not be selecting the correct TPP authentication outcome code because there are too many options and procedures may be unclear. IRS officials also noted that when a taxpayer contacts IRS about TPP authentication, they may want to discuss multiple issues. In these cases, the CSR may choose to record information on another issue instead of the authentication outcome. Complete AMS data sets are not readily available for analysis. In addition to the issues described above, the organizations responsible for monitoring TPP telephone and in-person authentication data do not have access to complete AMS data for TPP. IRS officials responsible for managing TPP told us that they do not have direct access to AMS data reports because they are not the system’s business owner. Instead, they receive a weekly extract of AMS data from IRS’s IT department. However, officials stated that this weekly data extract is limited to approximately the first 5,000 records for each issue area or outcome code, including the codes for TPP. 
IRS IT officials stated that they limited the file size of the AMS weekly report because it became too large to share internally via e- mail. IT officials stated that the free-text notes entries in AMS were the main cause for large file sizes. However, this procedure of emailing an extract of the data, rather than providing direct access to AMS, makes it difficult for IRS to perform comprehensive analyses and ongoing monitoring for TPP using AMS. To put this into further context, IRS officials reported that in fiscal year 2017, they authenticated about 1.13 million taxpayers for TPP via telephone and at Taxpayer Assistance Centers. However, we found only about 471,600 records with a TPP outcome code in the AMS data IRS provided to us. This represents only about 42 percent of the records we were expecting to see in AMS. IRS officials stated that the discrepancy was likely due to the AMS record limit described above. Yet, in the course of our analysis, we found that only a small number of outcome codes over 42 weeks appeared to be affected by this record limit. (See appendix I for details.) IRS officials could not confirm additional explanations for the discrepancy in the number of records. IRS’s Office of Research, Applied Analytics, and Statistics (RAAS) performs research and quantitative analysis on TPP and has studied authentication performance. For example, in April 2017, RAAS reported results of a newly implemented TPP authentication procedure and found that while the new procedures helped to reduce call times, CSRs were not following the procedures correctly in an estimated 44 percent of the calls. According to IRS officials, RAAS’s research efforts provide IRS management with insight into TPP performance and officials have identified areas where TPP can be improved. However, officials face similar data limitations we described above. Further, officials from IRS’s RAAS division stated that they must submit a formal data request with IT in order to receive additional data beyond what is included in the AMS weekly extract. While valuable, these research efforts are not a substitute for ongoing monitoring using complete, reliable data, which would allow IRS to identify and address potential problems in a more timely manner. IRS officials acknowledged that AMS has limitations and stated that they are in the process of planning a new capability in another system to analyze how taxpayers perform on specific questions during the high-risk authentication process. However, this capability will not address the issues in AMS we described above. Further, as of late November 2017, officials were uncertain when this capability would be implemented because of IT funding constraints. Without effective internal control procedures and mechanisms for collecting authentication outcome data, ensuring data quality, and using these data to perform comprehensive analyses and ongoing monitoring of TPP, IRS will continue to have limited insight into its taxpayer authentication operations. As a result, IRS may be challenged in identifying current and emerging threats to the tax system. IRS Is Working with Security Summit Partners to Improve Taxpayer Authentication Through the Security Summit, IRS is working with states, software companies, and financial industry partners to identify how best to address IDT and refund fraud. 
In February 2018, IRS announced that its key indicators for IDT dropped for the second year in a row and the number of taxpayers who reported they were victims of IDT in 2017 fell by about 40 percent, in part because of the Security Summit’s ongoing efforts to stop suspected fraudulent returns from entering tax processing systems. IRS has also included key efforts led by the Security Summit in its Roadmap. The Security Summit’s authentication workgroup leads several initiatives aimed at verifying the authenticity of the taxpayer and the tax return at the time of filing. One initiative involves analyzing data elements that are collected during the tax return preparation and filing process. In filing season 2017, the authentication workgroup collected data on 62 elements, 37 of which were new for that year. These elements included, for example, trusted customer requirements and other characteristics of the return. In addition, in 2016 the authentication workgroup worked with software providers to improve authentication procedures to protect taxpayers against their accounts being taken over by criminals. According to IRS officials, these improvements were some of the most visible to taxpayers because they included new password standards to access tax software and required the use of security questions. Authentication workgroup leaders also described their efforts to collaborate with industry to address authentication challenges. For example, in 2017, IRS, payroll service providers, and tax software providers expanded the Form W-2, Wage and Tax Statements (W-2) verification code pilot program. The goal of this program is to verify W-2 data submitted by taxpayers on e-filed individual tax returns, using a unique 16-character verification code printed on the form. According to IRS, verification codes appeared on more than 60 million W-2s issued for tax year 2017, compared with about 27.5 million W-2s issued for tax year 2016. Overall, co-leads from each of the sectors expressed positive views about the level of commitment and cooperation guiding the Security Summit authentication efforts. Officials with whom we spoke stated that they are dedicated to continuing to address authentication issues collaboratively because they all have an interest in improving authentication to reduce tax refund fraud. IRS Has Improved Its Authentication Methods, but Additional Actions Could Help Enhance Security IRS Has Taken Preliminary Steps to Adopt NIST’s New Guidance, but Does Not Have a Timeline or Detailed Plans for Full Implementation As described above, in June 2017, NIST released guidance related to online authentication that agencies will need to implement to ensure they are authenticating users in a secure manner. NIST’s guidance is designed to (1) describe the risk management process for selecting appropriate digital identity services and (2) help agencies implement authentication programs that provide reasonable risk-based assurances that a returning user is the same user that previously accessed the service. Adherence to the NIST guidance will help IRS provide reasonable risk-based assurance that the person accessing IRS services is who they claim to be. Further, OMB guidance states that federal legacy systems have 12 months to comply with a new NIST publication, while systems under development or undergoing a major transformation need to use the current revision when deployed. 
IRS officials told us they have met with NIST officials and plan to update IRS systems and applications to comply with the new security guidelines. IRS officials also noted that the agency has taken preliminary steps to implement the new guidelines. For example, in December 2017, IRS implemented a more secure authentication option through its mobile app, IRS2Go. After taxpayers link their online account with the mobile app, they can use the app to generate a security code to log into their online account. This option is in line with NIST’s new guidance and provides taxpayers with an alternative to receiving the security code via a text message. IRS has also taken other preliminary steps to implement the new NIST guidance, including forming a task force to guide the implementation of NIST guidance, working with the Security Summit to develop an authentication framework that incorporates the new guidance for state and industry partners, starting an analysis to identify gaps between IRS’s current authentication procedures and the new NIST guidance, and updating authentication procedures. However, IRS has not yet established detailed plans, including timelines, milestone dates, and resource needs, for fully implementing the new guidance. IRS officials cited several reasons for the delay. They said the agency will have to balance maintaining current authentication programs with developing IT infrastructure to support technologies that are compliant with the new guidance. In addition, officials stated that they will need to take a slower, incremental approach to updating authentication programs because of resource constraints. In March 2018, IRS officials provided us a draft, high-level analysis of IRS systems relative to the new NIST guidance, including some action items to address potential gaps. This preliminary analysis is a first step to help IRS identify gaps between IRS’s current authentication methods and the new NIST guidance. However, it does not include steps needed to implement the high-level action items, a timeline with milestones, or the resources needed to implement improvements to bring IRS into compliance with the new NIST guidance. IT officials stated that IRS intends to develop its implementation roadmap through 2018 and begin implementing technical solutions in 2019. However, those officials did not identify the technical solutions nor did they have a prioritization plan or documentation of a timeline to fully implement the new NIST guidance. Implementing the new NIST guidance and updating authentication programs to be protected by the appropriate level of assurance is consistent with federal standards for internal control and IRS’s Roadmap. Standards for Internal Control notes that agencies should identify, analyze, and respond to risks, as well as assess whether risk response actions sufficiently reduce risk to an acceptable level. Further, one of IRS’s initiatives in its Roadmap is to strengthen e-authentication and ensure it is in compliance with federal regulations, which includes guidance from NIST. Developing a plan that includes timelines with specific milestones and resource needs to implement the new NIST guidance is consistent with leading practices for effective planning and management. 
Specifically, in our prior work on the Government Performance and Results Act, we found that developing and using specific milestones and timelines to guide and gauge progress toward achieving an agency’s desired result is a leading practice for effective strategic planning and management. Further, our body of work on IRS has noted that developing project plans with measurable goals, schedules, and resources can help the agency more effectively plan new projects and initiatives. According to IRS officials, IRS must balance the needs of its existing authentication efforts against potential new investments. IRS’s gap analysis on current authentication procedures relative to the NIST guidance may help IRS prioritize which improvements are most critical. However, without clear plans, timelines, and milestones for performing this work, IRS may not be positioned to address the most vulnerable areas in a timely manner. IRS’s timely implementation of NIST’s new guidance is critical, as it can help the agency mitigate potential security weaknesses in its existing online authentication programs. IRS Does Not Have a Comprehensive Process to Evaluate Technologies That Could Help It Improve Authentication While IRS has made some progress in improving its authentication programs, the agency lacks a comprehensive, repeatable process to identify and evaluate potential new authentication technologies and approaches. IRS’s planning documents have noted a commitment to identify and leverage authentication best practices from outside organizations to protect taxpayer data and support IRS business needs. Specifically, IRS’s Roadmap states that the agency will leverage leading technology and implementation practices from the private and public sectors through a repeatable environmental scan process and, when appropriate, collaborate with partners to address its authentication needs. Similarly, IRS’s Strategic Plan notes that the agency will invest in innovative, secure technology needed to protect taxpayer data and support the business needs of the agency and its partners. IRS officials told us the agency continuously researches new identity assurance processes and technologies and has talked with other agencies, industry groups, and vendors to better understand how particular technology solutions could apply to IRS’s environment. Further, according to officials, IRS plans to work with an outside organization to analyze third-party identity proofing and authentication services; however, IRS is in the initial phases of this effort. IRS also recently established the Commissioner’s Identity Assurance Executive Steering Committee to help oversee IRS’s authentication efforts agency-wide. This committee is intended to serve as an advisory body, creating a forum for agency-wide collaboration, as well as providing guidance and direction for identity assurance implementation. IRS provided us documentation that it reviewed some available authentication technologies and their pros and cons in February 2016, and told us that this research helped them develop their Roadmap. However, IRS officials could not provide documentation on more recent evaluations of the broader authentication environment, or evidence of a repeatable, comprehensive process to identify and evaluate available authentication technologies and services. 
IRS officials stated that one way the agency evaluates potential technologies is through limited pilots or “innovation studies.” For example, from October 2017 to January 2018, IRS conducted a limited pilot to explore the feasibility of having a third-party identity assurance service provider authenticate taxpayers on behalf of IRS. Officials stated that this pilot was possible because it required little upfront investment by IRS. Specifically, IRS received a grant from NIST to implement it, and officials stated that it required minimal integration with IRS’s IT infrastructure. In January 2018, IRS officials stated they were reviewing the results of the pilot, but had not decided on any next steps. Further, IRS officials stated that the agency is considering other pilots, including one to assist with IRS’s telephone authentication and one to enhance security checks during the Individual Taxpayer Identification Number application process. However, while IRS has completed preliminary planning for these pilots, it has not established priorities or timelines because each pilot requires IT support, for example, to ensure the application can be integrated with IRS’s infrastructure and to make any technical changes. Further, in December 2017, IRS officials stated that all innovation studies were on hold until resources become available. IRS may benefit from considering new ways of approaching its authentication efforts, as other public and private entities face similar challenges of authenticating users. Our discussions with representatives from industry and financial institutions and with government officials indicate that there is no single, ideal taxpayer authentication solution that will solve IRS’s challenges related to IDT refund fraud. Further, representatives from industry and financial institutions and government officials with whom we spoke advocated a layered approach to authentication that relies on multiple strategies and sources of information, while giving taxpayers options for further protecting their information. Based on our discussions with representatives from industry and state departments of revenue and government officials, some options IRS could consider include the following: Expanding existing IRS services to further protect taxpayers. As discussed earlier, IRS’s online account offers taxpayers several services, including the ability to set up a payment plan and make payments to IRS and view their tax history. In fiscal year 2017, about 808,000 taxpayers created online accounts, and IRS expects this number to grow. IRS’s Roadmap has identified enhancing taxpayer assurance by expanding authentication, such as generating and sending event-driven notifications to taxpayers to help IRS authenticate returns, which could help IRS quickly validate legitimate returns. With this option, IRS may be able to further protect taxpayers from IDT refund fraud. For example, IRS could develop additional functionality for the online account that allows the taxpayer to designate a bank account or a preference for a paper check for receiving a tax refund. If a fraudster filed a return with different information, the return would automatically be rejected. In February 2018, IRS officials stated that their strategic vision includes empowering taxpayers to manage their online account; however, when these services offer the ability to change personal or financial information, there is greater potential for fraudsters to exploit them. Federated model. 
A federated authentication approach allows an organization to rely on trusted authentication credentials from another entity to log into its systems, potentially without needing to save information from the trusted source. (See figure 3.) One example of a federated authentication model is when people use their Google or Facebook credentials to log into a different website or mobile application. IRS could use a trusted authentication credential from the private or public sector, or another federal agency. The General Services Administration (GSA) has developed a single sign-on authentication platform for federal agencies called Login.gov. In March 2018, GSA officials told us that the Office of Personnel Management and Customs and Border Protection were using Login.gov and that several other agencies plan to use the authentication platform. According to IRS officials, IRS and Department of the Treasury officials have met with GSA to discuss whether Login.gov could meet IRS's authentication needs. In December 2017, IRS IT officials said they are tracking Login.gov's progress and capabilities and want to ensure that GSA officials understand IRS's requirements. IRS officials said that the agency is interested in being able to federate with different organizations, but does not want to limit federating to one entity, since different taxpayers will want to use different credentials. IRS officials also noted that the agency will need to implement additional IT infrastructure to support a federated model for authentication. Possession-based authentication. This type of authentication offers users a convenient, added layer of security when used as a second factor for accessing websites or systems that would otherwise rely on a username and password for single-factor authentication. As shown in figure 4, Universal Authentication Framework (UAF) solutions use biometrics, such as an embedded fingerprint, facial recognition, or voice recognition sensor on a computer or smart phone, eliminating the need for a password. Similarly, authentication with a Universal Second Factor (U2F) uses a trusted device or "security key" for authentication in addition to a username and password. According to a representative from the Fast Identity Online (FIDO) Alliance, UAF standards and U2F devices comply with NIST's new guidance for digital authentication. While IRS is not likely to provide the devices to taxpayers, it could enable its systems to accept these types of standards-based authentication technology for taxpayers who elect to use UAF or U2F devices. For example, taxpayers could use a UAF or U2F device when logging into their IRS online account for additional protection. States' strategies for authentication. When we met with representatives from five states to discuss how they authenticate taxpayers, representatives from three states volunteered that they use driver's license information to help authenticate taxpayers and tax returns. One state we met with compares driver's license information to other state agency data to help authenticate returns. IRS could investigate making driver's license information, or other government identification, a requirement when filing a federal return, and work with states and other outside organizations to assist with authentication. This information could be a key factor in verifying that the legitimate taxpayer is filing the return.
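To make the states' driver's license strategy concrete, the sketch below shows one way data supplied on a return could be compared against a state identification record. The record layout, field names, and matching rule are hypothetical and are not drawn from any state's or IRS's actual matching logic.

```python
# Hypothetical illustration of checking driver's license data supplied on a
# return against a state record. Field names and the matching rule are
# assumptions for illustration only, not any agency's actual logic.

def normalize(value: str) -> str:
    return " ".join(value.strip().upper().split())

def license_matches(return_data: dict, state_record: dict) -> bool:
    """Return True if the license number, name, and date of birth on the
    return agree with the state record for that license number."""
    return (normalize(return_data["license_no"]) == normalize(state_record["license_no"])
            and normalize(return_data["name"]) == normalize(state_record["name"])
            and return_data["date_of_birth"] == state_record["date_of_birth"])

filed = {"license_no": "D1234567", "name": "Jane Q Taxpayer", "date_of_birth": "1980-01-15"}
record = {"license_no": "D1234567", "name": "JANE Q TAXPAYER", "date_of_birth": "1980-01-15"}
print(license_matches(filed, record))  # True -> one more signal that the filer is legitimate
```

Consistent with the layered approach described above, a match of this kind would be one signal among several rather than a standalone authentication decision.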
While some industry representatives told us driver's license information is a good credential for identity proofing, this information can be compromised. For example, fraudsters can use stolen PII to obtain fraudulent driver's licenses. Contracting with outside organizations. Several private-sector organizations offer identity proofing and authentication services. We spoke with officials from the Department of Veterans Affairs (VA) and representatives from the State of Alabama's Department of Revenue, both of which are currently using such services. VA is using a third-party service to identity proof and authenticate veterans accessing services through www.vets.gov. For the 2018 filing season, Alabama has contracted with a third-party organization to offer taxpayers a service that sends them an alert when a return is filed using their name, and authenticates the return as legitimate using a selfie. This photo is then digitally compared to their driver's license photo. IRS could evaluate these services to see if any meet its needs. Working with trusted partners. IRS could partner with organizations it trusts that are accessible to taxpayers and enable the partners to identity-proof and authenticate taxpayers. Trusted partners could include tax preparers, financial institutions, or other federal agencies. In November 2017, IRS officials told us that they had been discussing an in-person identity proofing study with the Social Security Administration (SSA), where SSA would identity proof taxpayers and transmit the authentication data to IRS. However, in June 2018, IRS officials stated that discussions with SSA are ongoing, and they have not made a decision about next steps because SSA is concerned about resources. IRS is also exploring working with the U.S. Postal Service on an information-sharing initiative that could help IRS identify potential IDT. Throughout the course of our work, IRS officials stated that improving the security of IRS's online authentication applications is a high priority and further noted that IRS must ensure that the highest-risk authentication improvements are completed first. In January 2018, IRS officials stated that the agency's priority is implementing tax reform, which will use IRS's limited IT resources. Further, officials noted that priorities, including resources required to develop project estimates, are determined by IRS's appropriate executive steering committees. Developing a repeatable, comprehensive process to identify and evaluate different alternatives for taxpayer authentication, such as the ones described above, is consistent with leading practices and can help IRS ensure that it has a sound rationale for its investment decisions. It can also help ensure that IRS has the resources it needs to make authentication improvements in a timely manner. For example, these evaluations may involve developing and documenting a business case for selected initiatives in IRS's Roadmap. Such a process could compare options for in-house authentication solutions with solutions available in the private sector based on estimates of cost, schedule, and benefits, as applicable. By identifying options and performing such an evaluation, IRS may find, for example, that an authentication technology available in the private sector already complies with the new NIST guidelines, offers IRS additional fraud detection capabilities, or is less expensive than developing a similar capability in-house.
On the other hand, the process may show that minor improvements to a technology IRS is already using can provide the most secure option in relatively short time, given appropriate resources. This information could be communicated to IRS’s executive steering committees, as well as to Congress, to help IRS identify resource needs and ensure it is pursuing the most efficient and effective authentication improvements to protect IRS and taxpayers against evolving threats. IRS’s authentication environment is one component of a broad, complex IT infrastructure, and the agency faces many challenges as it modernizes its tax systems. However, given the availability of PII and the prevalence of cyberattacks, developing a repeatable, comprehensive process to identify and evaluate alternative options for taxpayer authentication and implementing improvements can help IRS ensure it is authenticating taxpayers in the most secure manner. IRS documentation acknowledges that a hybrid authentication approach using in-house solutions, third-party services, and working with trusted partners is the best approach to implementing the new NIST guidance and expanding IRS’s authentication coverage. However, without a process to comprehensively identify and evaluate available or emerging authentication technologies and models, IRS may be missing an opportunity to implement the most secure, robust technologies to authenticate and protect taxpayers. Further, including these authentication options and prioritizing them with other initiatives included in IRS’s Roadmap would help IRS ensure it is working on the highest priority authentication improvements first. It also provides a way for IRS to communicate its strategy and plan for authentication to IRS management and external stakeholders. Conclusions Each year, IRS authenticates millions of taxpayers via telephone, online, in-person, or correspondence to verify potentially fraudulent tax returns, provide taxpayers access to a tax transcript, or issue a replacement IP PIN. IRS’s cost to authenticate taxpayers varies widely, with in-person authentication at a Taxpayer Assistance Center costing about $89 per interaction and online authentication costing less than $1 per interaction. The challenge for IRS is to provide taxpayers with options to interact with the agency, while providing IRS with reasonable assurance that it is authenticating the legitimate taxpayer. In its Roadmap, IRS has identified high-level strategic efforts and numerous foundational initiatives to address its most pressing authentication challenges. IRS has made progress in several areas identified in its Roadmap. However, identifying the resources the agency will need to complete its foundational initiatives and further prioritizing them would help IRS better understand the relationship between its competing priorities and limited IT resources. Further, while IRS has made progress in identifying risks and establishing internal control activities to monitor online taxpayer authentication, it has not established equally rigorous controls for telephone, in-person, and correspondence authentication. First, IRS does not have a policy for identifying, assessing, and mitigating risks for these authentication channels. Second, IRS does not have effective internal controls for collecting reliable, useful data on telephone, in-person, and correspondence authentication outcomes for TPP and for using these data to monitor authentication operations. 
Without effective controls for collecting these data and using them for monitoring, IRS may not be positioned to identify potential vulnerabilities in its operations and the necessary improvements. Given the widespread availability of PII that fraudsters can use to perpetrate tax fraud, it is essential for IRS to strengthen taxpayer authentication to stay ahead of fraudsters' schemes. Completing an analysis of IRS's current authentication procedures relative to new NIST guidance may help IRS identify and prioritize which improvements are most critical. Developing a timeline with milestones and resource needs to implement NIST's new guidance can help guide IRS's implementation and help officials gauge progress and ensure the most critical improvements are made in a timely manner. Further, implementing NIST's new guidance can help IRS ensure its online authentication applications are appropriately protecting IRS information. While improving IRS's current authentication programs would help IRS further protect taxpayer information and identify and prevent fraud, IRS may not need to conduct all of its taxpayer authentication activities in-house or build IRS-specific authentication solutions: there are many additional tools and partners IRS could consider. Further, developing a repeatable, comprehensive process to identify and evaluate potential authentication technologies and services will help IRS avoid missing opportunities for improving authentication. In addition, including and prioritizing these authentication technologies and services in IRS's Roadmap could provide useful information to decision makers given IRS's concerns over competing IT priorities and limited resources. Recommendations for Executive Action We are making the following 11 recommendations to IRS: The Commissioner of Internal Revenue should direct the Identity Assurance Office, in collaboration with other IRS business partners, to estimate the resources (i.e., financial and human) required for the foundational initiatives and supporting activities identified in its Identity Assurance Strategy and Roadmap. (Recommendation 1) Based on the estimates developed in Recommendation 1, the Commissioner of Internal Revenue should direct the Identity Assurance Office to prioritize foundational initiatives in its Identity Assurance Strategy and Roadmap. (Recommendation 2) The Commissioner of Internal Revenue should establish a policy for conducting risk assessments for telephone, in-person, and correspondence channels for authentication. This policy should include, for example, the frequency of assessments to be performed and timeframes for addressing deficiencies. (Recommendation 3) Consistent with the policy developed in Recommendation 3, the Commissioner of Internal Revenue should direct the Identity Assurance Office and IRS business owners to develop a plan for performing risk assessments for telephone, in-person, and correspondence channels for authentication. (Recommendation 4) The Commissioner of Internal Revenue should establish a mechanism to collect data on outcomes for telephone, in-person, and correspondence authentication, consistent with federal standards for internal control. (Recommendation 5) The Commissioner of Internal Revenue should revise or establish, as appropriate, procedures to ensure data quality in the Account Management Services (AMS) consistent with federal standards for internal control.
(Recommendation 6) The Commissioner of Internal Revenue should ensure that IRS business units have access to complete AMS data to monitor authentication performance and identify potential issues. (Recommendation 7) The Commissioner of Internal Revenue should direct the Identity Assurance Office and other appropriate business partners to develop a plan, including a timeline, milestone dates, and resources needed, for implementing changes to its online authentication programs consistent with new NIST guidance. (Recommendation 8) In accordance with the plan developed in Recommendation 8, the Commissioner of Internal Revenue should implement improvements to IRS's systems to fully implement NIST's new guidance. (Recommendation 9) The Commissioner of Internal Revenue should develop a repeatable, comprehensive process to identify and evaluate alternative options for improving taxpayer authentication, including technologies in use by industry, states, or other trusted partners. (Recommendation 10) Based on the approach developed in Recommendation 10, the Commissioner of Internal Revenue should include and prioritize these options, as appropriate, in IRS's Identity Assurance Strategy and Roadmap. (Recommendation 11) Agency Comments and Our Evaluation We provided a draft of this report to the Commissioner of Internal Revenue for review and comment. In its written comments, which are summarized below and reproduced in appendix III, IRS agreed with our 11 recommendations and stated that it is taking action to address them. IRS agreed with our recommendations to identify resources and prioritize the foundational authentication initiatives identified in its Roadmap. IRS noted that the Roadmap is a concept document outlining potential strategic initiatives and IRS has not finalized its approach. IRS stated that once it finalizes its authentication approach, it will estimate the resources required for each initiative and prioritize them, consistent with our recommendation. As stated earlier, we recognize that a strategy is a high-level plan and may need to change based on agency needs. Nevertheless, IRS's timely attention to identifying resources and prioritizing its approved authentication initiatives will better position the agency to respond to known and unknown threats to the tax system. Further, IRS agreed with our recommendations to develop a plan for fully implementing NIST's new authentication guidance and make the necessary improvements to its systems. In its written comments, IRS noted that its ability to complete these efforts will depend on the availability of resources. As noted throughout our report, we recognize the challenge of balancing competing IT priorities and limited resources, but given the importance of implementing authentication improvements consistent with NIST's guidance, we continue to believe it should be a high priority. Additional actions, including addressing our recommendations, will help IRS further mitigate potential security weaknesses in its existing online authentication programs and help prevent potentially hundreds of millions of dollars in fraudulent refunds from being issued. IRS also agreed with our other seven recommendations, but did not provide additional details on how it plans to address them. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We will also send copies of the report to the Commissioner of Internal Revenue and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff has any questions about this report, please contact me at (202) 512-9110 or mctiguej@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) describe the taxpayer interactions that require authentication, including the general rationale behind the requirements, and the Internal Revenue Service’s (IRS) authentication methods; (2) assess what IRS is doing to monitor and improve its authentication methods, both internally and collaboratively through the Security Summit, to secure taxpayer information and reduce identity theft refund fraud; and (3) evaluate what else, if anything, IRS can do to strengthen its authentication methods while improving services to taxpayers. To describe the interactions that require taxpayer authentication and IRS’s methods to do so, we reviewed IRS documents, policies and procedures, IRS data and information on the number of taxpayers authenticated by channel, and interviewed knowledgeable IRS officials. IRS documents and policies we reviewed included IRS’s Authentication Strategy: Current State Touchpoints, IRS’s Identity Assurance Strategy and Roadmap (Roadmap), and Internal Revenue Manuals related to taxpayer authentication. For this report, we focused on the following four IRS programs and services because they require taxpayer authentication, verify a significant number of taxpayer identities each year, and illustrate IRS’s different approaches to authentication: the Taxpayer Protection Program (TPP), Get Transcript, Identity Protection Personal Identification Number (IP PIN), and IRS’s online services. We reviewed IRS-reported data and information on taxpayer authentication volumes and per transaction costs for these programs for fiscal years 2016 and 2017. To assess the reliability of this data, we examined it for errors and talked with knowledgeable IRS officials. We determined that the data were sufficiently reliable for our purposes. We also interviewed knowledgeable IRS officials on the agency’s authentication programs and services to understand different authentication options offered to taxpayers through various channels: in-person, online, telephone, and correspondence. To assess IRS’s efforts to monitor and improve authentication internally and through the Security Summit, we reviewed IRS policies, procedures, authentication risk assessments, and data from IRS systems on authentication performance. We compared IRS’s efforts to applicable activities in the Roadmap, IRS’s Strategic Plan Fiscal Years 2014-2017 (Strategic Plan), Standards for Internal Control in the Federal Government, GAO’s Framework for Managing Fraud Risks in Federal Programs, and relevant National Institute of Standards and Technology (NIST) guidance. 
We interviewed IRS officials in IRS's Return Integrity and Compliance Services (RICS), Identity Assurance Office (IAO), and Information Technology (IT) knowledgeable about the agency's taxpayer authentication programs. For additional context and informational purposes, we visited IRS's Andover, Massachusetts, call center to observe IRS customer service representatives (CSR) authenticating taxpayers for TPP. We also interviewed IRS, state, and industry co-leads from the Security Summit's Authentication workgroup and Strategic Threat Assessment and Response workgroup to understand IRS's collaborative efforts to improve taxpayer authentication. To better understand IRS's efforts to authenticate taxpayers via telephone and in person, and how CSRs record data for TPP authentication, we obtained data from IRS's Accounts Management System (AMS) for the weeks January 1, 2017, through October 23, 2017. This was the most recent and complete set of data at the time of our review. We reviewed AMS records coded with any of the nine TPP authentication outcome codes for tax years 2015, 2016, or with "0." We assessed the reliability of the data by (1) performing electronic testing of key data elements, including checks for missing, out-of-range, or logically inaccurate data; (2) reviewing documents for information about the data and IRS's systems; and (3) interviewing officials knowledgeable about the data to discuss any limitations. During these discussions, IRS officials stated that the AMS data we received may not include all available records in AMS. This is because the IRS office that creates the weekly AMS data report includes only the first 5,000 records for each outcome code. To assess whether this was an issue for our data set, we reviewed record counts for each of the nine TPP outcome codes for the 42 weeks of data IRS provided us. We found 12 out of these 378 instances (3 percent) where the data appeared to be affected by the 5,000 record cutoff. Each of these instances occurred in the "TPP-Other – Sent to TAC" issue code, for which we planned no further analysis. Specifically, we did not include this issue code in the generalizable random probability sample described below. As a result, we determined that the data were sufficiently reliable for the purposes of our report. To assess the quality and usefulness of the data CSRs enter into AMS for TPP, we selected a random, generalizable sample of records CSRs coded as a TPP authentication failure. We stratified the population into two groups: (1) high-risk authentication failures, and (2) all other authentication failures. From each stratum, we independently drew a random sample of 96 records, reflecting the population size of each stratum and designed to detect a 10 percent difference in absolute value between the sample estimate and the true population number with a 95 percent confidence level; that is, a 1 out of 20 chance of failure. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Each sample record was subsequently weighted in the analysis to account statistically for all the cases in the population, including those which were not selected. Two analysts independently reviewed each sample record to determine (1) whether the TPP authentication outcome code generally aligned with the CSR's notes and (2) the extent to which the CSR notes were useful in understanding why a taxpayer failed authentication.
First, the analysts categorized each record in the sample as “aligned” (authentication outcome code and content of CSR notes are clearly aligned); “not aligned” (authentication outcome code and content of CSR notes are clearly not aligned); or “cannot determine” (if the content of the CSR notes was unclear and the analyst could not confidently determine that the record was aligned or not aligned). Next, for each record in the sample, the analysts categorized the content of the notes as one of the following: Useful: CSR notes provided a clear explanation of why the taxpayer failed authentication (e.g., question failed; taxpayer did not have proper identification; or taxpayer did not have copy of tax return during the call/visit). Somewhat Useful: CSR notes provided some information on where in the process or why a taxpayer failed, but no clear explanation of the specific reason (e.g., taxpayer passed disclosure, but could not answer high risk questions). Not Useful: CSR notes were blank, or provided no useful information on where in the process or why a taxpayer failed authentication. Cannot Determine: This was selected when the content of the CSR notes was unclear and the analyst could not determine if information was useful. After the independent review, the analysts discussed their results and resolved any disagreements. Based on these results, we determined how many records in the sample were “aligned,” “not aligned,” or “unable to determine.” Further, we analyzed records categorized as “aligned” to determine how many included CSR notes that were useful, somewhat useful, or not useful. To evaluate what else, if anything, IRS can do to strengthen its authentication methods while improving services to taxpayers, we interviewed knowledgeable officials from IRS and reviewed documentation to understand IRS’s current authentication methods, future plans for authentication, and challenges IRS faces in taxpayer authentication. We also interviewed knowledgeable officials at the General Services Administration/18F to understand their work on a government-wide authentication platform, Login.gov, and how IRS may be able to use this technology in the future. We also interviewed Department of Veterans Affairs officials to understand how they authenticate veterans applying for benefits at www.vets.gov. Further, we met with knowledgeable officials from NIST on their guidelines for online identity-proofing and authentication, which were released in June 2017. To understand current and emerging authentication strategies and technologies, we interviewed representatives from state departments of revenue and from industry. We also interviewed knowledgeable officials from the Office of Management and Budget’s (OMB) U.S. Digital Service to understand their work with IRS in 2016 in launching IRS’s Secure Access online authentication platform and to understand any emerging technologies and standards for authentication. We interviewed a nongeneralizable selection of knowledgeable state and industry representatives based on referrals from NIST officials, and other government and industry representatives knowledgeable on tax issues, including co-chairs from the Security Summit’s Authentication workgroup. In total we met with representatives from five state departments of revenue, one association representing state tax officials, three financial institution organizations, one financial service industry association, three identity-proofing/authentication organizations, and four tax industry organizations. 
Finally, we compared IRS's authentication programs and plans for future improvements to its Roadmap, Standards for Internal Control, GAO's Information Technology Investment Management framework, principles for project planning, GAO's prior work on the Government Performance and Results Act, GAO's Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs, and NIST and OMB guidance to determine ways IRS could strengthen its authentication methods, while improving taxpayer service. We conducted this performance audit from January 2017 to June 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Overview of IRS's Identity Assurance Strategy and Roadmap Prioritize technology and processes for e-Authentication to enhance identification, verification, and authorization capabilities as taxpayers continue to shift toward electronic filing. Establish a central authentication policy across the enterprise (i.e., channels and functions). Appendix III: Comments from the Internal Revenue Service Appendix IV: GAO Contact and Acknowledgments GAO Contact: Staff Acknowledgments: In addition to the contact named above, Neil Pinney (Assistant Director), Dawn Bidne, Matthew Bond, Mark Canter, Jehan Chase, Heather A. Collins (Analyst-in-Charge), Michele Fejfar, Robert Gebhart, Steven Flint, Dae Park, and Robert Robinson made significant contributions to this report. Related GAO Products Tax Fraud and Noncompliance: IRS Can Strengthen Pre-refund Verification and Explore More Uses. GAO-18-224. Washington, D.C.: January 30, 2018. Identity Theft: Improved Collaboration Could Increase Success of IRS Initiatives to Prevent Refund Fraud. GAO-18-20. Washington, D.C.: November 28, 2017. Financial Audit: IRS's Fiscal Years 2017 and 2016 Financial Statements. GAO-18-165. Washington, D.C.: November 9, 2017. Information Technology: Management Attention Is Needed to Successfully Modernize Tax Processing Systems. GAO-18-153T. Washington, D.C.: October 4, 2017. 2017 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-17-491SP. Washington, D.C.: April 26, 2017. High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017. 2016 Filing Season: IRS Improved Telephone Service but Needs to Better Assist Identity Theft Victims and Prevent Release of Fraudulent Refunds. GAO-17-186. Washington, D.C.: January 31, 2017. Information Technology: Federal Agencies Need to Address Aging Legacy Systems. GAO-16-468. Washington, D.C.: May 25, 2016. Identity Theft and Tax Fraud: IRS Needs to Update Its Risk Assessment for the Taxpayer Protection Program. GAO-16-508. Washington, D.C.: May 24, 2016. Information Security: IRS Needs to Further Improve Controls over Taxpayer Data and Continue to Combat Identity Theft Refund Fraud. GAO-16-589T. Washington, D.C.: April 12, 2016. Information Security: IRS Needs to Further Improve Controls over Financial and Taxpayer Data. GAO-16-398. Washington, D.C.: March 28, 2016.
Information Security: IRS Needs to Continue Improving Controls over Financial and Taxpayer Data. GAO-15-337. Washington, D.C.: March 19, 2015. Identity Theft and Tax Fraud: Enhanced Authentication Could Combat Refund Fraud, but IRS Lacks an Estimate of Costs, Benefits and Risks. GAO-15-119. Washington, D.C.: January 20, 2015. Identity Theft: Additional Actions Could Help IRS Combat the Large, Evolving Threat of Refund Fraud. GAO-14-633. Washington, D.C.: August 20, 2014. Internal Revenue Service: 2013 Tax Filing Season Performance to Date and Budget Data. GAO-13-541R. Washington, D.C.: April 15, 2013.
Why GAO Did This Study Strong preventive controls can help IRS defend itself against identity theft refund fraud. These controls include taxpayer authentication—the process by which IRS verifies identities before allowing people access to a resource; sensitive data; or, in some cases, a tax refund. The risk of fraud has increased as more personally identifiable information has become available as a result of, for example, large-scale cyberattacks on various entities. IRS's ability to continuously monitor and improve taxpayer authentication is a critical step in protecting billions of dollars from fraudsters. GAO was asked to examine IRS's efforts to authenticate taxpayers. This report (1) describes the taxpayer interactions that require authentication and IRS's methods; (2) assesses what IRS is doing to monitor and improve taxpayer authentication; and (3) determines what else, if anything, IRS can do to strengthen taxpayer authentication in the future. To meet these objectives, GAO reviewed IRS documents and data, evaluated IRS processes against relevant federal internal control standards and guidance, and interviewed IRS officials and state and industry representatives. What GAO Found The Internal Revenue Service (IRS) has identified over 100 interactions requiring taxpayer authentication based on potential risks to IRS and individuals. IRS authenticates millions of taxpayers each year via telephone, online, in person, and correspondence to ensure that it is interacting with legitimate taxpayers. IRS's estimated costs to authenticate taxpayers vary by channel. IRS has made progress on monitoring and improving authentication, including developing an authentication strategy with high-level strategic efforts. However, it has not prioritized the initiatives supporting its strategy nor identified the resources required to complete them, consistent with program management leading practices. Doing so would help IRS clarify relationships between its authentication efforts and articulate resource needs relative to expected benefits. Further, while IRS regularly assesses risks to and monitors its online authentication applications, it has not established equally rigorous internal controls for its telephone, in-person, and correspondence channels, including mechanisms to collect reliable, useful data to monitor authentication outcomes. As a result, IRS may not identify current or emerging threats to the tax system. IRS can further strengthen authentication to stay ahead of fraudsters. While IRS has taken preliminary steps to implement National Institute of Standards and Technology's (NIST) new guidance for secure digital authentication, it does not have clear plans and timelines to fully implement it by June 2018, as required by the Office of Management and Budget. As a result, IRS may not be positioned to address its most vulnerable authentication areas in a timely manner. Further, IRS lacks a comprehensive process to evaluate potential new authentication technologies. Industry representatives, financial institutions, and government officials told GAO that the best authentication approach relies on multiple strategies and sources of information, while giving taxpayers options for actively protecting their identity. Evaluating alternatives for taxpayer authentication will help IRS avoid missing opportunities for improving authentication. 
What GAO Recommends GAO is making 11 recommendations to IRS to estimate resources for and prioritize its authentication initiatives, address internal control issues to better monitor authentication, develop a plan to fully implement new NIST guidance, and develop a process to evaluate potential authentication technologies. IRS agreed with GAO's recommendations.
Background The federal government began preparing and we began auditing the consolidated financial statements of the U.S. government in fiscal year 1997. However, we have been unable to render an opinion on them in part because of serious financial management problems at DOD that have prevented its financial statements from being auditable. Pursuant to the NDAA for Fiscal Year 2010, DOD developed and has updated a Financial Improvement and Audit Readiness (FIAR) Plan. The FIAR Plan includes the specific actions to be taken and costs associated with (1) correcting the financial management deficiencies that impair DOD's ability to prepare complete, reliable, and timely financial management information and (2) ensuring that DOD's financial statements are validated as ready for audit by September 30, 2017. Further, the NDAA for Fiscal Year 2014 mandated that the Secretary of Defense ensure that an audit is performed on DOD's fiscal year 2018 financial statements and that the results are submitted to Congress no later than March 31, 2019. DOD has undertaken several financial management improvement initiatives over the years to address deficiencies in business systems, processes, and controls through its FIAR Plan and financial management reform methodology contained in its FIAR Guidance. In its April 2016 FIAR Guidance, DOD identified seven critical capabilities that are necessary to achieve auditability. One of the seven critical capabilities identified is the ability to support or eliminate certain material JVs and other adjustments made to financial transactions, trial balances, and financial statements. As of September 2015, DOD determined that the largest portion of the billions of dollars of unsupported JVs was attributable to the Army's general fund. In 2015, DOD created the Working Group, made up of both Army and DFAS personnel, to develop solutions to reduce the number of required JVs and reduce or eliminate the number of unsupported JVs that affect the Army's general fund. In July 2016, DOD's Office of the Inspector General (OIG) released a report indicating that about 90 percent of the dollar value and number of the Army general fund JVs the OIG tested, which were defined by the Army as "supported," were in fact unsupported because the JVs either (1) forced account balances to agree with other data sources without reconciling the differences or determining which data sources were correct, (2) corrected errors or reclassified amounts to other accounts without adequately documenting why the JVs were needed, or (3) changed ending balances of accounts without adequate documentation to support the JVs. DOD acknowledged in its November 2016 FIAR Plan Status Report that it still faces a significant challenge in reducing the number and amount of unsupported JVs. The November 2016 FIAR Plan Status Report also set a December 2016 completion date for the Army to (1) perform JV root cause analyses for all financial statements for all JVs and (2) implement corrective action plans and verify successful implementation of DFAS's corrective actions. Army's Financial Systems As shown in figure 1, the Army uses multiple systems to document, record, and report activities at the transaction level, which are then consolidated and uploaded into DOD's reporting system known as the Defense Departmental Reporting System (DDRS). JVs can be used to record accounting entries and make any necessary corrections or adjustments within any of these systems at the transaction level or consolidated level.
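For readers unfamiliar with the mechanics, a JV can be thought of as a set of debit and credit lines that must net to zero, together with documentation showing why the entry was made. The sketch below is a conceptual illustration only; it does not depict the Army's or DFAS's actual data structures.

```python
# Conceptual sketch of a journal voucher (JV): balanced debit and credit lines
# plus an indicator of attached supporting documentation. Illustrative only;
# not the Army's or DFAS's actual data model.
from dataclasses import dataclass, field

@dataclass
class JournalVoucher:
    jv_id: str
    lines: list = field(default_factory=list)  # (account, debit, credit) tuples
    support_attached: bool = False

    def is_balanced(self) -> bool:
        debits = sum(debit for _, debit, _ in self.lines)
        credits = sum(credit for _, _, credit in self.lines)
        return round(debits - credits, 2) == 0

jv = JournalVoucher(
    jv_id="JV-0001",
    lines=[("Accounts Payable", 500.00, 0.00), ("Fund Balance With Treasury", 0.00, 500.00)],
)
print(jv.is_balanced(), jv.support_attached)  # True False -> balanced but unsupported
```

In the discussion that follows, "unsupported" refers to JVs recorded without adequate documentation of this kind, regardless of whether the entry itself balances.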
JVs can also be manually entered or system generated in the accounting systems. Manual JVs are those prepared by DFAS personnel to adjust errors identified during financial statement compilation, to record necessary accounting entries caused by system limitations or timing differences, and to prepare required month- and year-end closing accounting entries. System-generated JVs are those automatically generated based on system change requests and without manual involvement by DFAS personnel. The General Fund Enterprise Business System (GFEBS) is the Army's primary accounting system used to record the majority of the Army's activities, including budget execution, procurement, civilian pay, collections, and disbursements. The Army also has a separate accounting system for supply-related transactions called the Global Combat Support System (GCSS). Accounting entries for transactions recorded in GCSS are sent to GFEBS to consolidate all of the Army's transaction data into a single accounting system. Then, at month's end, all the information in GFEBS is uploaded into the budgetary module of DDRS known as the Defense Departmental Reporting System-Budgetary (DDRS-B). The Army also has several legacy systems that continue to process transactions but were not designed to interface with GFEBS, as the accounting formats are not compatible. Therefore, at each month's end, the balances for the legacy systems are configured into a format that can be read by DDRS-B and crosswalked to the accounts within the DDRS-B accounting system before they are sent to DDRS-B. Once the balances have been uploaded, DDRS-B performs edit checks to identify any errors that may exist. DFAS personnel then pull the data into the Electronic Error Correction and Transaction Analysis (ELECTRA) application to review any errors identified during the edit checks and use ELECTRA to create a JV to be uploaded into DDRS-B that makes necessary corrections or adjustments. After the Army has consolidated all information into DDRS-B from both GFEBS and legacy systems each month, DFAS personnel will review the information to determine whether any additional JVs are necessary to correct errors identified during the reconciliation process or to record any JVs that could not be processed prior to the upload into DDRS-B. On a quarterly basis, after all necessary corrections are made, all information in DDRS-B is consolidated and uploaded into a second module within DDRS known as the Defense Departmental Reporting System-Audited Financial Statements (DDRS-AFS), which is used to prepare the financial statements. At this level, additional JVs can be made if DFAS personnel determine that further corrections are necessary or if information is received subsequent to the upload from DDRS-B. The Working Group Has Identified Some Root Causes but Has Not Started Analyses of the Majority of Unsupported Journal Vouchers The Working Group has been actively working toward implementing new processes to support JVs and eliminate unsupported JVs in the Army's general fund. From October 2016 to March 2017, the Working Group, based on DFAS-produced metrics, reported that it had identified more than 121,000 unsupported JVs totaling $455 billion. The Working Group prioritized its identification of root causes of unsupported JVs based upon factors such as potential correction complexity, materiality, and volume. In the May 2017 FIAR Plan Status Report, DOD stated that the Army had completed its root cause analyses for all JVs.
However, we found that the analyses have been performed on only a small percentage of the total number of unsupported JVs that existed as of March 2017 and cannot be considered completed for all unsupported JVs. In addition, members of the Working Group confirmed as of June 2017 that the Working Group has not yet performed all analyses necessary and stated that the Working Group’s efforts will be an ongoing, iterative process because of anticipated new challenges that continually arise from new business processes or programs. A critical capability to achieve auditability, described in the April 2016 FIAR Guidance, is that all material JVs be supported and the population of accounting entries reconcile with each financial statement line item as well as the originating system of the accounting entry. As highlighted in figure 2, the Working Group focused its efforts on manual JVs processed at the consolidated level within DDRS-B as these JVs were consistent with its prioritization method in conducting the analyses and getting the financial statements audit ready. In December 2015, DFAS began requiring a form to be attached to all manual DDRS-B JVs that serves as a checklist of all documents necessary to be included for a JV to be considered supported. However, as indicated on the metrics provided through March 2017, manual unsupported JVs were still being recorded in DDRS-B. Further, as of March 2017, the Working Group had not included in its root cause analyses (1) any unsupported JVs at the transaction level, such as within GFEBS, which are mostly attributed to year-end closing accounting entries; (2) any unsupported JVs within DDRS-AFS, which are used to prepare the financial statements; or (3) system-generated JVs. As a result, a significant number of JVs still require analyses. Our review of the population of unsupported JVs from October 2016 to March 2017 found that system-generated JVs made up 90 percent in dollar value and 97 percent in number of total unsupported JVs. In July 2016, the DOD OIG similarly found that the Working Group had not been analyzing its system-generated JVs and recommended that the Working Group periodically review system-generated JVs to understand the reasons for the JVs and to verify the support for the JVs. Acting on the DOD OIG’s recommendation, members of the Working Group indicated that in March 2017, the Working Group began including system- generated JVs in its population of JVs to analyze but had not yet begun its analyses or identified any root causes. In addition, members of the Working Group stated that in February 2017, the Working Group began planning a new initiative called the Business Mission Area Champions (BMAC) Initiative. This initiative is to expand the Working Group’s analyses of root causes for JVs beyond those recorded at the consolidated level within DDRS-B and begin analyzing transaction-level JVs recorded in its systems that feed into DDRS-B before consolidation, such as GFEBS. However, as of May 2017, no details have been developed, such as planned actions or timelines. Therefore, because of the recent inclusion of the large population of system-generated JVs in the total population and because the BMAC Initiative is not yet under way, a significant amount of analyses remains to fully identify root causes and ultimately correct the Army’s overall unsupported JV issue. Figure 3 summarizes the progress of the Working Group’s root cause analyses of unsupported JVs as of March 2017. 
According to OMB Circular A-123, agencies should perform a root cause analysis of the deficiencies to ensure that subsequent strategies and plans address the root cause of the problem and not just the symptoms. Identifying and developing an understanding of the root cause is management’s responsibility. In addition, Standards for Internal Control in the Federal Government states that management should complete and document corrective actions to remediate control deficiencies on a timely basis. Because it has not yet performed root cause analyses on the majority of unsupported JVs, the Working Group has not been able to identify appropriate corrective actions to eliminate such unsupported JVs and ultimately help produce auditable financial statements for the Army’s general fund. The Working Group Has Developed Corrective Action Plans for All Identified Root Causes but Is Not Sufficiently Monitoring Implementation As of March 2017, members of the Working Group reported that the Working Group had developed 38 corrective action plans to address all of the identified root causes of unsupported manual JVs at the consolidated level in DDRS-B for the Army’s general fund. According to these members, of the 38 corrective action plans developed, 18 have been implemented. In addition, because the root cause analyses have been limited, the Working Group has not yet been able to determine how many more corrective action plans will need to be developed to resolve the unsupported JV issue. Further, we found that the Working Group’s monitoring of corrective action plan implementation does not include a method that sufficiently identifies the progress that has been made toward fully addressing the issue of unsupported JVs or to what extent each implemented corrective action plan has reduced unsupported JVs. The April 2016 FIAR Guidance provides specific tasks for DOD reporting entities, including the Army, to complete so that DOD can achieve its audit readiness objective. One of these tasks specific to JVs is to describe plans for implementing corrective actions to address the root causes of unsupported JVs. In addition, the Implementation Guide for OMB Circular A-123 recommends that management identify measurable indicators to better assess progress and more rapidly make course corrections to ensure timely and effective resolution of identified issues. In order to monitor whether a corrective action plan was implemented effectively, members of the Working Group stated that the Working Group uses metrics to monitor implementation of corrective actions for unsupported manual JVs at the consolidated level in DDRS-B. DFAS creates monthly metrics reports of DDRS-B accounting entries that include all unsupported and supported JVs for the month. The JVs used to create these metrics are the same JVs that the Working Group uses to conduct its JV root cause analyses. However, the metrics organize the JVs into broad categories that do not provide enough details to link to the categories that the Working Group uses. For example, the metrics use broad categories, such as data calls or correcting entries to classify the JVs, whereas the root cause analyses are conducted at a more detailed level, such as for a specific transaction. The Working Group was unable to demonstrate to us which metric category was affected by the corrective action plan implemented for each root cause identified. 
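One way to make that linkage visible would be to carry a root-cause identifier on each JV record and summarize counts and dollar values at that level, as in the sketch below; the record layout and root-cause codes are hypothetical illustrations and are not drawn from DFAS's metrics reports or systems.

```python
# Hypothetical sketch of summarizing JV metrics by root-cause code so that an
# implemented corrective action can be tied to the unsupported JVs it should
# eliminate. Fields and codes are illustrative assumptions only.
from collections import defaultdict

journal_vouchers = [
    {"jv_id": "A1", "supported": False, "amount": 2_500_000,
     "root_cause": "RC-01 prior year refund recorded as current year income"},
    {"jv_id": "A2", "supported": False, "amount": 900_000,
     "root_cause": "RC-07 legacy feeder balance forced to agree"},
    {"jv_id": "A3", "supported": True, "amount": 1_200_000, "root_cause": None},
]

summary = defaultdict(lambda: {"count": 0, "dollars": 0})
for jv in journal_vouchers:
    if not jv["supported"]:
        key = jv["root_cause"] or "not yet analyzed"
        summary[key]["count"] += 1
        summary[key]["dollars"] += jv["amount"]

for root_cause, totals in summary.items():
    print(f'{root_cause}: {totals["count"]} JVs, ${totals["dollars"]:,}')
```

Tracking counts and dollar values by root-cause code from month to month would show whether an implemented corrective action is actually driving its associated unsupported JVs toward zero.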
In addition, the Working Group's reported progress was based on an increase in the percentage of supported JVs in relation to the total population of JVs from October 2016 to March 2017 as reported in the metrics. However, during this period, the metrics also show an increase in the actual number and dollar amount of manual unsupported JVs. Specifically, based on the Working Group's reporting method, the percentage of unsupported manual JVs decreased in relation to the total population of manual JVs from October 2016 to March 2017 by 8 percent in the number of JVs and 14 percent in the dollar value of JVs. However, the actual number and dollar value of the unsupported manual JVs reported on the October 2016 and March 2017 metrics increased from 329 to 462 and from $3.3 billion to $7.0 billion, respectively. Because the metrics used to monitor corrective action plans do not clearly indicate which JVs are affected by a specific root cause identified, resolved JVs cannot be tied to an implemented corrective action plan. Therefore, the Working Group is unable to demonstrate to what extent it has moved closer to eliminating and reducing unsupported JVs and to what extent the Army's financial statements are becoming auditable for fiscal year 2018. In addition, according to members of the Working Group as of March 2017, 16 of the 20 remaining corrective action plans developed are pending implementation because they require system changes, which need to be negotiated with the contractor selected to make the system changes. Therefore, the Working Group has developed temporary mitigating procedures to support those JVs required to be recorded until system changes are implemented. For example, one of the root causes that the Working Group identified was a JV recorded in DDRS-B related to the accounting entry to record a customer refund from a prior year purchase. When DFAS personnel record an accounting entry for a customer refund from a prior year purchase into GFEBS, the system erroneously records the transaction as a current year refund, through the reimbursements and other income earned-collected account, rather than as a current year obligation and outlay as required by OMB Circular A-11, Preparation, Submission, and Execution of the Budget. Until this system change is made to GFEBS so that it can differentiate between the return of a prior year purchase and a current year purchase, DFAS has to reverse each entry and record the current year obligation and outlay with an additional JV in DDRS-B to correct the error and include with the JV an explanation of the situation as well as detail-level transactions to support the JV. DFAS considers this type of support to be a temporary solution; however, until system changes are made, the Working Group will be unable to implement the corrective action plans developed and resolve the root causes identified or determine whether the corrective action plan developed for this issue will be effective.
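The pattern noted above, in which the share of unsupported manual JVs fell even as their number and dollar value rose, can occur whenever the total population of manual JVs grows faster than the unsupported subset. The totals below are hypothetical round numbers chosen only to illustrate the arithmetic; they are not DFAS's actual monthly figures.

```python
# Illustrative arithmetic only: hypothetical totals showing how the unsupported
# share can fall while the unsupported count rises, if total manual JVs grow faster.
oct_unsupported, oct_total = 329, 1_100   # hypothetical October 2016 total
mar_unsupported, mar_total = 462, 2_100   # hypothetical March 2017 total

oct_share = oct_unsupported / oct_total   # about 30 percent
mar_share = mar_unsupported / mar_total   # 22 percent
print(f"Share fell from {oct_share:.0%} to {mar_share:.0%}, "
      f"while the count rose from {oct_unsupported} to {mar_unsupported}.")
```

Monitoring absolute counts and dollar values alongside percentages therefore gives a clearer picture of whether unsupported JVs are actually being reduced.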
Also, the Working Group had not analyzed unsupported JVs at the transaction level or in DDRS-AFS. Although we recognize that these efforts will be an ongoing, iterative process, until the Working Group has identified and completed its analyses of root causes for all unsupported JVs, the Army will not be able to correct the overall unsupported JV issue. As of March 2017, the Working Group had developed 38 corrective action plans and verified successful implementation of 18 corrective action plans to address the issue of unsupported JVs in the Army’s general fund. However, because the Working Group’s root cause analyses were limited to a small percentage of the total unsupported JVs, the development of corrective action plans was likewise limited. In addition, the metrics used to analyze JVs and monitor implemented corrective actions are not designed to provide the level of detail necessary to sufficiently monitor whether root causes identified are resolved through corrective action plans. As a result, the Army cannot determine to what extent it has reduced unsupported JVs in its general fund and how much more effort is required to fully address the issue and help ensure that the Army’s financial statements will be auditable for fiscal year 2018. Recommendations for Executive Action We are making the following two recommendations to the Army: The Assistant Secretary of the Army for Financial Management and Comptroller should ensure that the Working Group identifies and analyzes the full population of manual unsupported JVs at the transaction level and in DDRS-AFS and determines the root causes for these JVs. (Recommendation 1) The Assistant Secretary of the Army for Financial Management and Comptroller should work with DFAS to enhance the monthly JV metrics report or develop another method to sufficiently monitor the extent to which the Working Group has identified the root causes of unsupported JVs and to determine the extent to which unsupported JVs are being reduced based on the implemented corrective actions. (Recommendation 2) Agency Comments We provided a draft of this report to the Army for review and comment. In its written comments, reprinted in appendix II, the Army concurred with our recommendations and provided information on actions it has taken or plans to take to address them. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense (Comptroller) and Chief Financial Officer, the Deputy Chief Financial Officer, the Director of Financial Improvement and Audit Readiness, the Secretary of the Army, and the Director of the Defense Finance and Accounting Service. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9869 or khana@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology Our objectives were to report on the extent to which the Journal Voucher Working Group (Working Group) had, as of March 2017, (1) performed analyses to determine the root causes of unsupported journal vouchers (JV) for the Department of the Army’s (Army) general fund and (2) developed and monitored the implementation of corrective action plans to address identified root causes of unsupported JVs. 
To address our first objective, we reviewed our prior relevant reports and reports by the Department of Defense (DOD) and DOD’s Office of Inspector General to gain an understanding of the nature of the issues identified related to the Army’s unsupported JVs for its general fund and how they affect DOD’s audit readiness as described in the Financial Improvement and Audit Readiness (FIAR) Guidance. We also reviewed various Army, Defense Finance and Accounting Service, and DOD financial reporting guidance; performed walk-throughs of the JV process; and interviewed agency officials to gain an understanding of the processes for recording JVs and analyzing causes of unsupported JVs, including (1) the various types of JVs recorded and reasons why the JVs are unsupported, (2) parties responsible for preparing and approving JVs, and (3) requirements for determining and documenting the root causes of unsupported JVs and monitoring such efforts. Further, we obtained the Working Group’s root cause analyses performed and completed as of March 2017. We analyzed the documentation provided and interviewed agency officials to determine the extent to which (1) root cause analyses have been performed for all unsupported JVs of the Army’s general fund and (2) root causes had been identified to address the issue of unsupported JVs. We also reviewed documentation provided to determine whether the Working Group was including all unsupported JVs in the population used for analyses. To address our second objective, we obtained the Working Group’s documentation of corrective action plans developed and implemented as of March 2017 to address the root causes identified by the Working Group. We inquired about the development and the status and implementation of each corrective action plan. We also inquired about the steps the Working Group has taken to validate and monitor the effectiveness of corrective action plans that have been implemented to address root causes identified. We also inquired about how the Working Group measures its progress using the monthly JV metrics reports, which the Working Group identified as its primary source for monitoring progress, and analyzed the JV metrics reports from October 2016 to March 2017. Further, we inquired of the reasons why corrective action plans had not been fully implemented. We also interviewed members of the Working Group to confirm our understanding of any limitations or challenges identified during the root cause analyses. We obtained their views on these or other concerns and reviewed relevant documentation supporting their evaluation of the root causes, efforts to address the root causes, and the potential impact on the Army’s financial management and future auditability. We interviewed Army officials about efforts to monitor the effectiveness of the Working Group’s corrective action plans to determine what actions, if any, the Army has taken to monitor the corrective action plans regarding deficiencies related to unsupported JVs. We considered whether the Working Group’s efforts for developing and monitoring the implementation of corrective action plans addressing identified deficiencies followed the relevant criteria contained in internal control standards; Office of Management and Budget Circular A-123, Management’s Responsibility for Enterprise Risk Management and Internal Control; and DOD FIAR Guidance. We conducted this performance audit from December 2015 to October 2017 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of the Army

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, the following individuals made key contributions to this report: Lynda Downing (Assistant Director), Francine DelVecchio, Natasha Guerra, Cole Haase, Jason Kelly, and David Scouten.
Why GAO Did This Study

The Department of Defense (DOD) remains on GAO's High-Risk List in part because of its long-standing financial management deficiencies, which have prevented it from having auditable financial statements. One of the contributing factors is the billions of dollars of unsupported JVs within DOD's accounting systems, with the largest portion attributable to the Army's general fund. Because the National Defense Authorization Act for Fiscal Year 2014 required that DOD submit audit results to Congress for fiscal year 2018, a Working Group was established to address the Army's unsupported JVs, including analyzing root causes and developing corrective action plans. This report examines to what extent the Working Group, as of March 2017, has performed analyses and developed corrective action plans to address identified root causes. GAO reviewed and analyzed relevant documentation and interviewed agency officials and members of the Working Group.

What GAO Found

Since February 2015, the Journal Voucher Working Group (Working Group), which is composed of Department of the Army (Army) and Defense Finance and Accounting Service personnel, has been actively working toward implementing new processes to address inadequate support for journal vouchers (JV) in the Army's general fund. JVs are accounting entries manually entered or system generated to record corrections or adjustments in an accounting system. From October 2016 to March 2017, the Working Group identified more than 121,000 unsupported JVs totaling $455 billion in one of its reporting systems, Defense Departmental Reporting System-Budgetary (DDRS-B). In the May 2017 Financial Improvement and Audit Readiness Plan Status Report, the Army stated that it had completed its root cause analyses for all JVs. However, GAO found that the Working Group had conducted the analyses on only a small percentage of the total number of unsupported JVs that existed as of March 2017. Specifically, as of March 2017, the Working Group had been focusing on manual JVs processed in DDRS-B, which represent only 10 percent of the dollar value and 3 percent of the total number of unsupported JVs in that system alone. Therefore, the analyses cannot be considered complete because the Working Group had not yet analyzed the remainder of the population, including those in other systems. Members of the Working Group indicated that in March 2017, the Working Group began including system-generated JVs in its analyses, which made up 90 percent of the dollar value and 97 percent of the total number of unsupported JVs processed in DDRS-B as of March 2017, but had not yet begun identifying any root causes. Members of the Working Group stated that these efforts will be an ongoing, iterative process because of anticipated new challenges that continually arise from new business processes or programs.

As of March 2017, the Working Group reported that it had developed 38 corrective action plans to address all of the identified root causes of unsupported manual JVs in DDRS-B for the Army's general fund. According to members of the Working Group, 18 of the 38 have been implemented. However, the Working Group is unable to determine how many more corrective action plans will need to be developed to resolve the unsupported JV issue until the root cause analyses are complete.
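As a rough illustration of scale, the percentages above can be applied to the totals cited for DDRS-B. This is a back-of-the-envelope calculation only; the report does not state these derived counts, so they are approximations for illustration.

```python
# Figures cited above for DDRS-B, October 2016-March 2017.
total_count = 121_000      # unsupported JVs identified (reported as "more than 121,000")
total_value_billion = 455  # total dollar value of those JVs, in billions

manual_count_share = 0.03  # manual JVs: 3 percent of the total number
manual_value_share = 0.10  # manual JVs: 10 percent of the dollar value

print(f"Manual (analyzed):  ~{total_count * manual_count_share:,.0f} JVs, "
      f"~${total_value_billion * manual_value_share:.1f} billion")
print(f"System-generated:   ~{total_count * (1 - manual_count_share):,.0f} JVs, "
      f"~${total_value_billion * (1 - manual_value_share):.1f} billion")
```

On these approximations, roughly 3,600 manual JVs worth about $45 billion had been within the scope of the analyses, leaving on the order of 117,000 system-generated JVs worth about $410 billion still to be analyzed.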
Further, GAO found that the Working Group's monitoring of corrective action plan implementation does not include a method that sufficiently identifies the progress toward fully addressing the issue of unsupported JVs or to what extent each implemented corrective action plan has reduced unsupported JVs. Members of the Working Group stated that the Working Group uses monthly JV metrics reports to monitor its implemented corrective actions, but the reports are not designed to provide the level of detail necessary to sufficiently monitor whether root causes identified are resolved through corrective action plans. Therefore, the metrics cannot demonstrate to what extent the Working Group has reduced unsupported JVs and how much more effort is required to fully address the issue and help ensure that the Army's financial statements are auditable for fiscal year 2018.

What GAO Recommends

GAO recommends that the Army (1) ensure that the entire population of unsupported JVs is identified and analyzed and (2) develop metrics that sufficiently monitor the extent to which the Working Group has identified root causes and determine the extent to which unsupported JVs are being reduced based on the implemented corrective actions. The Army concurred with GAO's recommendations and provided information on actions it has taken or plans to take to address them.
Background

Application of Federal Laws in Puerto Rico

Puerto Rico, which has approximately 3.3 million residents according to U.S. Census Bureau (Census) estimates, is the largest and most populous territory of the United States. As a territory, Puerto Rico is subject to congressional authority, though Congress has granted it broad authority over matters of internal governance—notably, by approving Puerto Rico's constitution in 1952. Individuals born in Puerto Rico are U.S. citizens and can migrate freely to the states. Puerto Rico and its residents are generally subject to the same federal laws as the states and their residents, except in cases where specific exemptions have been made, such as with certain federal programs. For example, Puerto Rico residents generally have full access to Social Security and unemployment insurance; however, for some programs, such as Medicaid, federal funding in Puerto Rico is restricted as compared to funding in the states.

Residents of Puerto Rico are exempt from paying federal income tax on income from sources in Puerto Rico. Residents are required to pay federal income tax on income from sources outside of Puerto Rico. They are also required to pay federal employment taxes, such as Social Security and Medicare taxes, on their income regardless of where it was earned. Puerto Rico residents are also ineligible for certain federal tax credits.

Corporations located in Puerto Rico are generally subject to the same federal tax laws as corporations located in a foreign country. Corporations in Puerto Rico are generally exempt from federal taxes on profits except as such profits are effectively connected to a trade or business in the states, and so long as those profits remain held outside of the states. Additionally, these corporations were subject to a withholding tax on certain investment income from the United States not connected to a trade or business. Under Public Law 115-97, enacted in 2017, starting in 2018 U.S. corporations that are shareholders in foreign corporations, such as those organized under Puerto Rico law, generally do not owe tax on dividends received from those foreign corporations. Prior to this law, dividend payments to U.S. corporate shareholders were considered taxable income for the U.S. parent corporation. Prior to 1996, a federal corporate income tax credit—the possessions tax credit—was available to certain U.S. corporations that located in Puerto Rico. In general, the credit equaled the full amount of federal tax liability related to an eligible corporation's income from its operations in a possession—including Puerto Rico—effectively making such income tax-free. In 1996, the tax credit was repealed, although corporations that were existing credit claimants were eligible to claim credits through 2005.

Puerto Rico's Economy and Labor Force

Puerto Rico's economy is in a prolonged period of economic contraction. According to data from Puerto Rico's government, Puerto Rico's economy grew in the 1990s and early 2000s. However, between 2005 and 2016—the latest year for which data were available as of March 1, 2018—Puerto Rico's economy experienced year-over-year declines in real output in all but two years, as measured by real gross domestic product (GDP). From 2005 to 2016, Puerto Rico's real GDP fell by more than 9 percent (from $82.8 billion to $75.0 billion in 2005 dollars).
Puerto Rico’s gross national product (GNP) followed a similar pattern over the same period, declining by more than 11 percent from 2005 to 2016 (from $53.8 billion to $47.7 billion in 2005 dollars). Figure 1 shows Puerto Rico’s real GDP and GNP growth rates from 1991 through 2016. The decline in Puerto Rico’s output has, in more recent years, occurred in conjunction with a decline in Puerto Rico’s population. According to Census estimates, Puerto Rico’s population declined from a high of approximately 3.8 million people in 2004 to 3.3 million people in 2017, a decline of 12.8 percent. This population loss closely matched the decline in real output. From 2004 to 2016, Puerto Rico’s real GNP fell by 9.5 percent, while its real GNP per capita increased by 1.6 percent over the same time period. In addition to Puerto Rico’s declining population, the territory also has a lower share of employed persons compared to the United States as a whole. As of 2017, approximately 37 percent of Puerto Rico residents were employed compared to approximately 60 percent for the United States as a whole. Puerto Rico’s employment-to-population ratio reached highs in 2005 and 2006 when it was approximately 43 percent, according to data from the Federal Reserve Bank of St. Louis. According to data from the Bureau of Labor Statistics (BLS), between 2005 and 2017, Puerto Rico’s unemployment rate fluctuated between 10.2 percent and 17.0 percent, with an average of 13.1 percent. During the same period, the nationwide unemployment rate fluctuated between 4.1 percent and 10.0 percent, with an average of 6.5 percent. These factors have combined to leave Puerto Rico with a small and declining labor force. From January 2006 to December 2017—the latest month for which data were available as of March 1, 2018—Puerto Rico’s labor force decreased from approximately 1.4 million persons to 1.1 million persons, according to data from BLS. Puerto Rico Government Financial Condition Puerto Rico’s government has operated with a deficit—where expenses exceed revenues—in each fiscal year since 2002, and its deficits grew over time (see figure 2). Puerto Rico’s governmental activities can be divided among the primary government and component units. Puerto Rico’s primary government provides and funds services such as public safety, education, health care, and economic development. Puerto Rico’s component units are legally separate entities for which its government is nonetheless financially accountable, and provide services such as public transportation, highways, electricity, and water. In fiscal year 2014, the latest for which audited financial data are available, the Puerto Rico government collected $32.5 billion in revenue, of which $19.3 billion was collected by the primary government, and $13.2 billion was collected by the component units. That year Puerto Rico’s government spent $38.7 billion, of which $22.0 billion was spent directly by the primary government, while $16.7 billion was spent by the government’s various component units. The Puerto Rico Electric Power Authority (PREPA), which operates the territory’s electricity generation and distribution infrastructure, represented the largest component unit expenditure in fiscal year 2014. Figures 3 and 4 show a breakdown of expenses for Puerto Rico’s primary government and its component units, respectively. Puerto Rico’s government spending accounts for more than a third of the territory’s GDP. 
In fiscal year 2014—the latest year for which audited spending data were available as of March 1, 2018—primary government expenditures of $22.0 billion represented 21 percent of the territory’s GDP. Including component spending, total public expenditures were $38.7 billion, which represented 38 percent of the territory’s GDP. By comparison, our prior work has shown that in 2014, total state and local government expenditures represented about 14 percent of GDP for the United States as a whole, excluding territories. Federal government expenditures were 20 percent of GDP for the United States as a whole in 2014. Puerto Rico Debt Puerto Rico’s total public debt as a share of its economy has grown over time. In 2002, the value of its debt was 42 percent of the territory’s GDP, and 67 percent of its GNP. Both of these ratios grew over time such that by 2014, Puerto Rico’s total public debt was 66 percent of the territory’s GDP and 99 percent of its GNP. Figure 5 compares Puerto Rico’s total public debt to its GDP and GNP, in both aggregate and per capita. As of the end of fiscal year 2014, the last year for which Puerto Rico issued audited financial statements, Puerto Rico had $67.8 billion in net public debt outstanding, or $68.1 billion excluding accounting adjustments that are not attributed in the financial statements to specific agencies. Of the $68.1 billion, $40.6 billion was owed by Puerto Rico’s primary government, and $27.6 billion was owed by its component units, as shown in figure 6 (these amounts do not sum to $68.1 billion because of rounding). The growth of Puerto Rico’s total debt resulted in greater annual debt servicing obligations. In fiscal year 2002, it cost Puerto Rico $2.7 billion to service its debt, representing about 12 percent of Puerto Rico’s $21.6 billion in total public revenue for that year. By fiscal year 2014, Puerto Rico’s annual debt service cost rose to $5.0 billion, representing just over 15 percent of Puerto Rico’s $32.5 billion in total public revenue for that year. Following years of expenditures that exceeded revenue, and a growing debt burden, in August 2015, Puerto Rico failed to make a scheduled bond payment. Since then, Puerto Rico has defaulted on over $1.5 billion in debt. In June 2016, Congress enacted and the President signed PROMESA in response to Puerto Rico’s fiscal crisis. PROMESA established a Financial Oversight and Management Board for Puerto Rico (Oversight Board), and granted it broad powers of fiscal and budgetary control over Puerto Rico. PROMESA also established a mechanism through which the Oversight Board could petition U.S. courts on Puerto Rico’s behalf to restructure debt. Under federal bankruptcy laws, Puerto Rico is otherwise prohibited from authorizing its municipalities and instrumentalities from petitioning U.S. courts to restructure debt. The Oversight Board petitioned the U.S. courts to restructure debt on behalf of Puerto Rico’s Highways and Transportation Authority and the Government Employees Retirement System on May 21, 2017 and on behalf of PREPA on July 2, 2017. Pension Obligations In addition to its debt obligations, Puerto Rico also faces a large financial burden from its pension obligations for public employees. Puerto Rico’s public pension systems had unfunded liabilities of approximately $49 billion as of the end of fiscal year 2015, the most recent year for which data are available. 
Unfunded pension liabilities are similar to other kinds of debt because they constitute a promise to make a future payment or provide a benefit. Officials and Experts Cited Various Factors as Contributing to Puerto Rico’s Financial Condition and Levels of Debt Factors that Contributed to Puerto Rico’s Persistent Deficits Based on interviews with current and former Puerto Rico officials, federal officials, and other relevant experts, as well as a review of relevant literature, the factors that contributed to Puerto Rico’s financial condition and levels of debt related to: (1) Puerto Rico’s government running persistent deficits and (2) its use of debt to cope with deficits. As previously mentioned, Puerto Rico’s government has operated with a deficit in all years since 2002, and deficits grew over time. To cope with its deficits, Puerto Rico’s government issued debt to finance operations, rather than reduce its fiscal gap by cutting spending, raising taxes, or both. Through interviews with current and former Puerto Rico officials; federal officials; experts in Puerto Rico’s economy, the municipal securities markets, and state and local budgeting and debt management; as well as a review of relevant literature, we identified three groups of factors that contributed to Puerto Rico’s persistent deficits: (1) inadequate financial management and oversight practices, (2) policy decisions, and (3) prolonged economic contraction. Some of the factors in these groups may be interrelated. Factors that Enabled Puerto Rico to Use Debt to Finance Operations To cope with its persistent deficits, Puerto Rico issued debt to finance operations. In reviewing 20 of Puerto Rico’s largest bond issuances from 2000 to 2017, totaling around $31 billion, we found that 16 were issued exclusively to repay or refinance existing debt and to fund operations. According to ratings agency officials and experts in state and local government, states rarely issue debt to fund operations, and many states prohibit this practice. According to former Puerto Rico officials and experts on Puerto Rico’s economy, high demand for Puerto Rico debt and the Government Development Bank for Puerto Rico (GDB) facilitating rising debt levels enabled Puerto Rico to continue to use debt to finance operations. High Demand for Puerto Rico Debt Puerto Rico issued a relatively large amount of debt, given the size of its population. Based on an analysis of fiscal year 2014 comprehensive annual financial reports of the 50 states and Puerto Rico, Puerto Rico had the second highest amount of outstanding debt among states and territories, while its population falls between the 29th and 30th most populous states. By comparison, California, the state with the largest amount of outstanding debt, is the most populated state. Various factors drove demand for Puerto Rico municipal bonds, even as the government’s financial condition deteriorated. Triple tax exemption: According to a former Puerto Rico official, Federal Reserve Bank of New York officials, and an expert on Puerto Rico’s economy, Puerto Rico’s municipal bonds were attractive to investors because interest on the bonds was not subjected to federal, state, or local taxes, regardless of where the investors resided. In contrast, investors may be required to pay state or local taxes on interest income earned from municipal securities issued by a state or municipality in which they do not reside. 
Investment grade bond ratings: Puerto Rico maintained investment grade bond ratings until February 2014, even as its financial condition was deteriorating. Credit ratings inform investment decisions by both institutional investors and broker dealers. According to a current Puerto Rico official and an expert on Puerto Rico's economy, investment grade ratings for Puerto Rico municipal bonds may have driven demand for these securities in the states. Based on interviews with ratings agency officials and a review of rating agency criteria, we found that Puerto Rico may have maintained its investment grade rating for two reasons. First, Puerto Rico could not seek debt restructuring under federal bankruptcy laws, prior to the passage of PROMESA in 2016. According to rating agency officials, bonds with assumed bankruptcy protection tend to rate higher than those without such protection. Second, legal frameworks that prioritize debt service are often viewed as positive for credit ratings, according to rating agency criteria. In the event that the Puerto Rico government does not have sufficient resources to meet appropriations for a given fiscal year, Puerto Rico's constitution requires that the government pay interest and amortization on the public debt before disbursing funds for other purposes in accordance with the order of priorities established by law. The prior Puerto Rico Governor cited this constitutional provision as providing the authority to redirect revenue streams from certain entities to the payment of general obligation debt. This redirection of revenue streams is commonly known as a clawback.

Lack of transparency on its financial condition: Municipal market analysts told us that untimely financial information made it difficult for institutional and individual investors to assess Puerto Rico's financial condition, which may have resulted in investors not being able to fully take the investment risks into account when purchasing Puerto Rico debt. According to one report, between 2010 and 2016 municipal issuers issued their audited financial statements an average of 200 days after the end of their fiscal years. However, between fiscal years 2002 and 2014, Puerto Rico issued its statements an average of 386 days after the end of its fiscal year, according to our analysis of Puerto Rico's audited financial statements. Moreover, Puerto Rico had not issued its fiscal years 2015 and 2016 audited financial statements as of March 1, 2018, or 975 and 609 days after the end of those fiscal years, respectively.

Estate tax structures: Puerto Rico residents had incentive to invest in municipal bonds issued in Puerto Rico over those issued in the United States because of federal and Puerto Rico estate tax structures. Current and former Puerto Rico officials told us that this incentive drove demand among Puerto Rico residents for bonds issued in Puerto Rico. For federal estate tax purposes, Puerto Rico residents are generally considered non-U.S. residents and non-citizens for all of their U.S.-based property, including investments. Estates of Puerto Rico residents are required to pay the prevailing federal estate tax—which ranges from 18 percent to 40 percent depending on the size of an estate—for any U.S.-based property valued over $60,000. In contrast, prior to 2017, all Puerto Rico-based property was only subject to the Puerto Rico estate tax of 10 percent. Puerto Rico's estate tax was repealed in 2017.
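To make the estate tax incentive concrete, consider a rough, hypothetical comparison. The $60,000 threshold, the 18 to 40 percent graduated federal rates, and the 10 percent Puerto Rico rate come from the discussion above; the $1 million portfolio value is an assumption, and the calculation ignores exemptions, credits, and the actual federal bracket schedule, so the federal figure is only a lower bound.

```python
# Hypothetical $1 million bond portfolio held by a Puerto Rico resident.
portfolio = 1_000_000
federal_exclusion = 60_000      # U.S.-based property above this was subject to federal estate tax
lowest_federal_rate = 0.18      # graduated federal rates ranged from 18 to 40 percent
pr_rate = 0.10                  # flat Puerto Rico estate tax on Puerto Rico-based property (pre-2017)

# If held as U.S.-based property: at a minimum, the lowest bracket rate applies
# to the taxable amount, so this is a lower bound on the federal liability.
federal_tax_floor = (portfolio - federal_exclusion) * lowest_federal_rate   # $169,200

# If held as Puerto Rico-based property before 2017: flat 10 percent.
pr_tax = portfolio * pr_rate                                                # $100,000

print(f"Federal estate tax (lower bound): ${federal_tax_floor:,.0f}")
print(f"Puerto Rico estate tax:           ${pr_tax:,.0f}")
```

Even under these simplifying assumptions, holding the portfolio as Puerto Rico-based property would have produced a substantially lower estate tax bill, which is consistent with the incentive described above.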
Puerto Rico’s Government Development Bank Facilitated Rising Debt Levels In addition to financing from the municipal bond markets, GDB also provided an intragovernmental source of financing. Prior to April 2016, GDB acted as a fiscal agent, trustee of funds, and intergovernmental lender for the Government of Puerto Rico. GDB issued loans to Puerto Rico’s government agencies and public corporations to support their operations. GDB provided loans to government entities valued at up to 60 percent of GDB’s total assets, as shown in Figure 11. In general, these entities did not fulfill the terms of their borrowing agreements with GDB, while they independently accessed the municipal bond market. Additionally, according to GDB’s audited financial statements, GDB did not reflect loan losses in its audited financial statements until 2014 because it presumed that Puerto Rico’s legislature would repay loans through the general fund or appropriations, as generally required by the acts that approved such loans. Facing non-repayment of public sector loans, GDB took on debt to maintain liquidity. According to GDB documents, repayment of amounts owed to GDB was a main reason for the creation of the Puerto Rico Sales Tax Financing Corporation (COFINA), an entity backed by a new sales tax, through which Puerto Rico issued some of its debt. Though initially intended as a means to repay GDB and other debt, COFINA bonds were also used to finance operations. Actions That Could Address Factors that Contributed to Puerto Rico’s Unsustainable Debt Levels Through our interviews and an assessment of relevant literature, we identified three potential federal actions that could help address some of the factors that contributed to unsustainable indebtedness in Puerto Rico. Consistent with the provision in PROMESA that was the statutory requirement for this work, we focused on actions that were non-fiscal in nature—that is, actions that would not increase the federal deficit. There are tradeoffs for policymakers to consider when deciding whether or how to implement any policy. For each action, we describe a specific challenge as it relates to debt accumulation in Puerto Rico, identify a possible federal response to the challenge, and describe other considerations for policymakers. Action 1: Modify SEC’s Authority over Municipal Securities Disclosure Requirements To help address the factors that contributed to the high demand for Puerto Rico debt relative to other municipal debt, legislative and executive branch policymakers could further ensure that municipal securities issuers provide timely, ongoing, and complete disclosure materials to bondholders and the public. Specifically, Congress could authorize SEC to establish requirements for municipal issuers on the timing, frequency, and content of initial and continuing disclosure materials. Challenge In general, the municipal securities market is less regulated and transparent than other capital markets, such as equity markets. For example, SEC’s authority to directly establish or enforce initial and continuing disclosure requirements for issuers—including those in Puerto Rico—is limited. SEC requires that underwriters (sellers of municipal securities) reasonably determine that issuers have undertaken continuing disclosure agreements (CDA) to publicly disclose ongoing annual financial information, operating data, and notices of material events. 
However, federal securities laws do not provide SEC with the authority to impose penalties on municipal issuers for noncompliance with CDAs, which may limit any incentive for issuers to comply with SEC disclosure and reporting guidance. As a result, SEC has limited ability to compel issuers to provide continuing disclosure information. As previously discussed, the Puerto Rico government often issued its audited financial statements in an untimely manner, thus failing to meet its contractual obligations to provide continuing disclosures for securities it issued. SEC could not directly impose any consequences on Puerto Rico’s government for failing to adhere to the terms of, or enforce compliance with, the CDAs. Additionally, as previously discussed, municipal market analysts told us that untimely financial information made it difficult for institutional and individual investors to assess Puerto Rico’s financial condition. Addressing the Challenge Timely disclosure of information would help investors make informed decisions about investing in municipal securities and help protect them against fraud involving the securities. These disclosures would be made to investors at the time of purchasing securities and throughout the term of the security, including when material changes to an issuer’s financial condition occur. According to SEC staff, enhanced authority could prompt more municipal issuers to disclose financial information, including audited financial statements, in a timelier manner. For example, SEC staff said that if the agency had required that issuers provide timely financial statements at the time of issuing a municipal security, this may have precluded Puerto Rico from issuing its $3.5 billion general obligation bond in 2014. However, any rulemaking SEC would or could take as a result of enhanced authority would depend on a number of factors, such as compliance with other SEC guidance and related laws. Other Considerations Since this action would apply to all U.S. municipal securities issuers, it has policy and implementation implications that extend well beyond Puerto Rico. For example, establishing and enforcing initial and continuing disclosure requirements for municipal securities issuers could place additional burdens on state and local issuers, and not all municipal issuers use standardized accounting and financial reporting methods. As a result, state and local governments may need to spend resources to adjust financial reporting systems to meet standardized reporting requirements. However, in a 2012 report proposing this action, SEC said it could mitigate this burden by considering content and frequency requirements that take into account, and possibly vary by, the size and nature of the municipal issuer, the frequency of issuance of securities, the type of municipal securities offered, and the amount of outstanding securities. Action 2: Apply Federal Investor Protection Laws to Puerto Rico To help address the factors that contributed to the high demand for Puerto Rico debt relative to other municipal debt, Congress could ensure that investors residing in Puerto Rico receive the same federal investor protections as investors residing in states. Specifically, Congress could subject all investment companies in Puerto Rico to the Investment Company Act of 1940, as amended (1940 Act). In recent years, the House and Senate separately have passed legislation that would achieve this action. 
Challenge Certain investment companies in Puerto Rico and other territories— specifically, those whose securities are sold solely to the residents of the territory in which they are located—are exempt from the 1940 Act’s requirements. The 1940 Act regulates investment companies, such as mutual funds that invest in securities of other issuers and issue their own securities to the investing public. It imposes several requirements on investment companies intended to protect investors. For example, it requires that investment companies register with SEC and disclose information to investors about the businesses and risks of the companies in which they invest, and the characteristics of the securities that they issue. It also restricts investment companies from engaging in certain types of transactions, such as purchasing municipal securities underwritten by affiliated companies. According to a former Puerto Rico official, some broker-dealers in Puerto Rico underwrote Puerto Rico municipal securities issuances and investment companies managed by affiliated companies of these underwriters purchased the securities, packaged them into funds, and marketed the funds to investors residing in Puerto Rico. This practice would be prohibited or restricted for investment companies subject to the 1940 Act, as it might result in investment companies not acting in the best interests of their investors. Addressing the Challenge If all Puerto Rico investment companies had been subject to the 1940 Act, they would have been prohibited or restricted from investing in Puerto Rico municipal bonds underwritten by affiliated companies. Also, these investment companies may have further disclosed the risks involved in Puerto Rico municipal bonds to Puerto Rico investors. As a result, demand for Puerto Rico municipal bonds from Puerto Rico investment companies and residents may have been lower had the 1940 Act requirements applied to all Puerto Rico investment companies, and it may have been more difficult for the Puerto Rico government to issue debt to finance deficits. Other Considerations SEC staff told us that industry groups had raised objections to extending the 1940 Act provisions to all investment companies in Puerto Rico. These industry groups noted that, among other things, certain investment companies would have difficulty meeting the 1940 Act’s leverage and asset coverage requirements and adhering to some restrictions on affiliated transactions. However, SEC staff noted that under certain legislation that passed the House or Senate separately, as described above, Puerto Rico investment companies would have three years to come into compliance if they were newly subject to the 1940 Act. Further, under that legislation, after three years, investment companies in Puerto Rico could also request an additional three years to come into compliance. Regarding affiliated company restrictions, SEC has previously waived some requirements for investment companies if they are unable to obtain financing by selling securities to unaffiliated parties with an agreement to repurchase those securities at a higher price in the future, known as repurchase agreements. According to SEC staff, SEC would consider allowing companies in Puerto Rico to enter into reverse repurchase agreements with their affiliates if the 1940 Act applied to them. 
Action 3: Modify the Tax Exemption Status for Puerto Rico Municipal Securities

To help address the factors that contributed to the high demand for Puerto Rico debt relative to other municipal debt, Congress could remove the triple tax exemption for Puerto Rico's municipal securities. This action would mean that interest income from Puerto Rico municipal securities earned by investors residing outside of Puerto Rico could be taxed by states and local governments, while still being exempt from federal income taxes, similar to the current tax treatment of municipal bond income in the states.

Challenge

As mentioned previously, former Puerto Rico officials and experts in municipal securities told us that the triple tax exemption fueled investor demand and enabled Puerto Rico to continue issuing bonds despite deteriorating financial conditions. Some of the demand for Puerto Rico municipal securities came from certain U.S. municipal bond funds. These funds concentrated their investments in one state to sell to investors within that state, but also included Puerto Rico bonds in their portfolios. Puerto Rico bond yields generally were higher than state bond yields, according to industry experts. When added to a fund, the higher yields from Puerto Rico bonds would increase the overall yield of the fund.

Addressing the Challenge

Modifying the triple tax exemption for Puerto Rico's municipal securities might result in reduced demand for Puerto Rico's debt. In response to reduced demand for its debt, Puerto Rico's government may need to address any projected operating deficits by decreasing spending, raising revenues, or both.

Other Considerations

According to U.S. Treasury officials, this action could increase the proportionate share of investors in Puerto Rico debt that reside in Puerto Rico, because of reduced demand from investors in the states. In the event of a future debt crisis, this could result in a concentration of financial losses within Puerto Rico. Also, debt financing allows governments to make needed capital investments and provides liquidity to governments, and can be a more stable funding source to manage fiscal stress. Reduced market demand for Puerto Rico's bonds could make access to debt financing difficult, as the Puerto Rico bond market may not support the Puerto Rico government's future borrowing at reasonable interest rates, according to Treasury officials. Alternatively, a variant of this action would be to retain the triple tax exemption for Puerto Rico debt only for bonds related to capital investments rather than for deficit financing, according to Treasury officials.

Other Federal Actions Taken to Address Puerto Rico's Fiscal Condition

Various provisions in PROMESA were intended to help Puerto Rico improve its fiscal condition. PROMESA requires that the Oversight Board certify fiscal plans for achieving fiscal responsibility and access to capital markets. The intent of the fiscal plans is to eliminate Puerto Rico's structural deficits; create independent revenue estimates for the budget process; and improve Puerto Rico's fiscal governance, accountability, and controls, among other things. From March 2017 to April 2017, the Oversight Board certified the fiscal plans the Government of Puerto Rico developed for the primary government and certain component units, such as PREPA. As a result of the effects of Hurricanes Irma and Maria, the Oversight Board requested that the Government develop updated fiscal plans.
Although the Government of Puerto Rico developed and submitted updated fiscal plans, the Oversight Board did not certify them, with the exception of the plan for GDB. Instead, in April 2018, the Oversight Board certified fiscal plans it developed itself, as PROMESA allows. PROMESA also requires the Oversight Board to determine whether or not Puerto Rico’s annual budgets, developed by the Governor, comply with the fiscal plans prior to being submitted to Puerto Rico’s legislature for approval. Technical assistance is another area where the federal government has taken action to help Puerto Rico address its fiscal condition. In 2015, Congress first authorized Treasury to provide technical assistance to Puerto Rico, and has continued to reauthorize the technical assistance, most recently through September 30, 2018. For example, Treasury officials told us that they helped Puerto Rico’s Planning Board develop a more accurate macroeconomic forecast, which should enable Hacienda to develop more accurate revenue estimates and receipt forecasts. Treasury officials also told us that the agency began helping Puerto Rico improve its collection of delinquent taxes—for example, by helping Hacienda develop an office dealing with Puerto Rico’s largest and most sophisticated taxpayers, which are often multinational corporations. With Puerto Rico focused on hurricane recovery efforts, Treasury and the Puerto Rico government are reassessing the types of assistance that Treasury might provide in the future, according to Treasury officials. Current and former Puerto Rico government officials and experts on Puerto Rico’s economy also told us that the federal government could further help Puerto Rico address its persistent deficits through federal policy changes that are fiscal in nature. For example, it could change select federal program funding rules—at a cost to the federal government—such as eliminating the cap on Medicaid funding and calculating the federal matching rate similar to how the rate is calculated in the states. Likewise, the Congressional Task Force on Economic Growth in Puerto Rico (Congressional Task Force), as established by PROMESA, issued a report in December 2016 that recommended changes to federal laws and programs that would spur sustainable long- term economic growth in Puerto Rico, among other recommendations. Puerto Rico Plans to Take Actions to Address Its Fiscal Condition and Debt Levels In addition to federal actions that could address the factors that contributed to Puerto Rico’s fiscal condition and debt levels, the Puerto Rico government plans to take various actions. For example, according to current Puerto Rico officials and the Puerto Rico government’s April 2018 fiscal plan, the government is: Planning to implement an integrated new information technology system for financial management, to include modernized revenue management and accounting and payroll systems. Hacienda officials stated that they are in the process of developing a project schedule for this long-term effort. Developing a new public healthcare model in which Puerto Rico’s government pays for basic services and patients pay for premium services. The government will begin implementing the new healthcare model in fiscal year 2019 and expects to achieve annual savings of $841 million by fiscal year 2023. 
Collaborating with the private sector for future infrastructure and service projects, including for reconstruction efforts related to Hurricanes Irma and Maria, which it expects will stimulate Puerto Rico’s weakened economy. We also asked Puerto Rico officials about progress made toward addressing many of the factors we identified. However, they did not provide us this information. Agency Comments, Third Party Views, and Our Evaluation We provided a draft of this report for review to Treasury, SEC, the Federal Reserve Bank of New York, the Government of Puerto Rico, and the Oversight Board. Treasury and SEC provided technical comments, which we incorporated as appropriate. The Federal Reserve Bank of New York and the Oversight Board had no comments. We received written comments from the Government of Puerto Rico, which are reprinted in appendix II. In its comments, the Government of Puerto Rico generally agreed with the factors we identified that contributed to Puerto Rico’s financial condition and levels of debt. It also provided additional context on Puerto Rico’s accumulation of debt, such as Puerto Rico’s territorial status and its effect on federal programs in Puerto Rico and outmigration. The Government of Puerto Rico also noted that the federal actions we identified to address factors contributing to Puerto Rico’s unsustainable debt levels did not include potential actions that were fiscal in nature or that addressed Puerto Rico’s long-term economic viability. As we note in the report, we excluded fiscal actions from our scope, consistent with the provision in PROMESA that was the statutory requirement for this work. We excluded potential actions that could promote economic growth in Puerto Rico because these actions would address debt levels in Puerto Rico only indirectly and because the Congressional Task Force on Economic Growth in Puerto Rico already recommended actions for fostering economic growth in Puerto Rico in its December 2016 report. We are sending copies of the report to the appropriate congressional committees, the Government of Puerto Rico, the Secretary of the Treasury, the Chairman of the Securities and Exchange Commission, and other interested parties. In addition, this report is available at no charge on the GAO website at http://gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or krauseh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology Our objectives were to describe (1) the factors that contributed to Puerto Rico’s financial condition and levels of debt; and (2) federal actions that could address the factors that contributed to Puerto Rico’s financial condition and levels of debt. Consistent with the provision in the Puerto Rico Oversight, Management, and Economic Stability Act (PROMESA) that was the statutory requirement for this work, we focused on actions that would not increase the federal deficit. For both objectives we interviewed current Puerto Rico officials from several agencies—the Puerto Rico Department of Treasury (Hacienda in Spanish), Government Development Bank for Puerto Rico (GDB), the Puerto Rico Office of Management and Budget (Spanish acronym OGP), Fiscal Agency and Financial Advisory Authority (FAFAA), and the Puerto Rico Electric Power Authority. 
We also interviewed 13 former Puerto Rico officials that held leadership positions at Hacienda, GDB, or OGP, or a combination thereof. These former officials served between 1997 and 2016 for various gubernatorial administrations associated with the two political parties in Puerto Rico that held the governorship during that period. We also interviewed officials from the U.S. Department of the Treasury (Treasury), the Securities and Exchange Commission (SEC), the Federal Reserve Bank of New York, and the Financial Oversight and Management Board for Puerto Rico (created by PROMESA). Additionally, we conducted another 13 interviews with experts on Puerto Rico’s economy, the municipal securities markets, state and territorial budgeting and debt management—including credit rating agencies—and with select industry groups in Puerto Rico. We selected the experts we interviewed based on their professional knowledge closely aligning with our engagement objectives, as demonstrated through published articles, congressional testimonies, and referrals from agency officials or other experts. To describe the factors that contributed to Puerto Rico’s financial condition and levels of debt, we reviewed our prior work related to Puerto Rico’s financial condition and levels of public debt. We also collected and analyzed additional financial data from Puerto Rico’s audited financial statements for the fiscal years 2002 to 2014, the last year for which audited financial statements were available. To determine how the Puerto Rico government used bond proceeds, we reviewed a nongeneralizable sample of Puerto Rico bonds prospectuses issued between 2000 and 2017 from the Electronic Municipal Market Access database of the Municipal Securities Rulemaking Board. We reviewed literature—including academic reports, congressional hearing transcripts, and credit rating agency reports—that described Puerto Rico’s economy and factors that contributed to Puerto Rico’s levels of debt. We also reviewed credit rating agency reports that described Puerto Rico’s municipal debt and the agencies’ methodologies for rating municipal debt. We also collected and reviewed Puerto Rico government documents related to budget formulation and execution, debt issuance, and financial management. We considered factors to include, but not be limited to, macroeconomic trends, federal policies, and actions taken by Puerto Rico government officials. Our review focused largely, though not exclusively, on conditions that contributed to the debt crisis during those years for which we collected financial data on Puerto Rico, fiscal years 2002 to 2014. Finally, we also conducted a thematic analysis of the summaries of our interviews to identify common patterns and ideas. Although these results are not generalizable to all current and former officials and experts with this subject-matter expertise, and do not necessarily represent the views of all the individuals we interviewed, the thematic analysis provided greater insight and considerations for the factors we identified. To describe federal actions that could address the factors that contributed to Puerto Rico’s financial condition and levels of debt, we reviewed our prior reports and documents from Treasury and SEC, conducted a literature review, and conducted various interviews. Specifically, we met with federal agencies with subject-matter expertise or whose scope of responsibilities related to these actions, as well as with current and former Puerto Rico officials and municipal securities experts. 
Consistent with PROMESA, we omitted from our scope: (1) actions that could increase the federal deficit (i.e., fiscal options), (2) actions that could be taken by the Puerto Rico government, (3) actions that could infringe upon Puerto Rico’s sovereignty and constitutional parameters, and (4) actions that would imperil America’s homeland and national security. We considered actions that could promote economic growth in Puerto Rico as outside of scope, as they could address debt levels in Puerto Rico indirectly, rather than directly, and because a study issued by the Congressional Task Force on Economic Growth in Puerto Rico already identified actions that Congress and executive agencies could take to foster economic growth in Puerto Rico. We also considered actions that could address Puerto Rico’s unfunded pension liability as outside of our scope. The actions we identified may also help avert future unsustainable debt levels in other territories; however, we did not assess whether and how each action would apply to other territories. We conducted this performance audit from January 2017 to May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Government of Puerto Rico Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Jeff Arkin (Assistant Director), Amy Radovich (Analyst in Charge), Pedro Almoguera, Karen Cassidy, Daniel Mahoney, A.J. Stephens, and Justin Snover made significant contributions to this report.
Why GAO Did This Study

Puerto Rico has roughly $70 billion in outstanding debt and $50 billion in unfunded pension liabilities and since August 2015 has defaulted on over $1.5 billion in debt. The effects of Hurricanes Irma and Maria will further affect Puerto Rico's ability to repay its debt, as well as its economic condition. In response to Puerto Rico's fiscal crisis, Congress passed the Puerto Rico Oversight, Management, and Economic Stability Act (PROMESA) in 2016, which included a provision for GAO to review Puerto Rico's debt. This report describes the factors that contributed to Puerto Rico's financial condition and levels of debt and federal actions that could address these factors. Consistent with PROMESA, GAO focused on actions that would not increase the federal deficit. To address these objectives, GAO reviewed documents and interviewed officials from the Puerto Rico and federal governments and conducted a review of relevant literature. GAO also interviewed former Puerto Rico officials and experts in Puerto Rico's economy, the municipal securities markets, and state and territorial budgeting, financial management, and debt practices, as well as officials from the Financial Oversight and Management Board for Puerto Rico (created by PROMESA). GAO is not making recommendations based on the federal actions identified because policymakers would need to consider challenges and tradeoffs related to implementation. The Puerto Rico government generally agreed with the factors GAO identified and provided additional information. GAO incorporated technical comments from SEC as appropriate.

What GAO Found

The factors that contributed to Puerto Rico's financial condition and levels of debt relate to (1) the Puerto Rico government running persistent annual deficits—where expenses exceed revenues—and (2) its use of debt to cope with deficits. Based on a literature review and interviews with current and former Puerto Rico officials, federal officials, and other relevant experts, GAO identified factors that contributed to Puerto Rico's persistent deficits:

The Puerto Rico government's inadequate financial management and oversight practices. For example, the Puerto Rico government frequently overestimated the amount of revenue it would collect and Puerto Rico's agencies regularly spent more than the amounts Puerto Rico's legislature appropriated for a given fiscal year.

Policy decisions by Puerto Rico's government. For example, Puerto Rico borrowed funds to balance budgets and insufficiently addressed public pension funding shortfalls.

Puerto Rico's prolonged economic contraction. Examples of factors contributing to the contraction include outmigration and the resulting diminished labor force, and the high cost of importing goods and energy.

Additional factors enabled Puerto Rico to use debt to finance its deficits, such as high demand for Puerto Rico debt. One cause of high demand was that under federal law, income from Puerto Rico bonds generally receives more favorable tax treatment than income from bonds issued by states and their localities. Based on an assessment of relevant literature and input from current and former Puerto Rico officials, federal officials, and other relevant experts, GAO identified three potential federal actions that may help address some of these factors. GAO also identified considerations for policymakers related to these actions.

Modify the tax exempt status for Puerto Rico municipal debt.
Making interest income from Puerto Rico bonds earned by investors residing outside of Puerto Rico subject to applicable state and local taxes could lower demand for Puerto Rico debt. However, reduced demand could hinder Puerto Rico's ability to borrow funds for capital investments or liquidity. Apply federal investor protection laws to Puerto Rico. Requiring Puerto Rico investment companies to disclose risks with Puerto Rico bonds and adhere to other requirements could lower demand for the bonds. However, this action could also limit Puerto Rico's ability to borrow funds. Modify the Securities and Exchange Commission's (SEC) authority over municipal bond disclosure requirements. SEC could be allowed to require timely disclosure of materials—such as audited financial statements—associated with municipal bonds. Over the past decade, Puerto Rico often failed to provide timely audited financial statements related to its municipal bonds. Timely disclosure could help investors make informed decisions about investing in municipal bonds. However, a broad requirement could place additional burdens on all U.S. municipal issuers, such as the costs of standardizing reporting.
Background The Coast Guard is required to develop, establish, maintain, and operate rescue facilities for the promotion of safety and may aid distressed persons, and protect and save property in waters subject to the jurisdiction of the United States. To carry out its responsibilities, the Coast Guard maintains a search and rescue system on the Atlantic, Pacific, and Gulf coasts; the Great Lakes; and other inland lakes and waterways. This system consists of about 190 boat stations, 183 of which are located in the contiguous United States. The Coast Guard also operates aircraft from 24 air stations and four air facilities. As of August 2017, these stations and facilities operated about 700 boats and about 200 aircraft. In fiscal year 2016, the Coast Guard reported that its SAR operations saved 5,174 lives and protected more than $63 million in property from loss. Laws Governing the Optimization of the Coast Guard’s Boat Station, Air Station, and Air Facility Locations The Coast Guard’s boat stations, air stations, and air facilities are subject to laws which require the Coast Guard to maintain specific minimum capabilities—such as a requirement to maintain at least one vessel at each station that is fully capable of operating within the prevailing weather and marine conditions in that station’s area of responsibility. In addition to maintaining capabilities requirements, if the Coast Guard reevaluates its station location needs and intends to close a boat station, air station, or air facility, it also must follow a statutorily defined process, which includes making a determination that adequate SAR coverage will remain in place. To close an air facility, the Coast Guard must also submit a proposal to close the facility to Congress in the President’s annual budget and notify members of Congress who represent the impacted communities, as well as certain committees. The Coast Guard’s Structure and Stations That Conduct Search and Rescue The Coast Guard’s field structure is divided into two Area Commands, Atlantic and Pacific, within which are nine Districts consisting of 37 Sectors and the stations within them (see figure 1). Stations are traditionally associated with search and rescue but they may perform the full range of Coast Guard missions. Coast Guard personnel live and work at or near their stations so they can rapidly respond to emergencies as they arise. This model facilitates the Coast Guard’s search and rescue response resource planning standard. Under this SAR standard, Coast Guard plans for its units with SAR responsibilities to arrive on the scene of a case within 2 hours of receiving a distress call. Stations vary in their mission mix and pace of operations (i.e., operational tempo) by geographic region or District, and by season. For example, Coast Guard boat stations in D7 (Florida, Puerto Rico, South Carolina, and the Caribbean) commonly conduct migrant interdiction operations, whereas boat stations located along the Great Lakes (D9) rarely conduct this mission. In some locations, SAR cases may be more common during the summer boating season than in the winter. Stations in D9 have a shorter boating season than stations in D7. According to Coast Guard officials, while D7 has more total SAR cases than D9, cases in D9 are concentrated in a shorter time period than in D7 (i.e., shorter boating season). Boat stations also vary widely in size and function. 
For example, Station New York in New York City has an authorized strength of 88 personnel, whereas Station Frankfort in Frankfort, Michigan, has an authorized strength of 15 personnel. Both stations perform SAR and other missions, but Station New York also conducts a high level of homeland security missions, while Station Frankfort provides ice rescue capability during the winter. Additionally, the Coast Guard operates 18 seasonal boat stations called “Stations (Small),” which are detached subunits of larger parent stations; the Coast Guard generally operates these during the summer boating season. How the Coast Guard Conducts a Search and Rescue Mission When the Coast Guard receives notification of a distressed mariner, a search and rescue mission coordinator evaluates the case and assigns assets, such as boats or aircraft, to respond. Cases may involve multiple assets depending on the complexity of the case, such as the need to locate a mariner whose position is only generally known or to operate in severe weather conditions. Figure 2 depicts the general steps for conducting a SAR case. The Coast Guard uses several different types of assets to carry out its search and rescue and other missions. These assets include boats, rotary wing aircraft (helicopters), fixed wing aircraft (planes), and cutters (including patrol boats and ships). Additional details regarding some of these assets, including boat speeds, are described in appendix II. Prior GAO Work on Station Optimization Over time, the need for Coast Guard stations at particular locations has changed due to changes in Coast Guard asset capabilities, boating activity, boating equipment, safety technology, and the capabilities of other search and rescue service providers, such as private towing firms. However, the Coast Guard’s decisions to close or reduce operations at boat stations based on changing conditions or budget reductions have been sensitive. We previously reported that these sensitivities were based on the perception that reducing operations or closing stations would reduce the agency’s ability to save lives and property. In 1990, we reported that the Coast Guard’s attempts to close stations in 1988 were not successful because the Coast Guard did not have policies or procedures for what criteria should be used or how the criteria should be applied, and because the Coast Guard applied its evaluation criteria to a limited universe—only 34 stations instead of all stations. We also found that the Coast Guard did not adequately address how closing stations would impact the Coast Guard’s effectiveness in saving lives or performing other missions. In 1994, we reported that the Coast Guard had created a new process for determining the need for boat station changes. We also found that the new process included detailed criteria to evaluate the appropriate need for stations, such as boating and economic trends and the availability of alternative SAR resources. The Coast Guard then unsuccessfully attempted to close stations in 1995 using this process, and again in 2008, efforts which we describe later in this report. Prior Work on Fragmentation, Overlap, and Duplication In 2010, federal law required that we identify programs, agencies, offices, and initiatives with duplicative goals and activities within departments and government-wide, and report annually. 
The annual reports describe areas in which we have found evidence of fragmentation, overlap, or duplication among federal programs, and our work in this area has resulted in $136 billion in financial benefits for the federal government. Figure 3 outlines the definitions we have used since 2011 in our work to address fragmentation, overlap, and duplication.

Coast Guard Has a Sound Process for Analyzing the Need for Boat Stations and the Results Identified Overlap and Unnecessary Duplication

The Coast Guard has a sound process for analyzing the need for boat stations that is consistent with GAO's Program Evaluation guidance, which calls for choosing well-regarded criteria against which to make comparisons in order to achieve strong, defensible conclusions. The primary criteria Coast Guard subject matter experts established, consistent with statutory requirements that the Coast Guard make a determination that adequate SAR coverage would remain in place, were that (1) a minimum threshold of overlapping SAR coverage had to be maintained and (2) the Coast Guard's ability to meet its nationwide 2-hour SAR response standard had to be maintained. By applying these criteria, the Coast Guard's process identified overlapping search and rescue coverage where three or more stations can respond to a single SAR case within 2 hours, and unnecessary duplication where stations could be closed without negatively impacting the Coast Guard's ability to meet mission requirements, such as its 2-hour SAR response standard. In June 2012, the Coast Guard established a Station Optimization Process Charter that called for the Coast Guard to develop a defendable process with criteria for analyzing stations for potential closure. The charter stated, and Coast Guard officials confirmed, that the process was developed to ensure that closure recommendations would be based on solid justifications for the stations selected and would stand up to rigorous scrutiny. The charter called for (1) the process to be data driven; (2) criteria to be applied consistently; (3) consideration of previous GAO recommendations on assessing stations for closure; and (4) adherence to statutory requirements to conduct outreach to affected communities. The Coast Guard then established a working group of subject matter experts who developed a Station Optimization Process with nine analytical steps. The Station Optimization Process included criteria for analyzing the need for boat stations based on data analysis, consistent application of criteria, and legal requirements. Figure 4 shows the Station Optimization Process and its nine steps.

Coast Guard Used Its Station Optimization Process to Analyze Boat Stations and Identified Overlap and Unnecessary Duplication

In April 2013, the Coast Guard initiated its 9-step Station Optimization Process to analyze its boat stations, and the results identified 18 stations that could be closed because they provide overlapping and unnecessarily duplicative SAR coverage. The Coast Guard hired a contractor to carry out the analysis and identify potential cost savings from permanent closures of such stations. Although focused on SAR coverage, the process also included consideration of all Coast Guard missions carried out at these stations. The contractor followed the 9-step process (certain steps, such as step 1, which analyzed the system and identified overlapping SAR coverage, were conducted by the Coast Guard) and developed and ranked different closure options to maximize cost savings.
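The sketch below illustrates, in simplified form, the overlap screening concept described above: for a given point on the water, count how many stations could reach it within the 2-hour response standard and flag triple or greater coverage. The station coordinates, assumed boat speed, and distance calculation are hypothetical illustrations only; they are not the contractor's actual model, which accounted for many additional operational factors.

```python
# Minimal sketch of 2-hour SAR coverage overlap screening.
# Station locations and the planning boat speed are hypothetical; the Coast
# Guard's contractor model considered many additional factors (weather, asset
# availability, non-SAR missions) that are omitted here.
import math

RESPONSE_STANDARD_HOURS = 2.0
BOAT_SPEED_KNOTS = 25.0          # assumed planning speed, not an official figure
EARTH_RADIUS_NM = 3440.065       # nautical miles

def distance_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_NM * math.asin(math.sqrt(a))

def coverage_count(point, stations):
    """Number of stations that can reach `point` within the response standard."""
    reachable_nm = BOAT_SPEED_KNOTS * RESPONSE_STANDARD_HOURS
    return sum(
        1 for s in stations
        if distance_nm(point[0], point[1], s["lat"], s["lon"]) <= reachable_nm
    )

# Hypothetical station positions (latitude, longitude).
stations = [
    {"name": "Station A", "lat": 43.0, "lon": -87.9},
    {"name": "Station B", "lat": 43.4, "lon": -87.8},
    {"name": "Station C", "lat": 42.7, "lon": -87.8},
]

point = (43.1, -87.7)  # hypothetical offshore location
n = coverage_count(point, stations)
print(f"Stations able to respond within 2 hours: {n}")
print("Triple or greater coverage" if n >= 3 else "Less than triple coverage")
```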
Coast Guard officials provided additional district input on unique characteristics of certain stations to further refine the closure options. The final study identified 18 stations for closure that it estimated would achieve cost savings without impeding the Coast Guard’s ability to meet its SAR response standard and carry out its other missions. We discuss this further later in this report. The Coast Guard considers some overlap or redundancy to be necessary, to account for such things as operational challenges, boat maintenance downtime, personnel training requirements, and the need for surge capacity to respond to certain incidents. Therefore, the Coast Guard directed the contractor to analyze areas with triple or greater station coverage as its baseline for analyzing whether stations were unnecessarily duplicative. Based on the Coast Guard’s review of this coverage, it determined that the greatest extent of overlapping coverage existed in Districts 1, 5, and 9, and directed the contractor to focus on stations in those areas. Figure 5 shows the extent of overlapping Coast Guard boat station SAR coverage as of September 2013 that was used for the contractor study and is still accurate as of May 2017. It shows for Districts 1, 5, and 9, up to quadruple or greater SAR coverage provided by boat stations with overlapping response capabilities. According to the Coast Guard, in an attempt to be conservative in maintaining SAR coverage, the optimization process did not consider the use of Coast Guard air assets such as helicopters—an additional layer of coverage— nor did it consider the availability of some local agencies that respond to SAR cases, such as police departments and emergency responders. Therefore, overlapping coverage depicted in figure 5 excludes air asset responses and any responses or assistance provided by state and local agencies. The extent of coverage in 2017 was the same as the Coast Guard’s 2013 contractor study reported. We determined that the actions taken to complete the station optimization process are sound, consistent with our Program Evaluation guidance which calls for, among other things, evaluating programs based on well- regarded criteria to achieve strong, defensible conclusions. In addition to using the 2-hour response standard as a criterion, the optimization steps identified actions to systematically analyze quantitative measures using a documented ranking system to remove critical stations from consideration for closure. For example, step 4 of the process evaluated the number of security boardings conducted by selected stations, among other metrics, and removed certain stations for consideration from closure based on a systematic application of criteria related to other mission responsibilities. Further, as described in table 1, the process began with consideration of all boat stations in the contiguous United States, included steps to ensure that data were reliable and appropriate, clearly identified limitations of the analysis, and conducted simulations to assess how well the Coast Guard would be prepared to carry out its responsibilities under different closure alternatives, such as whether a station closure reduces or changes the Coast Guard’s ability to meet its response standard—all actions included in our Program Evaluation guidance. Table 1 provides details of actions taken by the contractor and the Coast Guard to complete the 9-step station optimization process. 
Additional District Input Helped Refine List of Closure Recommendations Consistent with the 9-step optimization process and to validate the closure scenario results, the contractor and Coast Guard Headquarters obtained regional input from district officials to gain context about the stations under consideration for closure such as unique rescue characteristics that were not quantifiable. Coast Guard officials within Districts 1, 5, and 9 generally supported the contractor recommendations to close some stations, with a few exceptions. For example, District 1’s input stated that one station recommended for closure by the contractor analysis had a unique surf rescue capability that was not available at adjacent or other nearby stations and thus this station did not provide unnecessarily duplicative SAR coverage since no nearby station could provide this capability. Thus, District 1 recommended that the station remain open. Given this input, the contractor removed this station from consideration for closure. In another example, District 5 officials reported that closure of one of its stations would increase response times from adjacent stations due to the presence of shoaling and barrier island conditions that could not be accounted for in the quantitative modeling. Therefore, the contractor eliminated that station from consideration for closure and recommended an alternative station for closure. This process of obtaining regional input and validation from district officials was carried out such that if a station identified for closure would negatively impact critical missions, it was removed from closure consideration. This additional district input resulted in a final contractor study that recommended station closures that would achieve the greatest cost saving without negatively impacting the Coast Guard’s ability to meet mission requirements. In addition to identifying stations with unique characteristics that warranted keeping them open, additional district input also confirmed contractor recommendations that some stations should be permanently closed. For example, District 5’s input concurred with the closure of six stations, including one where officials we interviewed on site confirmed its steadily diminishing SAR caseload. Our analysis of Coast Guard data validated this station’s low workload showing an average of seven single- boat response SAR cases annually from fiscal years 2010 through 2016. We also found that this station had been recommended for closure in the past. In another example, District 9 input sought an additional, seasonal closure of one station that the contractor analysis did not evaluate for permanent closure due to one criterion applied by the process. District 9’s input provided additional context for this station, saying that seasonal closure was preferable to taking no action because there was significant response redundancy in this region. Moreover, the district input noted that the acquisition of modern boats has increased the range and reduced the response time of many stations. District input also noted that improvements in public education and awareness of safe boating practices, technology and availability of communications equipment, and the increase in non-Coast Guard response resources has resulted in a steady and dramatic decline in the stations’ SAR workloads. 
Our analysis of all Coast Guard single-boat response data for cases within the contiguous United States for fiscal years 2010 through 2016 confirmed this decline, showing an annual average of 46 cases per station in 2010 to an annual average of 39 cases per station in 2016, a decline of about 15 percent. Appendix IV provides details from our analysis of the number of single-boat response SAR cases conducted by selected stations. A 2014 Analysis of Selected Coast Guard Air Stations and Air Facilities Identified Unnecessary Duplication but Coast Guard Would Benefit from a Comprehensive Process In 2014, the Coast Guard contracted for an analysis of selected air stations and air facilities that identified overlap and unnecessary duplication but it did not comprehensively review all air stations and air facilities. Specifically, the criteria-based analysis reviewed search and rescue capabilities, operational case data, and other mission requirements, and determined that certain air facilities provided overlapping search and rescue coverage, some of which was unnecessarily duplicative. Coast Guard officials said they used the results of this analysis to support proposed closures of air facilities in Newport, Oregon, and Charleston, South Carolina, in the President’s Fiscal Year 2014 Budget. Subsequent appropriations for fiscal year 2014 also did not include funding for the operation of the two air facilities. However, shortly before their planned closure date, the Coast Guard encountered strong opposition to the closures at the local, state, and Congressional levels, and did not close them. As with boat stations, the Coast Guard considers some overlapping coverage among air stations and air facilities desirable to mitigate potential risks such as those posed by asset maintenance downtime, limitations in the number of qualified personnel, restrictive weather conditions, or case complexity. Coast Guard officials stated that the 2014 analysis considered many factors to address potential impacts of the closure of the Newport and Charleston air facilities. For example, the Coast Guard used modeling tools to determine the operational impact of altering facility locations and the availability of aviation assets. Coast Guard officials told us they also conducted outreach to the affected communities and their political representatives in advance of the proposed closure date, as required by law. Further, Coast Guard officials explained that the fiscal outlook at the time (e.g., sequestration) required changes to optimize assets, and their proposal accomplished this without sacrificing operational capability because the response time of neighboring SAR units would remain within the Coast Guard’s SAR standards. The 2014 analysis also determined that the majority of SAR cases involving these two facilities occurred close to shore, with boat responses generally arriving on scene and conducting the search and rescue instead of air assets. Circles in figure 6 represent air asset response capabilities nationwide, as of August 2017, with darker shades reflecting greater overlapping coverage. In 2014 and 2016, two laws were enacted that required the Coast Guard to keep the air facilities open for a specific period of time, and established a number of requirements the Coast Guard is required to follow if it proposes closing or terminating operations at its air facilities. Thus, the two air facilities remained open. 
As of May 2017, Coast Guard officials told us they have no plan to close air facilities or air stations, nor do they plan to develop a process to comprehensively review air stations or facilities to optimize their locations, because previous attempts to close stations or facilities have been prohibited by law or made subject to certain requirements. However, the Coast Guard has responsibility for evaluating the need for its air stations and air facilities to ensure that it is using resources as effectively and efficiently as possible. The Coast Guard's station optimization charter calls for a defendable (i.e., sound) and data-driven analysis of boat stations that meets statutory requirements. This charter could be a template for establishing a parallel process for comprehensively analyzing the need for its air stations and air facilities. GAO's Program Evaluation guidance calls for evaluating programs based on well-regarded criteria to achieve strong, defensible conclusions. Program evaluations can also provide accountability for the use of public resources (e.g., to determine the "value added" by the expenditure of those resources), such as whether scarce resources are being spent on unnecessarily duplicative air facilities. Having a sound and reproducible process for comprehensively analyzing the need for air stations and air facilities will better position the Coast Guard to make decisions to enhance the efficiency of its operations and more effectively allocate its resources. These actions will also better inform Congress as to the status of the Coast Guard's resource needs and the efficiency of its operations.

Coast Guard Has Not Taken Actions nor Developed a Plan to Close Unnecessarily Duplicative Stations Its Analyses Identified

The 2013 analysis of Coast Guard stations identified unnecessary duplication and recommended certain stations for potential closure; however, as of August 2017 the Coast Guard had not closed any stations, nor had it developed a plan with time frames for closing stations. In their input to the station optimization process, Coast Guard officials in affected districts supported the recommended station closures to achieve operational improvements, and Coast Guard leadership continues to believe the study results are valid. Implementing station closures could also result in cost savings.

Coast Guard Has Attempted to Close Stations At Least Eight Times since 1973

The need to close some Coast Guard stations that provide unnecessarily duplicative SAR coverage, in order to respond efficiently to changed circumstances such as improved technology, is not a new issue. Coast Guard officials reported, and our prior work has shown, that the Coast Guard has attempted to permanently or seasonally close stations at least eight times since 1973. However, closing unneeded stations has historically been difficult due to public concern about the effect of closures on local communities and other factors. In some cases over the years, Congress has intervened and enacted federal laws that have affected the Coast Guard's proposed closures. For example, in 1988 the Department of Transportation and Related Agencies Appropriations Act, 1989, required the Coast Guard to reopen boat stations 1 year after they had been closed, and at the same time provided that GAO was to evaluate the methods behind the Coast Guard's decision. Responding to this provision in 1990, we reported that the Coast Guard's 1988 closure decisions were based on flawed methods, incomplete analysis, and incomplete data.
The Coast Guard subsequently updated its process, and by 1994 we reported that it was using a reasonable approach to recommend stations for closure. Despite the improved Coast Guard process, no stations have been closed since 1988. Coast Guard officials reported that Congress continues to oversee and manage the closure of stations; for example, after the Coast Guard attempted to close 23 stations in 1995, the Coast Guard Authorization Act of 1996 established new requirements that must be met to change any boat stations. Similarly, after the Coast Guard attempted to close two air facilities in 2014, legislation was passed in 2014 and 2016 that prohibited Coast Guard air facility closures until January 2016 and 2018, respectively. Figure 7 provides a timeline of Coast Guard station change proposals or actions, including at least eight Coast Guard attempts to close stations between 1973 and 2014. The figure also includes statutory requirements established in 1989, 1996, 2014, and 2016, and two data-driven analyses and studies with recommendations to address unnecessary duplication, among other information. Past Coast Guard efforts to analyze and close stations have frequently identified the same stations as candidates for closure. For example, prior to the 2013 contractor study, at least two Coast Guard districts conducted their own station analyses to identify opportunities to improve their stations' operations. These analyses also recommended permanent and seasonal closures of some stations. Specifically, in 2010, Coast Guard District 9 began conducting a data-driven analysis of its stations to optimize its boat forces. District 9 officials told us they initiated the analysis due to budget constraints, the challenges they had in fully staffing their stations, and their awareness of overlapping SAR coverage within the district. District 9's analysis reviewed more than 16,000 SAR cases over a 5-year period (2008–2012) to understand and quantify potential response inefficiencies. According to Coast Guard officials, their analysis determined that the overall SAR caseload in District 9 was extremely high in the summer months, but there was little or no SAR caseload for some stations during the winter, a factor which also affected training proficiency because personnel were not able to respond to enough cases to maintain required qualifications. Based on the results of this analysis, in December 2012, District 9 requested approval to permanently close five stations and seasonally close three stations to achieve more effective operations and improve maritime safety in the Great Lakes region. According to Coast Guard district officials, these recommended closures provided no calculated savings to taxpayers because they involved movement of personnel positions and assets to other stations, not their elimination. Instead, the recommendations reflected an effort to improve operational efficiency and conserve Coast Guard resources. Furthermore, among those stations in Districts 1, 5, and 9 recommended for permanent closure in 2013, at least five—Ashtabula, Ohio; Frankfort, Michigan; Harbor Beach, Michigan; Shark River, New Jersey; and Block Island, Rhode Island—were also recommended for closure between 1985 and 1988.
When we compared the 2012 recommendations from the District 9 analysis, the 2013 contractor analysis recommendations that used the 9-step Station Optimization Process, and additional 2013 district input, we found similar results among the various analyses with respect to which stations should be permanently or seasonally closed. Based on our review of documentation and interviews with District 9 officials, as well as our comparison of the results of the District 9 analysis with the results of the contractor analysis, we determined that the District 9 analysis affirms the 2013 recommendations. We provide a comparison of selected recommendations and Coast Guard Headquarters' tentatively planned actions in table 2.

District 9 and Station Input Supported Recommended Permanent and Seasonal Station Closures

Input from District 9, which had the greatest number of affected stations in the 2013 analysis, supported the recommended changes and stated that "the existing unnecessary redundancies, unsustainable complexities, and unacceptable resource gaps negatively affected mission execution in the Great Lakes, where staffing shortfalls exist." District 9's input further stated that in some regions, four stations could respond to SAR cases within the Coast Guard's SAR standard, and that while some redundancy is merited, these areas demonstrate redundancy that is operationally unnecessary, inefficient, and detrimental to the training needs of station personnel. Our interview with officials at one affected station confirmed some of the complexities facing the region. For example, officials told us that because one station recommended for seasonal closure does not operate a boat capable of offshore SAR responses, adjacent stations are already directed to respond to certain offshore SAR cases in that station's area of responsibility to meet the Coast Guard's 2-hour SAR standard. Officials we interviewed from each of the seven stations we visited in District 9 noted their station's high SAR caseload concentration during the summer months and the low or nonexistent SAR caseload during the winter. For example, officials from two stations that the Coast Guard would like to seasonally close during the winter told us that their stations had not responded to an ice rescue in more than 7 years. Officials we interviewed at one station recommended for permanent closure noted that commercial boating traffic and the local population have been declining for many years, that the station was not busy during the winter season, and that the station had not conducted an ice rescue since 2002. In 2017, the Coast Guard affirmed that its leadership believed the results of the 2013 study remained valid because station workloads have remained relatively consistent. Headquarters officials also told us that the 2013 study criteria and subsequent recommendations for permanent closures were conservative because of previous unsuccessful attempts to close stations, and to meet statutory requirements to maintain a certain level of SAR coverage. They also told us that the analysis did not consider additional layers of response, even though these layers could provide some SAR response backup for Coast Guard stations. For example, the contractor's analyses of boat stations did not consider SAR support provided by Coast Guard aviation assets, which generally provide an additional layer of SAR coverage for boat stations.
Moreover, district officials told us that aviation assets in District 9 were recently realigned to provide even greater response capability, including longer range helicopters with de-icing capability to improve winter response capability. The contractor analysis also did not take into account the potential SAR capabilities of commercial towing operators and local first responders which can also provide another layer of coverage to assist Coast Guard stations with SAR coverage. For example, officials from each of the seven stations we visited in District 9 told us that they coordinate with other entities, such as commercial towing operators, who can conduct responses for non-life-threatening incidents, such as providing fuel to or towing disabled boats in their station’s area of responsibility. Officials from one station also told us that the local fire department has performed ice rescues in the past, because people who require ice rescues tend to dial 911 first rather than call the Coast Guard, and thus local emergency responders are able to respond faster than the Coast Guard. Officials from another station told us that the local sheriff has two response boats, and that the Coast Guard coordinates with local government and responders. Station Closures Could Achieve Cost Savings Station closures could also achieve cost savings in addition to the aforementioned efficiency improvements. For example, based on our analysis of the contractor study, if its recommendations to permanently close the 18 stations from D1, D5, and D9 were implemented, and personnel and boat assets were moved or reduced in accordance with the study recommendations, the study reported that these closures could achieve potential cost savings of about $290 million over 20 years. In addition, land disposition estimates were excluded from the study, which could result in one-time proceeds from the sale of the land on which the stations are sited, if the land value exceeded remediation costs. In addition to lost opportunities to improve operational efficiency and effectiveness because stations were not closed previously, some of these stations have also fallen into physical disrepair and will require funding for repairs if the stations remain open, even if they are only operated seasonally. For example, officials at one station we visited showed us a boat dock that was improperly installed and thus was subsequently damaged by waves and will need to be repaired or replaced. At this same station, officials informed us that the furnace system requires daily, manual adjustments to address temperature fluctuations that could cause damage to the station. One official also told us that this station’s building structure is too big and costly, and its condition too poor, to be worth keeping. Therefore, even if this station were seasonally closed, as currently recommended—despite the analysis results suggesting permanent closure—the station will continue to require personnel to be at the station on a daily basis year round. Another station, which multiple studies recommended for permanent closure because of unnecessary duplication and a caseload insufficient to sustain the training requirements of personnel stationed there, was rebuilt as a result of extensive damage from Hurricane Sandy. According to Coast Guard budget data, more than $2.3 million was expended to restore this station as of March 2017 using funds appropriated by the supplemental appropriations act enacted in response to Hurricane Sandy. 
Actions Needed to Address Unnecessary Duplication Given the extent of overlapping SAR coverage identified by the Coast Guard’s analyses and its attempts to address unnecessary duplication, we considered the stations’ levels of overlapping coverage in the context of the definitions we use for identifying overlap and duplication. Figure 8 depicts the extent of the Coast Guard’s overlapping boat and air station SAR coverage, with darker shading representing greater overlapping coverage, some of which the Coast Guard determined to be unnecessarily duplicative. Boat station coverage is represented by shading while aviation coverage is shown by the largest circle sizes. In April 2016, the Coast Guard completed statutory requirements associated with closing eight stations in District 9 by conducting outreach to regional and local communities that would be affected by seasonal closures. The Coast Guard held these meetings to explain why it was necessary to optimize station locations and reallocate personnel from closed stations to their adjacent stations; address overlapping SAR coverage; and seasonally close unnecessarily duplicative stations. Coast Guard officials from one station told us they held a public meeting with the local fire department, police, and commercial towing operators to describe planned changes for seasonal operations at the station, despite this station having been recommended for permanent closure by studies and district input. According to Coast Guard officials, while some local responders in the District 9 area expressed some concerns, they understood the need for change. In addition, according to headquarters officials, the Coast Guard has also completed outreach efforts with members of Congress who represent these communities. They further stated that they plan to follow the same outreach process when they finalize decisions about whether to permanently or seasonally close stations in Districts 1 and 5. The Coast Guard has not taken action to implement the results of its analyses which recommended closures even though it has completed requirements to pursue station closures in District 9. Officials stated that the Coast Guard has not implemented the results of its sound process because past station closure efforts have been met with resistance from affected communities. As a result, Coast Guard leadership decided to pursue a more cautious approach by maintaining seasonal daily operations rather than closing stations outright as recommended in multiple analyses. Standards for Internal Control in the Federal Government state that agencies should have policies and procedures for ensuring that findings of audits or other reviews, such as the Coast Guard’s 2013 station optimization study, are promptly resolved. The guidance further states that managers are to (1) correct identified deficiencies, (2) produce improvements, or (3) demonstrate that the findings and recommendations do not warrant management action. Coast Guard officials stated they recognize that their planned actions do not fully match the identified recommendations, but given historical challenges with closing stations, seasonal closures are preferable to taking no action. In March 2017, Coast Guard officials told us they intended to begin the process for seasonal closures of stations in District 9 at the end of the 2017 boating season while actions in Districts 1 and 5 are pending as the Coast Guard has not finalized its decisions about these stations. 
The Project Management Institute’s Standard for Program Management describes, among other things, how resource planning, goals, and milestones are good practices that can enhance management for most programs. By executing decisions to close stations based on the results of its analyses and developing a plan with milestones to execute actions it has identified to address unnecessary duplication, the Coast Guard will be better positioned to follow through with both permanent and seasonal closures of unnecessary stations, can improve its operational and training proficiency by consolidating the remaining stations’ workloads to allow for sufficient training, and may realize cost savings. Conclusions The Coast Guard’s 2013 analysis, based on a sound, data-driven process that applied established criteria—its 2-hour SAR response standard— supports permanently closing some boat stations. Nevertheless, Coast Guard officials do not intend to follow the recommendations to permanently close the stations the study recommended, due, in part, to views expressed by community representatives. The Coast Guard’s 2014 air station and air facilities study also supported closing two air facilities and was criteria-based, but was not comprehensive. An optimization process similar to that applied to boat stations could make a better case for closing selected air stations and air facilities, if it is methodologically sound. The need to close Coast Guard stations that provide unnecessary duplication of SAR coverage, in response to changing circumstances, is not a new issue. Closing unneeded stations has historically been difficult, but with improvements in technology, severely decreased workloads, and continuing budget constraints, the importance of reevaluating the operations of these stations is even greater. In addition to lost opportunities to improve operational efficiency and effectiveness that would be gained by closing unnecessary stations, some of these stations have fallen into physical disrepair and will require funding for repairs if they remain open. Given these factors, Coast Guard action is clearly warranted. Recommendations for Executive Action We are recommending the following three actions to the Coast Guard: The Commandant of the Coast Guard should establish and follow a sound air station optimization process similar to its process for analyzing boat stations to allow it to comprehensively analyze its need for air stations and air facilities and determine what changes may be needed. (Recommendation 1) The Commandant of the Coast Guard should establish a plan with target dates and milestones for closing boat stations that it has determined, through its 9-step process and subsequent analysis, provide overlapping search and rescue coverage and are unnecessarily duplicative. (Recommendation 2) The Commandant of the Coast Guard should take action to close the stations identified according to its plan and target dates. (Recommendation 3) Agency Comments and Our Evaluation We provided a draft of this report to DHS for review and comment. In its comments, reproduced in appendix V, DHS concurred with our recommendations. DHS, through the Coast Guard, also provided technical comments, which we incorporated as appropriate. DHS concurred with our first recommendation that the Coast Guard establish and follow a sound air station optimization process similar to its process for analyzing boat stations so it may comprehensively analyze its air station and air facility needs. 
DHS further stated that the Coast Guard would utilize its fiscal year 2020 Planning, Programming, Budget, and Execution cycle to identify efficiencies in air station optimization using best practices employed in its boat station optimization efforts. DHS expects this effort to be completed in September 2019. DHS concurred with our second recommendation that the Coast Guard establish a plan with target dates and milestones for closing boat stations that it has determined provide overlapping search and rescue coverage and are unnecessarily duplicative. DHS stated that Coast Guard headquarters and appropriate district commands will continue to analyze operational coverage across the nation through the 9-step optimization process and recommend closures or seasonalization (e.g., seasonal closures) of boat stations to eliminate unnecessary duplication and overlap in search and rescue coverage. The Coast Guard’s internal analysis is expected to be completed in September 2020. DHS concurred with our third recommendation that the Coast Guard take action to close the identified stations according to its plan and target dates, stating that Coast Guard headquarters personnel and appropriate district commands will continue to analyze closing or seasonalizing operations at boat stations identified according to its plan and target dates. DHS further stated that it must complete implementation of the second recommendation before beginning to implement the third and that the estimated completion date for the third recommendation was to be determined. Given the robustness of the Coast Guard’s review process and the clear results showing unnecessary duplication among some boat stations, in addition to other valid analyses completed in previous years supporting the closure of unneeded boat stations, the Coast Guard should move forward with minimal delay to implement this third recommendation, once the plan as outlined in the second recommendation is completed. We will continue to monitor the Coast Guard’s actions to close unnecessarily duplicative stations in a timely manner through our annual report on duplication, overlap, and fragmentation in the federal government. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. Appendix I: Scope and Methodology To identify the extent to which the U.S. Coast Guard (Coast Guard) has a sound process for analyzing the need for its boat stations, we reviewed laws, policies, and procedures related to its search and rescue (SAR) mission. We reviewed Coast Guard documentation of processes it used to analyze the need for boat stations, reviewed resource and budget factors, and analyzed station activity levels. We also reviewed prior GAO reports on the Coast Guard’s resource allocation process and its previous attempts to close stations. To verify and validate the Coast Guard’s specific analytical process used to determine overlapping coverage, we obtained and analyzed the Coast Guard’s analytical assumptions, including the operational parameters of the assets assigned to the stations (e.g., boat speeds), and station locations. This analysis also allowed us to verify the soundness of the Coast Guard’s model used to identify overlap. 
We then independently recreated and visually depicted overlapping SAR coverage provided by the stations, based on Coast Guard data, assumptions, and documentation, and compared it with SAR case data by geographic area. We then analyzed Coast Guard data on single-boat SAR responses (sorties) by station for fiscal years 2010 through 2016, the most recent data available at the time of our review. We visited a nongeneralizable sample of 12 stations we selected from within districts where the Coast Guard had identified overlap, and interviewed officials to identify local policies, station characteristics, local coordination with emergency responders and federal agencies, and local input to the Coast Guard's process for assessing station needs and implementing changes to the locations of stations, if any. Additionally, we interviewed Coast Guard officials, including field and headquarters personnel, to determine the extent to which the Coast Guard had assessed maritime activity trends and leveraged resources from outside entities, such as local first responders, federal agencies, and private industry. We also interviewed Coast Guard officials to obtain information on the extent to which the Coast Guard used findings and recommendations from selected studies, strategies, and plans in its analyses of the need for its boat stations. To assess the reliability of Coast Guard SAR data, we interviewed knowledgeable officials, reviewed documentation, and electronically tested the data for obvious errors and anomalies. We interviewed Coast Guard officials to discuss the reliability issues we identified, such as the inability to attribute multi-boat SAR case responses to individual stations, as well as inconsistent data related to the types of boats used to conduct SAR cases. Regarding attributing multi-boat responses to individual stations, Coast Guard officials told us that some cases involve multiple boats and that the outcome of a SAR case may not be attributable to an individual station. Regarding boat assets used to conduct SAR cases, in February 2017, officials informed us that in 2015 the Coast Guard implemented changes to its Marine Information for Safety and Law Enforcement (MISLE) system and added around 500 controls, such as built-in data entry checks, to prevent potential data entry errors. Officials told us that this change could have caused some inconsistencies in how the data are captured, but that the implementation of the changes includes testing and ongoing actions to resolve the issues. We determined that the data were sufficiently reliable for the purpose of demonstrating selected station caseloads in this report. We compared Coast Guard actions to evaluate stations against criteria established in GAO's Designing Evaluations guidance, which calls for adhering to established evaluation design practices in order to achieve reliable results, as well as against the Coast Guard's SAR response standard and statutory requirements to conduct public outreach. To identify the extent to which the Coast Guard has a sound process to analyze the need for its air stations and air facilities, we reviewed laws, policies, and procedures related to its SAR mission. We reviewed Coast Guard documentation of processes it used to analyze the need for selected air facilities in 2014.
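As an aside on the electronic testing for obvious errors and anomalies mentioned earlier in this appendix, the sketch below shows the kind of basic checks that can flag problems in case-level records. The field names, rules, and thresholds are hypothetical illustrations; they do not reflect the actual MISLE schema or GAO's test procedures.

```python
# Minimal sketch of basic electronic checks for obvious errors and anomalies
# in case-level records. Field names and rules are hypothetical; they do not
# reflect the actual MISLE schema or GAO's test procedures.
from datetime import date

records = [
    {"case_id": "C-001", "station": "Station A", "case_date": date(2015, 7, 4), "boats_launched": 1},
    {"case_id": "C-002", "station": "Station B", "case_date": date(2015, 7, 5), "boats_launched": 1},
    {"case_id": "C-002", "station": "Station B", "case_date": date(2015, 7, 5), "boats_launched": 1},  # duplicate
    {"case_id": "C-003", "station": "",          "case_date": date(2025, 1, 1), "boats_launched": 2},  # missing station, out-of-range date
]

def check_records(records, start=date(2009, 10, 1), end=date(2016, 9, 30)):
    """Flag duplicate case IDs, missing fields, and out-of-range values."""
    findings = []
    seen = set()
    for r in records:
        if r["case_id"] in seen:
            findings.append((r["case_id"], "duplicate case ID"))
        seen.add(r["case_id"])
        if not r["station"]:
            findings.append((r["case_id"], "missing station"))
        if not (start <= r["case_date"] <= end):
            findings.append((r["case_id"], "case date outside fiscal years 2010-2016"))
        if r["boats_launched"] < 0:
            findings.append((r["case_id"], "negative boat count"))
    return findings

for case_id, issue in check_records(records):
    print(f"{case_id}: {issue}")
```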
We obtained and analyzed Coast Guard assumptions and station locations for determining overlapping SAR coverage in 2014 and used a mapping program to visually depict overlapping coverage provided by aviation assets, based on Coast Guard data, assumptions, and documentation. Additionally, we interviewed Coast Guard officials to obtain information on the extent to which the Coast Guard used findings and recommendations from selected studies, strategies, and plans in its analyses of the need for and locations of its air stations. We compared Coast Guard actions to evaluate air stations and air facilities against criteria established in GAO’s Designing Evaluations guidance which calls for adhering to established evaluation design practices in order to achieve reliable results, to determine if the Coast Guard’s methodological steps were sound. To determine the extent to which the Coast Guard has taken actions to implement the results of its analyses of its need for boat and air stations, we analyzed Coast Guard documents and reports to identify proposals put forth by the Coast Guard for permanently or seasonally closing stations it has identified as overlapping and unnecessary. We analyzed these proposed actions to determine whether proposed plans or decisions regarding stations aligned with the results of the Coast Guard analyses. Specifically, we reviewed the study reports, memoranda detailing district input on the results of the 2013 contractor study and their verification of the stations the study identified as unnecessarily duplicative, and compared the recommended closures from the various studies to determine if the outcomes were consistent. We also compared Coast Guard actions against its response standards and statutory requirements to conduct public outreach. Finally, we reviewed documents and information on these proposals and compared them against criteria in Standards for Internal Control in the Federal Government, and leading practices identified in the Project Management Institute’s Standard for Program Management. We conducted this performance audit from July 2016 through October 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Selected Coast Guard Assets The U.S. Coast Guard (Coast Guard) uses several different types of assets to carry out its missions, including search and rescue. Coast Guard assets include boats, rotary wing aircraft (helicopters), fixed wing aircraft (planes), and cutters (including patrol boats and ships). Boats The Coast Guard’s primary boat station search and rescue (SAR) assets are its boats, which it uses to conduct searches near shore and on inland waterways, such as harbors and bays that are too shallow for its larger cutters to access. Different boats have different capabilities (see table 3). For example, 47-foot motor life boats are slower than other boats, but can operate in heavy weather and up to 50 nautical miles offshore. Aircraft The Coast Guard operates two types of aircraft: rotary wing (helicopters) and fixed wing (airplanes). Rotary wing aircraft operate from air stations, air facilities, cutters equipped with flight decks, and other locations that can support flight operations. 
The Coast Guard uses its rotary wing aircraft for search and rescue in coastal waters, among other mission uses. Rotary wing aircraft can hover and are equipped with hoists, which can allow rescue of distressed individuals from the water. Fixed wing aircraft operate from Coast Guard air stations, air facilities, and airports, and are used to conduct over-water searches and other missions.

Cutters

Coast Guard cutters are ships 65 feet or longer. They operate under the control of District or Area commands. According to the Coast Guard, cutters are suitable for conducting extended search and rescue operations because of their high endurance, communications systems, and ability to operate in heavier weather than other assets. Cutters carry boats that can directly rescue mariners in distress. Cutters with flight decks can serve as launch platforms for helicopters, which can help with SAR operations. The Coast Guard generally allocates boats to stations based on the needs and conditions of those stations. The Coast Guard also has other types of boats in its inventory that are used for a variety of missions that may include SAR missions. Table 3 provides details of selected boats used for search and rescue.

Appendix III: Extent of Search and Rescue Coverage by Coast Guard Boat Stations in the Contiguous United States

Figures 9 through 12 show the extent of search and rescue coverage by U.S. Coast Guard (Coast Guard) boat stations in the contiguous United States and selected Coast Guard districts reported in September 2013. The extent of coverage in 2017 was the same as the Coast Guard's 2013 contractor study reported.

Appendix IV: Reported Single-Boat Search and Rescue Responses by Selected Stations, Fiscal Years 2010 through 2016

Table 4 provides details of selected U.S. Coast Guard (Coast Guard) stations recommended for permanent or seasonal closure and the search and rescue (SAR) caseloads they reported for fiscal years 2010 through 2016, as well as estimated fiscal year 2015 annual operating costs. Our analysis of Coast Guard SAR single-boat response case data from fiscal years 2010 through 2016 found that the 18 stations recommended for closure reported an average of about 15 single-boat SAR responses annually, compared to an annual average of about 41 single-boat responses for all boat stations. These numbers are based on station-reported data in the Coast Guard's Marine Information for Safety and Law Enforcement (MISLE) case management system and only include cases in which a single boat was launched to conduct a SAR mission. Some SAR missions result in multiple stations launching due to factors such as close proximity of stations, case complexity such as weather conditions, or other factors such as boat availability or training. Including multilaunch cases could result in double counting of SAR cases; therefore, these cases were excluded from our analysis. Due to flexibility in how Coast Guard stations report SAR responses, some seasonal stations, which are detached subunits of larger parent stations, report the number of cases to which they respond in combination with the parent station. Because we could not disaggregate this information, we do not report on individual cases from these stations. Table 5 provides details of selected stations recommended for permanent or seasonal closure and the SAR caseloads they reported during the winter months, for fiscal years 2010 through 2016.
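A minimal sketch of the kind of per-station averaging described above appears below: multi-boat launches are excluded to avoid double counting, and annual single-boat responses are averaged per station. The case records, station names, and year range are hypothetical toy data, not actual MISLE records or GAO's analysis code.

```python
# Minimal sketch of computing average annual single-boat SAR responses per
# station, excluding multi-boat launches to avoid double counting.
# The records below are hypothetical toy data, not actual MISLE records.
from collections import defaultdict

cases = [
    # (station, fiscal_year, boats_launched)
    ("Station A", 2010, 1), ("Station A", 2010, 1), ("Station A", 2011, 2),
    ("Station B", 2010, 1), ("Station B", 2011, 1), ("Station B", 2011, 1),
]

FISCAL_YEARS = range(2010, 2012)  # two fiscal years in this toy example

def average_single_boat_responses(cases, years):
    """Average annual single-boat responses per station over `years`."""
    counts = defaultdict(int)
    for station, fy, boats in cases:
        if boats == 1 and fy in years:      # exclude multi-boat launches
            counts[station] += 1
    n_years = len(years)
    return {station: total / n_years for station, total in counts.items()}

for station, avg in average_single_boat_responses(cases, FISCAL_YEARS).items():
    print(f"{station}: {avg:.1f} single-boat SAR responses per year")
```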
Appendix V: Comments from the Department of Homeland Security

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact above, Dawn Hoff (Assistant Director), Andrew Curry (Analyst-in-Charge), Chuck Bausell, Dorian Dunbar, Michele Fejfar, Peter Haderlein, Eric Hauswirth, Tracey King, John Mingus, Claire Peachey, and Christine San all made key contributions to this report.
Why GAO Did This Study The Coast Guard, within the Department of Homeland Security (DHS), is charged with preventing loss of life, injury, and property damage in the maritime environment through its SAR mission. It maintains over 200 stations with various assets, such as boats and helicopters (depending on the station), along U.S. coasts and inland waterways to carry out this mission, as well as its other missions such as maritime security. Resource limitations and changes to operations require the Coast Guard to periodically reexamine the need for these stations. GAO was asked to review these efforts. This report addresses, among other objectives, the extent to which the Coast Guard has (1) a sound process for analyzing the need for its boat stations and (2) taken actions to implement its boat station process results. GAO reviewed Coast Guard laws, standards, and guidance; analyzed Coast Guard data on station locations and SAR coverage; and analyzed the process and criteria used to evaluate its station needs and compared it with established evaluation design practices and internal control standards. GAO also interviewed Coast Guard officials. What GAO Found GAO found that the U.S. Coast Guard has a sound process for analyzing its boat stations that includes clear and specific steps for analyzing the need for stations using terms that can be readily defined and measured. In 2013, following this process, the Coast Guard and its contractor identified 18 unnecessarily duplicative boat stations with overlapping coverage that could be permanently closed without negatively affecting the Coast Guard's ability to meet its 2-hour search and rescue (SAR) response standard and other mission requirements. The process was designed to ensure the Coast Guard met or exceeded requirements to maintain SAR coverage and to account for such factors as boat downtime and surge capacity to respond to certain incidents. Further, the boat station analysis did not consider potential SAR responses by the Coast Guard's air stations and facilities, which can provide additional overlapping coverage. Coast Guard officials said that the closures would, among other things, help improve operations by consolidating boat station caseloads to help ensure personnel were active enough to maintain training requirements. In 2017, the Coast Guard affirmed that its leadership believes the 2013 study remains valid, but so far the agency has not taken actions to implement the closures identified by its sound process. Instead, the Coast Guard is recommending conversion of some year-round stations to seasonal stations that would operate during the summer. Coast Guard officials stated that seasonal closures are preferable to no action, given its limited resources, the significant overlapping SAR coverage, and potential to improve operations. However, permanently closing unnecessarily duplicative stations may better position the Coast Guard to improve its operations. It could also achieve up to $290 million in cost savings over 20 years, if stations were permanently closed. What GAO Recommends GAO is making three recommendations, including one recommendation that the Coast Guard close unnecessarily duplicative stations that its analysis identified. DHS concurred with the recommendations and stated it plans to act to eliminate unnecessary duplication.
Background Use of Performance Information in the Federal Government Concerned that the federal government was more focused on program activities and processes than the results to be achieved, Congress passed the Government Performance and Results Act of 1993 (GPRA). GPRA sought to focus federal agencies on performance by requiring agencies to develop long-term and annual goals, and measure and report on progress towards those goals annually. Based on our analyses of the act's implementation, we concluded in March 2004 that GPRA's requirements had laid a solid foundation for results-oriented management. At that time, we found that performance planning and measurement had slowly yet increasingly become a part of agencies' cultures. For example, managers reported having significantly more performance measures in 2003 than in 1997, when GPRA took effect government-wide. However, the benefit of collecting performance information is fully realized only when that information is actually used by managers to make decisions aimed at improving results. Although our 2003 survey found greater reported availability of performance information than in 1997, it also showed managers' use of that information for various management activities generally had remained unchanged. Based on those results, and in response to a request from Congress, in September 2005, we developed a framework intended to help agencies better incorporate performance information into their decision making. As shown in figure 1, we identified five leading practices that can promote the use of performance information for policy and program decisions, and four ways agency managers can use performance information to make program decisions aimed at improving results. Our September 2005 report also highlighted examples of how agencies had used performance information to improve results. For example, we described how the Department of Transportation's National Highway Traffic Safety Administration used performance information to identify, develop, and share effective strategies that increased national safety belt usage (which can decrease injuries and fatalities from traffic accidents) from 11 percent in 1985 to 80 percent in 2004. Subsequently, the GPRA Modernization Act of 2010 (GPRAMA) was enacted, which significantly expanded and enhanced the statutory framework for federal performance management. The Senate Committee on Homeland Security and Governmental Affairs report accompanying the bill that would become GPRAMA stated that agencies were not consistently using performance information to improve their management and results. The report cited the results of our 2007 survey of federal managers. That survey continued to show little change in managers' use of performance information. The report further stated that provisions in GPRAMA are intended to address those findings and increase the use of performance information to improve performance and results. For example, GPRAMA requires certain agencies to designate a subset of their respective goals as their highest priorities, known as agency priority goals, and to measure and assess progress toward those goals at least quarterly through data-driven reviews. Our recent work and surveys suggest that data-driven reviews are having their intended effect. For example, in July 2015, we found that agencies reported that their reviews had positive effects on progress toward agency goals and efforts to improve the efficiency of operations, among other things. 
In addition, for those managers who were familiar with their agencies’ data-driven reviews, our 2013 and 2017 surveys showed that the more managers viewed their programs as being subject to a review, the more likely they were to report their agencies’ reviews were driving results and conducted in line with our leading practices. Recognizing the important role these reviews were playing in improving data-driven decision making, our management agenda for the presidential and congressional transition in 2017 included a key action to expand the use of data-driven reviews beyond agency priority goals to other agency goals. More broadly, our recent surveys of federal managers have continued to show that reported government-wide uses of performance information generally have not changed or in some cases have declined. As we found in September 2017, and as illustrated in figure 2, the 2017 update to our index suggests that government-wide use of performance information did not improve between 2013 and 2017. In addition, it is statistically significantly lower relative to our 2007 survey, when we created the index. Moreover, in looking at the government-wide results on the 11 individual survey questions that comprise the index, we found few statistically significant changes in 2017 when compared to (1) our 2013 survey or (2) the year each question was first introduced. For example, in comparing 2013 and 2017 results, two questions had results that were statistically significantly different: The percentage of managers who reported that employees who report to them pay attention to their agency’s use of performance information was statistically significantly higher (from 40 to 46 percent). The percentage of managers who reported using performance information to adopt new program approaches or change work processes was statistically significantly lower (from 54 to 47 percent). As we stated in our September 2017 report, the decline on the latter question was of particular concern as agencies were developing plans to improve their efficiency, effectiveness, and accountability, as called for by an April 2017 memorandum from OMB. The Administration’s Plans for Federal Performance Management In early 2017, the administration announced several efforts intended to improve government performance. OMB issued several memorandums detailing the administration’s plans to improve government performance by reorganizing the government, reducing the federal workforce, and reducing federal agency burden. As part of the reorganization efforts, OMB and agencies were to develop government-wide and agency reform plans, respectively, designed to leverage various GPRAMA provisions. For instance, the April 2017 memorandum mentioned above stated that OMB intends to monitor implementation of the reforms using, among other things, agency priority goals. While many agency-specific organizational improvements were included in the President’s fiscal year 2019 budget, released in February 2018, OMB published additional government-wide and agency reform proposals in June 2018. The President’s Management Agenda (PMA), released in March 2018, outlines a long-term vision for modernizing federal operations and improving the ability of agencies to achieve outcomes. To address the issues outlined in the PMA, the administration established a number of cross-agency priority (CAP) goals. 
CAP goals, required by GPRAMA, are to address issues in a limited number of policy areas requiring action across multiple agencies, or management improvements that are needed across the government. The PMA highlights several root causes for the challenges the federal government faces. Among them is that agencies do not consistently apply data-driven decision-making practices. The PMA states that smarter use of data and evidence is needed to orient decisions and accountability around service and results. To that end, in March 2018, the administration established the Leveraging Data as a Strategic Asset CAP goal to improve the use of data in decision making to increase the federal government's effectiveness. Federal Performance Management Leadership Roles and Responsibilities Over the past 25 years, various organizations, roles, and responsibilities have been created by executive action or in law to provide leadership in federal performance management. At individual agencies and across the federal government, these organizations and officials have key responsibilities for improving performance, as outlined below.
OMB: At least every four years, OMB is to coordinate with other agencies to develop CAP goals, such as the one described earlier on leveraging data as an asset, to improve the performance and management of the federal government. OMB is also required to coordinate with agencies to develop annual federal government performance plans to define, among other things, the level of performance to be achieved toward the CAP goals. Following GPRAMA's enactment, OMB issued guidance for initial implementation, as required by the act, and continues to provide updated guidance in its annual Circular No. A-11, additional memorandums, and other means.
Chief Operating Officer (COO): The deputy agency head, or equivalent, is designated as the COO, with overall responsibility for improving agency management and performance through, among other things, the use of performance information.
President's Management Council (PMC): The PMC is composed of OMB's Deputy Director for Management and the COOs of major departments and agencies, among other individuals. Its responsibilities include improving overall executive branch management and implementing the PMA.
Performance Improvement Officer (PIO): Agency heads designate a senior executive as the PIO, who reports directly to the COO. The PIO is responsible for assisting the head of the agency and COO to ensure that agency goals are achieved through, among other things, the use of performance information.
Performance Improvement Council (PIC): The PIC is charged with assisting OMB to improve the performance of the federal government. It is chaired by the Deputy Director for Management at OMB and includes PIOs from each of the 24 Chief Financial Officers Act agencies, as well as other PIOs and individuals designated by the chair. Among its responsibilities, the PIC is to work to resolve government-wide or cross-cutting performance issues, and facilitate the exchange among agencies of practices that have led to performance improvements. Previously, the General Services Administration's (GSA) Office of Executive Councils provided analytical, management, and administrative support for the PIC, the PMC, and other government-wide management councils. 
In January 2018, the office was abolished and its functions, staff, and authorities, along with those of the Unified Shared Services Management Office, were reallocated to GSA's newly created Shared Solutions and Performance Improvement Office. Agencies' Use of Performance Information in Decision Making and Related Leading Practices Generally Has Not Improved Reported Use of Performance Information in Decision Making Generally Has Not Improved at Individual Agencies Since 2013 As at the government-wide level, where, as described earlier, the use of performance information did not change from 2013 to 2017, managers' reported use of performance information at most agencies also did not improve since 2013 (illustrated in figure 3). At the agency level, 3 of the 24 agencies had statistically significant changes in their index scores: 1 increase (National Science Foundation) and 2 decreases (Social Security Administration and the Office of Personnel Management). Also, in 2017, 6 agencies had results that were statistically significantly different (4 higher and 2 lower) than the government-wide average. Throughout the report, we highlight two different types of statistically significant results: changes from our last survey in 2013 and differences from the 2017 government-wide average. The former indicates when an agency's reported use of performance information or leading practices has measurably improved or declined. The latter indicates when it is statistically significantly higher or lower than the rest of government. For example, when a result is a statistically significant increase since 2013, as with the National Science Foundation index score in 2017, this suggests that the agency has adopted practices that led to a measurable increase in the use of performance information by managers. When a result is statistically significantly higher than the government-wide average, like GSA's 2017 index score, this suggests that the agency's use of performance information is among the highest results when compared to the rest of government. In either case, the results suggest the agency has taken actions that led to improvements in its use of performance information and could have insights into practices that led to relatively high levels of performance information use. Finally, when a result is a statistically significant decrease since 2013, as with the Social Security Administration's index score in 2017, or statistically significantly lower than the government-wide average, like the Department of Homeland Security's 2017 index score, this suggests the agency faces challenges that are hampering its ability to use performance information. Appendix III provides each agency's index scores from 2007, 2013, and 2017 to show changes between survey years. When we disaggregated the index and analyzed responses from the 11 questions that comprise it, which could help pinpoint particular actions that improved the use of performance information, we similarly found relatively few changes in agencies' recent results. Specifically, we identified 16 instances where agency responses on individual questions were statistically significantly different from 2013 to 2017 (10 increases and 6 decreases). This represents about 6 percent of the total possible responses to the 11 survey questions from each of the agencies. In addition, we found 12 instances where an agency's result on a question was statistically significantly higher (11) or lower (1) than the government-wide average in 2017. 
For example, the percentage of Social Security Administration (SSA) managers reporting that their peers use performance information to share effective approaches was statistically significantly higher than the government-wide average. Although SSA's index score had a statistically significant decline in 2017 compared to 2013, the agency's index score remains relatively high, as it has in prior years. The scope of our work has not allowed us to determine definitively what factors caused the decline in SSA's index score and whether the decline is likely to continue, although its result on this particular question may indicate a continued strength. Each agency's results on the 11 questions that comprise the index are presented in appendix I. The agencies' respective statistically significant results are identified in figure 4. Some agencies had statistically significant improvements on individual questions and could point to actions that led to improvements in their use of performance information. In figure 4, there are five agencies with statistically significant increases on responses to individual questions, where those results were not statistically significantly higher than the government-wide average (see arrows without plus signs for the Departments of Agriculture, Defense, and Justice; the Environmental Protection Agency; and the National Science Foundation). While these represent improvements, they should be considered in relation to the range of agency results and the government-wide average (provided in detail in the agency summaries in appendix I). For example, in 2017, the percentage of managers at the Department of Agriculture who reported that upper management uses performance information to inform decisions about program changes was statistically significantly higher than in 2013. However, the department's 2017 result (37 percent) was relatively low compared with the maximum agency result on that question (60 percent). Appendix I presents the results on the index and the 11 questions that comprise it for each of the 24 agencies. Individual Agencies' Reported Use of Leading Practices Generally Remains Unchanged When we compared government-wide and agency-level results on selected survey questions that reflect practices that promote the use of performance information, we found that results between 2013 and 2017 generally remained unchanged. As described earlier, there are 10 survey questions that both reflect the five leading practices identified in our past work and had statistically significant associations with higher index scores. As shown in figure 5, government-wide results on 2 of the 10 questions were statistically significantly different, both increases, from 2013 to 2017. Despite these two increases, the overall results suggest these practices are not widely followed government-wide. On most of the 10 questions, only about half (or fewer) of the managers reported their agencies were following them to a "great" or "very great" extent. When we analyzed agency-level responses to these 10 questions, we also found relatively few changes in recent results. Specifically, our analysis found 20 instances (16 increases and 4 decreases) where agencies' responses on individual questions were statistically significantly different from 2013 to 2017. 
This represents about 8 percent of the total possible responses to the 10 survey questions from each of the agencies. In addition, we found 10 instances where an agency's result on a question was statistically significantly higher (8) or lower (2) than the government-wide average in 2017. Each agency's results on these 10 questions are presented in appendix I, and the statistically significant results are identified in figure 6. Those agencies with results on individual questions that are either statistically significantly higher than 2013, higher than the 2017 government-wide average, or both may have taken actions in line with our leading practices for promoting the use of performance information. For example, the National Science Foundation had both types of statistically significant results on a question about having sufficient information on the validity of their performance data. Here, the agency's result increased 27 percentage points from 2013 to 2017. While the scope of our review does not allow us to definitively determine the reasons for the National Science Foundation's higher results, they suggest the agency has taken recent actions that greatly improved the availability and accessibility of information on the validity of performance data. In both 2013 and 2017, our analyses found this particular question to be the strongest predictor of higher performance information use when we tested for associations between the questions that reflect leading practices and our index. Managers Whose Programs Were Subject to Data-Driven Reviews Reported Greater Use of Performance Information and Leading Practices Our 2017 survey results show that managers who reported their programs were subject to data-driven reviews also were more likely to report using performance information in decision making to a greater extent (see figure 7). For the 35 percent of managers who reported being familiar with data-driven reviews, those who reported their programs had been subject to data-driven reviews to a "great" or "very great" extent had index scores that were statistically significantly higher than those whose programs were subject to these reviews to a lesser extent. Similarly, we found that being subject to data-driven reviews to a greater extent was also related to greater reporting of agencies following practices that can promote the use of performance information. As figure 8 shows, managers who reported their programs were subject to these reviews to a "great" or "very great" extent more frequently reported that their agencies followed the five leading practices that promote the use of performance information, as measured by the 10 related survey questions associated with higher scores on the index. For example, of the estimated 48 percent of managers who reported their programs were subject to data-driven reviews to a "great" or "very great" extent, 72 percent also reported that managers at their level (peers) effectively communicate performance information on a routine basis to a "great" or "very great" extent. Conversely, for the 24 percent of managers who reported their programs were subject to data-driven reviews to a "small" or "no" extent, only 30 percent reported that managers at their level do this to a "great" or "very great" extent. 
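To illustrate the kind of test behind the "statistically significant" comparisons of reported percentages in this section, the sketch below applies a simple two-proportion z-test to hypothetical counts. This is an unweighted stand-in for the design-adjusted comparisons used in the actual analysis, and the numbers are illustrative only.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: managers reporting "great"/"very great" out of all respondents,
# for two groups (e.g., programs subject to data-driven reviews vs. not).
successes = [72, 30]   # illustrative numerators
totals = [100, 100]    # illustrative denominators

stat, pvalue = proportions_ztest(count=successes, nobs=totals)
print(f"z = {stat:.2f}, p = {pvalue:.4f}")  # flag the difference as significant if p < 0.05
```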
Opportunities Exist for the Executive Branch to Increase the Use of Performance Information within Agencies Disparities Exist in the Use of Performance Information by Senior Agency Leaders and Managers at Lower Levels Our past work has found that the Executive Branch has taken steps to improve the use of performance information in decision making by senior leaders at federal agencies. However, our survey results indicate those steps have not led to similar improvements in use by managers at lower levels. Through its guidance to implement GPRAMA, OMB developed a framework for performance management in the federal government that involves agencies setting goals and priorities, measuring performance, and regularly reviewing and reporting on progress. This includes expectations for how agency senior leaders should use performance information to assess progress towards achieving agency priority goals through data-driven reviews, and strategic objectives through strategic reviews. For example, GPRAMA requires, and OMB's guidance reinforces, that data-driven reviews should involve the agency head, Chief Operating Officer, Performance Improvement Officer, and other senior officials responsible for leading efforts to achieve each goal. OMB's guidance also identifies ways in which agency leaders should use the results of those reviews to inform various decision-making activities, such as revising strategies, formulating budgets, and managing risks. Our past work also found that agencies made progress in implementing these reviews and using performance information. In July 2015, we found that agencies generally were conducting their data-driven reviews in line with GPRAMA requirements and our related leading practices, including that agency leaders used the reviews to drive performance improvement. In addition, in September 2017, we reported on selected agencies' experiences in implementing strategic reviews and found that the reviews helped direct leadership attention to progress on strategic objectives. Despite those findings, our survey results continue to show that the reported use of performance information by federal managers has generally not improved, and actually declined at some agencies. This could be because of the two different groups of agency officials covered by our work. GPRAMA's requirements, and the federal performance management framework established by OMB's guidance, apply at the agency-wide level and generally involve senior leaders. Our past work reviewing implementation of the act therefore focused on improvements in the use of performance information by senior leaders at the agency-wide level. In contrast, our surveys covered random samples of mid- and upper-level managers within those agencies, including at lower organizational levels such as component agencies. Their responses indicate that the use of performance information more broadly within agencies (at lower organizational levels) generally has not improved over time. The exception to this was managers whose programs were subject to the data-driven reviews required by GPRAMA. As described above, those managers were more likely to report greater use of performance information in their agencies. This reinforces the value of the processes and practices put in place by GPRAMA. 
Our survey results suggest that limited actions have been taken to diffuse processes and practices related to the use of performance information to lower levels within federal agencies, where mid-level and senior managers make decisions about managing programs and operations. Although OMB staff agreed that diffusing processes and practices to lower levels could lead to improved use of performance information, they told us they have not directed agencies to do so for a few reasons. First, OMB staff expressed concerns about potentially imposing a "one-size-fits-all" approach on agencies. They stated that agencies are best positioned to improve their managers' use of performance information, given their individual and unique missions and cultures, and the environments in which they operate. We agree that it makes sense for agencies to be able to tailor their approaches for those reasons. OMB's existing guidance provides an overarching framework that recognizes the need for flexibility and for agencies to tailor their approaches. Moreover, given the long-standing and cross-cutting nature of this challenge, a government-wide approach also would provide a consistent focus on improving the use of performance information more extensively within agencies. OMB staff also told us that they believed it would go beyond their mandate to direct agencies to extend GPRAMA requirements to lower levels. GPRAMA requires OMB to provide guidance to agencies to implement its requirements, which only apply at the agency-wide level. As noted earlier, however, GPRAMA also requires OMB to develop cross-agency priority (CAP) goals to improve the performance and management of the federal government. The President's Management Agenda established a CAP goal to leverage data as a strategic asset, in part, to improve the use of data for decision making and accountability throughout the federal government. This new CAP goal presents an opportunity for OMB and agencies to identify actions to expand the use of performance information in decision making throughout agencies. Plan for New CAP Goal Does Not Yet Contain Required Elements for Successful Implementation As of June 2018, the action plan for implementing the Leveraging Data as a Strategic Asset CAP goal is limited. According to the President's Management Agenda and initial CAP goal action plan, the goal primarily focuses on developing and implementing a long-term, enterprise-wide federal data strategy to better govern and leverage the federal government's data. It is through this strategy that, among other things, the administration intends to improve the use of data for decision making and accountability. However, the strategy is under development and not expected to be released until January 2019, with a related plan to implement it expected in April 2019. The existing action plan, released in March 2018 and updated in June 2018, does not yet include specific steps needed to improve the use of data, including performance information, more extensively within agencies. According to the action plan for the goal, potential actions currently under consideration focus on establishing agency "learning agendas" that prioritize the development and use of data and other evidence for decision-making; building agency capacity to use data and other evidence; and improving the timeliness of performance information and other data, and making that information available to decision makers and the public. 
Although developing learning agendas and building capacity could help improve the use of performance information in agencies, improving availability of data may be less effective. For example, as our past survey results have shown, increasing the availability of performance information has not resulted in corresponding increases in its use in decision making. We recognize that the CAP goal was created in March 2018. Nonetheless, it is important that OMB and its fellow goal leaders develop the action plan and related federal data strategy consistent with all key requirements to better ensure successful implementation. The action plan does not yet include complete information related to the following GPRAMA requirements: performance goals that define the level of performance to be achieved each year for the CAP goal; the various federal agencies, organizations, programs, and other activities that contribute to the CAP goal; performance measures to assess overall progress towards the goal as well as the progress of each agency, program, and other activity contributing to the goal; and clearly defined quarterly targets. Consistent with GPRAMA, Standards for Internal Control in the Federal Government identifies information that agencies are required to include in their plans to help ensure they achieve their goals. The standards state that objectives, such as improving the use of data in decision making, should be clearly defined to enable the identification of risks. Objectives are to be defined in specific terms so they can be understood at all levels of the entity (in this case, government-wide as well as within individual agencies). This involves defining what is to be achieved, who is to achieve it, how it will be achieved, and the time frames for achievement. Ensuring that future updates to the new CAP goal's action plan include all required elements is particularly important, as our previous work has found that some past CAP goal teams did not meet all planning and reporting requirements. For example, in May 2016 we found that most of the CAP goal teams we reviewed had not established targets for all performance measures they were tracking. This limited the transparency of their efforts and the ability to track progress toward established goals. We recommended that OMB, working with the Performance Improvement Council (PIC), report on actions that CAP goal teams are taking, or plan to take, to develop such targets and performance measures. OMB staff generally agreed and, in July 2017, told us they were working, where possible, to assist the development of measures for CAP goals. However, the recommendation has not been addressed and OMB staff said the next opportunity to address it would be when the administration established new CAP goals (which took place in March 2018). Following the initial release of the new CAP goals, CAP goal teams are to more fully develop the related action plans through quarterly updates. Given the ongoing importance of meeting these planning and reporting requirements, we will continue to monitor the status of actions to address this recommendation as implementation of the new CAP goals proceeds. Our Survey Results Identify Additional Opportunities for the PIC to Improve Federal Use of Performance Information While the PIC, which is chaired by OMB, has contributed to efforts to enhance the use of performance information, our survey results identify additional opportunities to further those efforts. 
The PIC's past efforts have included hosting various working groups and learning events for agency officials to provide performance management guidance, and developing resources with relevant practices. For example, the PIC created a working group focused on agency performance reviews, which was used to share recommendations for how agencies can implement reviews, along with a guide with practices for effectively implementing strategic reviews. In January 2018, staff supporting the PIC joined with staff from another GSA office to create a new group called Fed2Fed Solutions. This group consults with agencies and provides tailored support, such as data analysis and performance management training for agency officials, to help them address specific challenges related to organizational transformation, data-driven decision making, and other management improvement efforts. Our survey results identify useful information related to potential promising practices and challenges that OMB and the PIC could use to inform efforts to enhance the use of performance information more extensively within agencies (e.g., at lower levels). As was previously described, the PIC has responsibilities to (1) facilitate the exchange among agencies of proven practices, and (2) work to resolve government-wide or cross-cutting performance issues, such as challenges. Our analyses of 2017 survey results identified instances where agencies may have found effective ways to enhance the use of performance information by agency leaders and managers in decision making, as well as instances where agencies (and their managers) face challenges in doing so. Specifically, based on analyses of our survey responses, we identified 14 agencies that may have insights into specific practices that led to recent improvements in managers' use of performance information, or ways that they maintain relatively high levels of use by their managers when compared to the rest of the government. Figure 9 summarizes the agencies identified earlier in the report that had statistically significant increases, or results higher than the government-wide average, on our index or individual survey questions. As the figure shows, several agencies had statistically significant results across all three sets of analyses and therefore may have greater insights to offer: the General Services Administration, National Aeronautics and Space Administration, and the National Science Foundation. In addition, our analyses identified nine agencies where results suggest managers face challenges that have hampered their ability to use performance information. Figure 10 summarizes the agencies identified earlier in the report that had statistically significant decreases, or results lower than the government-wide average, on our index or individual survey questions. As the figure shows, the Office of Personnel Management had statistically significant decreases in all three sets of analyses. Four agencies (the Departments of the Treasury and Veterans Affairs, the Nuclear Regulatory Commission, and the Social Security Administration) were common to both of the figures above. That is, they had results that indicate they may have insights on some aspects of using performance information and face challenges in other aspects. As was mentioned earlier, to provide proper context, these results should be considered in relation to the range of agency results and the government-wide average (provided in detail in the agency summaries in appendix I). 
Given the prioritization of other activities, such as the recent creation of the Fed2Fed Solutions program, the PIC has not yet undertaken a systematic approach that could improve the use of performance information by managers at lower levels within agencies. Such an approach would involve identifying and sharing practices that have led to improved use, as well as identifying common or cross-cutting challenges that have hampered such use. The results of our analyses could help the PIC do so, and in a more targeted manner. By identifying and sharing proven practices, the PIC could further ensure that agency leaders and managers are aware of effective or proven ways they can use performance information to inform their decisions across the spectrum of activities they manage within their agencies. Those proven practices also may help agency leaders and managers resolve any identified challenges. Furthermore, in September 2017, we found that, for the estimated 35 percent of managers who reported familiarity with data-driven reviews, the more they viewed their programs being subject to a review, the more likely they were to report the reviews were driving results and were conducted in line with our leading practices for using performance information. Despite the reported benefits of and results achieved through data-driven reviews, they were not necessarily widespread. As noted above, GPRAMA requires agencies to conduct such reviews for agency priority goals, which represent a small subset of goals, and they are required at the departmental level. These reasons may explain why most managers reported they were not familiar with the reviews. As a result, we recommended that OMB should work with the PIC to identify and share among agencies practices for expanding the use of data-driven reviews. OMB staff agreed with our recommendation but have yet to address it. In June 2018, OMB updated its annual guidance to agencies to explicitly encourage them to expand data-driven reviews to include other goals, priorities, and management areas as applicable to improve organizational performance. However, as of June 2018, OMB and the PIC have yet to take any steps to identify and share practices for expanding the use of these reviews in line with our recommendation. Given the additional analyses we conducted for this report—which show that being subject to data-driven reviews is related to greater reported use of performance information and leading practices that promote such use—we continue to believe these further actions would help agencies implement these reviews more extensively. We reiterate the importance of the September 2017 recommendation and will continue to monitor OMB’s progress to address it. Conclusions For more than 20 years, our work has highlighted weaknesses in the use of performance information in federal decision making. While the Executive Branch has taken some actions in recent years, such as establishing a framework for performance management across the federal government, our survey results underscore that more needs to be done to improve the use of performance information more extensively within agencies and government-wide. The President’s Management Agenda and its related CAP goal to leverage data as a strategic asset present an opportunity to do so, as it aims to improve data-driven decision making. 
As OMB and its fellow goal leaders more fully develop the action plan for achieving this goal, providing additional details for its plans to improve data-driven decision making would help provide assurance that it can be achieved. As part of those initiatives, our survey results could provide a useful guide for targeting efforts. Officials at each agency could use these results to identify areas for additional analysis and potential actions that could help improve the use of performance information across the agency and at lower levels. Similarly, OMB and the PIC could use the results to identify broader issues in need of government-wide attention. It will also be important, however, for OMB and the PIC to go beyond this analysis and work with agencies to identify and share proven practices for increasing the use of performance information at lower levels within agencies, as well as challenges that may be hampering agencies' ability to do so. Recommendations for Executive Action We are making the following two recommendations to OMB: The Director of OMB should direct the leaders of the Leveraging Data as a Strategic Asset CAP Goal to ensure future updates to the action plan, and the resulting federal data strategy, provide additional details on improving the use of data, including performance information, more extensively within federal agencies. The action plan should identify performance goals; contributing agencies, organizations, programs, and other activities; those responsible for leading implementation within these contributors; planned actions; time frames; and means to assess progress. (Recommendation 1) The Director of OMB, in coordination with the PIC, should prioritize efforts to identify and share among agencies proven practices for increasing, and challenges that hamper, the use of performance information in decision making more extensively within agencies. At a minimum, this effort should involve the agencies that our survey suggests may offer such insights. (Recommendation 2) Agency Comments We provided a draft of this report to the Director of the Office of Management and Budget for review and comment. We also provided a draft of the report to the heads of each of the 24 federal agencies covered by our survey. OMB had no comments, and informed us that it would assess our recommendations and consider how best to respond. We are sending copies of this report to congressional requesters, the Director of the Office of Management and Budget, the heads of each of the 24 agencies, and other interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or mcneilt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. Key contributors to this report are listed in appendix IV. Appendix I: Summaries of Agency Survey Results For each of the 24 CFO Act agencies, this appendix summarizes, in chart form, the percentage of managers reporting a "great" or "very great" extent on the 11 questions that comprise the use of performance information index and on the 10 questions related to the leading practices that promote its use (for example, aligning agencywide goals, objectives, and measures; improving the usefulness of performance information; and communicating performance information frequently and effectively), compared with the government-wide results.
Appendix II: Objectives, Scope, and Methodology This report responds to a request that we analyze agency-level results from our 2017 survey of federal managers at the 24 agencies covered by the Chief Financial Officers (CFO) Act of 1990, as amended, to determine the extent to which agencies are using performance information. This report assesses the extent to which:
1. the reported use of performance information and related leading practices at the 24 agencies has changed compared to our prior survey in 2013;
2. being subject to data-driven reviews is related to managers' reported use of performance information and leading practices; and
3. the Executive Branch has taken actions to enhance agencies' use of performance information in various decision-making activities.
From November 2016 through March 2017, we administered our online survey to a stratified random sample of 4,395 individuals from a population of 153,779 mid- and upper-level civilian managers and supervisors at the 24 CFO Act agencies. The management levels covered general schedule (GS) or equivalent schedules at levels comparable to GS-13 through GS-15, and career Senior Executive Service (SES) or equivalent. We obtained the sample from the Office of Personnel Management's Enterprise Human Resources Integration database as of September 30, 2015, the most recent fiscal year data available at the time. The sample was stratified by agency and by whether the manager or supervisor was a member of the SES. 
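A minimal sketch of the stratified sampling step described above, written in Python with pandas; the file name, column names (agency, is_ses), and the proportional allocation across strata are illustrative assumptions rather than details of the actual sample design.

```python
import pandas as pd

# Hypothetical sampling frame: one row per mid- or upper-level manager/supervisor.
frame = pd.read_csv("ehri_frame.csv")  # columns assumed: agency, is_ses, ...
frame["stratum"] = frame["agency"] + "_" + frame["is_ses"].astype(str)

# Illustrative proportional allocation of the 4,395-person sample across strata.
total_n = 4395
alloc = (frame["stratum"].value_counts(normalize=True) * total_n).round().astype(int)

# Draw a simple random sample within each stratum.
sample = (
    frame.groupby("stratum", group_keys=False)
         .apply(lambda g: g.sample(n=min(alloc[g.name], len(g)), random_state=1))
)
print(sample["stratum"].value_counts())
```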
To help determine the reliability and accuracy of the database elements used to draw our sample of federal managers for the 2017 survey, we checked the data for reasonableness and the presence of any obvious or potential errors in accuracy and completeness and reviewed our past analyses of the reliability of this database. We concluded in our September 2017 report that the data used to draw our sample were sufficiently reliable for the purpose of the survey. For the 2017 survey, we received usable questionnaires from about 67 percent of the eligible sample. The weighted response rate at each agency generally ranged from 57 percent to 82 percent, except the Department of Justice, which had a weighted response rate of 36 percent. The overall survey results are generalizable to the population of managers government-wide and at each individual agency. To assess the potential bias from agencies with lower response rates, we conducted a nonresponse bias analysis using information from the survey and sampling frame as available. The analysis confirmed discrepancies in the tendency to respond to the survey related to agency and SES status. The analysis also revealed some differences in response propensity by age and GS level; however, the direction and magnitude of the differences on these factors were not consistent across agencies or strata. Our data may be subject to bias from unmeasured sources for which we cannot control. Results, and in particular estimates from agencies with low response rates such as the Department of Justice, should be interpreted with caution. However, the survey’s results are comparable to five previous surveys we conducted in 1997, 2000, 2003, 2007, and 2013. To address the first objective, we used data from our 2017 survey to update agency scores on our use of performance information index. This index, which was last updated using data from our 2013 survey, averages managers’ responses on 11 questions related to the use of performance information for various management activities and decision making. Using 2017 survey data, we conducted statistical analyses to ensure these 11 questions were still positively correlated. That analysis confirmed that no negative correlations existed and therefore no changes to the index were needed. Figure 11 shows the questions that comprise the index. After calculating agency index scores for 2017, we compared them to previous results from 2007 and 2013, and to the government-wide average for 2017, to identify any statistically significant differences. We focus on statistically significant results because these indicate that observed relationships between variables and differences between groups are likely to be valid, after accounting for the effects of sampling and other sources of survey error. For each of the 11 questions that comprise the index, we identified individual agency results, excluding missing and no basis to judge responses, and determined when they were statistically significantly different from (1) the agency’s results on the same question in 2013, or (2) the government-wide average results on the question in 2017. In this report, we analyzed and summarized the results of our 2017 survey of federal managers. Due to the limited scope of the engagement, we did not conduct additional audit work to determine what may have caused statistically significant changes between our 2017 and past survey results. 
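The index construction and correlation check described above can be sketched as follows; the 1-to-5 response coding, the column names, and the use of unweighted means are simplifying assumptions (the actual estimates reflect the survey weights and exclude missing and "no basis to judge" responses).

```python
import pandas as pd

# Hypothetical respondent-level file: q1..q11 coded 1 ("no extent") to 5 ("very great extent"),
# with missing and "no basis to judge" answers stored as NaN.
resp = pd.read_csv("survey_2017.csv")
items = [f"q{i}" for i in range(1, 12)]

# Check that the 11 items remain positively intercorrelated before averaging them.
print("minimum pairwise correlation:", resp[items].corr().min().min())

# Respondent index score = mean of the 11 items; agency score = mean across respondents.
resp["use_index"] = resp[items].mean(axis=1, skipna=True)
agency_scores = resp.groupby("agency")["use_index"].agg(["mean", "sem", "count"])
print(agency_scores.sort_values("mean", ascending=False))
```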
To further address this objective, we completed several statistical analyses that allowed us to assess the association between the index and 22 survey questions that we determined relate to leading practices that we previously found promote the use of performance information. See figure 12 for the 22 specific questions related to these five practices that we included in the analysis. When we individually tested these 22 survey questions (bivariate regression), we found that each was statistically significantly and positively related to the index in 2017. This means that each question, when tested in isolation from other factors, was associated with higher scores on the index. However, when all 22 questions were tested together (multivariate regression), we found that 5 questions continued to be positively and significantly associated with the index in 2017, after controlling for other factors. To conduct this multivariate analysis, we began with a base model that treated differences in managers' views of agency use of performance information as a function of the agency where they worked. We found, however, that a model based on agency alone had little predictive power (R-squared of 0.04). We next examined whether managers' responses to these questions reflecting practices that promote the use of performance information related to their perceptions of agency use of performance information, independent of agency. The results of this analysis are presented in table 1 below. Each coefficient reflects the increase in our index associated with a one-unit increase in the value of a particular survey question. Our final multivariate regression model had an R-squared of 0.67, suggesting that the variables in this model explain approximately 67 percent of the variation in the use index. We also tested this model controlling for whether a respondent was a member of the SES and found similar results. As shown above in table 1, five questions related to three of the leading practices that promote agencies' use of performance information were statistically significant in 2017. These results suggest that, when controlling for other factors, certain specific efforts to increase agency use of performance information—such as providing information on the validity of performance data—may have a higher return and lead to higher index scores. With respect to aligning agencywide goals, objectives, and measures, we found that each one-unit increase in the extent to which individuals felt that managers aligned performance measures with agencywide goals and objectives was associated with a 0.08 increase in their score on the use index. In terms of improving the usefulness of performance information, we found that having information on the validity of performance data for decision making was the strongest predictor in our model (0.18). As measured here, taking steps to ensure the performance information is useful and appropriate was associated with almost as large a change in a manager's index score (0.16). In terms of developing agency capacity to use performance information, we found that having sufficient analytical tools to collect, analyze, and use performance information (0.07), and providing or paying for training that would help link their programs to achievement of agency strategic goals (0.10), were also statistically significantly related to a manager's reported use of performance information.
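The general shape of the bivariate screens and the multivariate model described above can be sketched as follows. This is not GAO's model specification; the variable names (use_index, p1 through p22, agency) are placeholders, and the inclusion of agency indicators in the final model is an assumption made here for illustration.

```python
# Illustrative sketch of the bivariate and multivariate regressions described
# in the text, using ordinary least squares on placeholder variable names.
import pandas as pd
import statsmodels.formula.api as smf

PRACTICE_VARS = [f"p{i}" for i in range(1, 23)]  # the 22 leading-practice questions

def bivariate_screens(df: pd.DataFrame) -> dict:
    """Test each practice question against the use index in isolation,
    returning its coefficient and p-value."""
    results = {}
    for var in PRACTICE_VARS:
        fit = smf.ols(f"use_index ~ {var}", data=df).fit()
        results[var] = (fit.params[var], fit.pvalues[var])
    return results

def agency_only_model(df: pd.DataFrame):
    """Base model: the index as a function of agency alone (the text reports
    this kind of model had little predictive power, R-squared of about 0.04)."""
    return smf.ols("use_index ~ C(agency)", data=df).fit()

def multivariate_model(df: pd.DataFrame):
    """All 22 questions together, with agency indicators included here as an
    assumed control; fit.rsquared corresponds to the ~0.67 figure, and each
    coefficient is the change in the index per one-unit change in a question."""
    rhs = " + ".join(PRACTICE_VARS + ["C(agency)"])
    return smf.ols(f"use_index ~ {rhs}", data=df).fit()
```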
When we combined these results with what we previously found through a similar analysis of 2013 survey results in September 2014, we identified 10 questions that have had a statistically significant association with higher index scores. This reinforces the importance of the five leading practices to promote the use of performance information. For each of these questions, which are outlined in figure 13 below, we determined when agency results were statistically significantly different from 2013 results or the 2017 government-wide average. For the second objective, we examined differences in managers' use index scores and in their responses to questions related to practices that promote the use of performance information, based on the extent to which they reported their programs had been subject to agency data-driven reviews. We grouped managers based on the extent to which they reported their programs had been subject to these reviews, from "no extent" through "very great extent." We then calculated the average index scores for the managers in each of those five categories. We also examined differences in how managers responded to the 10 questions reflecting practices that can promote the use of performance information, based on the extent to which they reported their programs had been subject to data-driven reviews. We grouped managers into three categories based on the extent to which they reported their programs had been subject to these reviews (no-small extent, moderate extent, great-very great extent). We then compared how these groups responded to the 10 questions. For the third objective, we reviewed our past work that assessed Executive Branch activities to enhance the use of performance information; various resources (i.e., guidance, guides, and playbooks) developed by the Office of Management and Budget (OMB) and the Performance Improvement Council (PIC) that could support agencies' use of performance information; and the President's Management Agenda and related materials with information on cross-agency efforts to improve the use of data in federal decision making. Lastly, for the third objective, we also interviewed OMB and PIC staff about any actions they have taken, or planned to take, to further support the use of performance information across the federal government. We conducted this performance audit from October 2017 to September 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix III: Comparison of 2007, 2013, and 2017 Agency Use of Performance Information Index Scores

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the above contact, Benjamin T. Licht (Assistant Director) and Adam Miles (Analyst-in-Charge) supervised this review and the development of the resulting report. Arpita Chattopadhyay, Caitlin Cusati, Meredith Moles, Dae Park, Amanda Prichard, Steven Putansu, Alan Rozzi, Shane Spencer, and Khristi Wilkins also made key contributions. Robert Robinson developed the graphics for this report. Alexandra Edwards, Jeff DeMarco, Mark Kehoe, Ulyana Panchishin, and Daniel Webb verified the information presented in this report.
Related GAO Products

Results of the Periodic Surveys on Organizational Performance and Management Issues

Managing for Results: Further Progress Made in Implementing the GPRA Modernization Act, but Additional Actions Needed to Address Pressing Governance Challenges. GAO-17-775. Washington, D.C.: September 29, 2017.

Supplemental Material for GAO-17-775: 2017 Survey of Federal Managers on Organizational Performance and Management Issues. GAO-17-776SP. Washington, D.C.: September 29, 2017.

Program Evaluation: Annual Agency-wide Plans Could Enhance Leadership Support for Program Evaluations. GAO-17-743. Washington, D.C.: September 29, 2017.

Managing for Results: Agencies' Trends in the Use of Performance Information to Make Decisions. GAO-14-747. Washington, D.C.: September 26, 2014.

Managing for Results: Executive Branch Should More Fully Implement the GPRA Modernization Act to Address Pressing Governance Challenges. GAO-13-518. Washington, D.C.: June 26, 2013.

Managing for Results: 2013 Federal Managers Survey on Organizational Performance and Management Issues, an E-supplement to GAO-13-518. GAO-13-519SP. Washington, D.C.: June 26, 2013.

Program Evaluation: Strategies to Facilitate Agencies' Use of Evaluation in Program Management and Policy Making. GAO-13-570. Washington, D.C.: June 26, 2013.

Government Performance: Lessons Learned for the Next Administration on Using Performance Information to Improve Results. GAO-08-1026T. Washington, D.C.: July 24, 2008.

Government Performance: 2007 Federal Managers Survey on Performance and Management Issues, an E-supplement to GAO-08-1026T. GAO-08-1036SP. Washington, D.C.: July 24, 2008.

Results-Oriented Government: GPRA Has Established a Solid Foundation for Achieving Greater Results. GAO-04-38. Washington, D.C.: March 10, 2004.

Managing for Results: Federal Managers' Views on Key Management Issues Vary Widely Across Agencies. GAO-01-592. Washington, D.C.: May 25, 2001.

Managing for Results: Federal Managers' Views Show Need for Ensuring Top Leadership Skills. GAO-01-127. Washington, D.C.: October 20, 2000.

The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven. GAO/GGD-97-109. Washington, D.C.: June 2, 1997.
Why GAO Did This Study

To reform the federal government and make it more efficient and effective, agencies need to use data about program performance. The benefit of collecting performance information is only fully realized when it is used by managers to make decisions aimed at improving results. GAO was asked to review agencies' use of performance information. This report assesses, among other things, the extent to which: (1) 24 agencies' reported use of performance information and related leading practices has changed since 2013 and (2) the Executive Branch has taken actions to enhance the use of performance information. To address the first objective, GAO analyzed results from its 2017 survey of federal managers and compared them to 2013 results. The survey covered a stratified random sample of 4,395 managers from the 24 Chief Financial Officers Act agencies. The survey had a 67 percent response rate, and results can be generalized to the population of managers government-wide and at each agency. For the second objective, GAO reviewed agency documents and interviewed staff from OMB and the PIC.

What GAO Found

Agencies' reported use of performance information to make decisions, and leading practices that can promote such use, generally has not improved since GAO's last survey of federal managers in 2013. However, GAO's survey results continue to point to certain practices that could help agencies improve managers' use of performance information. For example, as shown in the table below, GAO's survey found that managers whose programs were subject to a greater extent to data-driven reviews (regular reviews used to assess progress on select agency goals) reported statistically significantly greater use of performance information to make decisions. The Executive Branch has begun taking steps to improve the use of performance information within agencies and across the government. For example, in the President's Management Agenda and government-wide reform plan, released in March and June 2018, respectively, the administration acknowledged the need to do more and announced a goal, among other actions, to improve the use of data in federal decision making. However, the Office of Management and Budget (OMB) and others responsible for this goal have yet to fully develop action plans to hold agencies accountable for achieving it. The Performance Improvement Council (PIC), which is chaired by OMB, has undertaken efforts to improve the use of performance information by, for example, creating a working group on agency performance reviews. But it has not yet taken a systematic approach to identify and share proven practices that led to, or challenges that may be hampering, increased use of performance information by managers. GAO's survey results identified agencies that may have insights into such practices and challenges. More fully developing action plans for the new goal, and identifying and sharing proven practices and challenges, could help ensure the Executive Branch takes further steps to improve the use of performance information by managers within agencies and across the federal government.

What GAO Recommends

To improve the use of performance information within agencies and across the federal government, GAO recommends that OMB work with (1) fellow goal leaders to more fully develop action plans for the new goal to improve the use of data and (2) the PIC to prioritize efforts to identify and share proven practices and challenges. OMB had no comments on this report.
Background

The cost of the census has been escalating over the last several decennials. The 2010 decennial was the costliest U.S. Census in history at about $12.3 billion, and was about 31 percent more costly than the $9.4 billion 2000 Census (in 2020 dollars). The average cost for counting a housing unit increased from about $16 in 1970 to around $92 in 2010 (in 2020 dollars). According to the Department of Commerce (Department), the total cost of the 2020 Census is now estimated to be approximately $15.6 billion, more than $3 billion higher than previously reported by the Bureau. Meanwhile, the return of census questionnaires by mail (the primary mode of data collection) declined over this period from 78 percent in 1970 to 63 percent in 2010 (see figure 1). Declining mail response rates—a key indicator in determining the cost-effectiveness of the census—are significant and lead to higher costs. This is because the Bureau sends temporary workers to each non-responding household to obtain census data. As a result, non-response follow-up is the Bureau's largest and most costly field operation. In many ways, the Bureau has had to invest substantially more resources each decade to conduct the enumeration. Achieving a complete and accurate census is becoming an increasingly daunting task, in part, because the nation's population is growing larger, more diverse, and more reluctant to participate. When the census misses a person who should have been included, it results in an undercount; conversely, an overcount occurs when an individual is counted more than once. Such errors are particularly problematic because of their impact on various subgroups. Minorities, renters, and children, for example, are more likely to be undercounted by the census. The challenges to an accurate count can be seen, for example, in the difficulties associated with counting people residing in unconventional and hidden housing units, such as converted basements and attics. In figure 2, what appears to be a small, single-family house could contain an apartment, as suggested by its two doorbells. If an address is not in the Bureau's address file, its residents are less likely to be included in the census.

The Bureau Plans to Rely Heavily on IT for the 2020 Census

The Bureau plans to rely heavily on both new and legacy IT systems and infrastructure to support the 2018 End-to-End Test and the 2020 Census operations. For example, the Bureau plans to deploy and use 43 systems in the 2018 End-to-End Test. Eleven of these systems are being developed or modified as part of an enterprise-wide initiative called Census Enterprise Data Collection and Processing (CEDCaP), which is managed within the Bureau's IT Directorate. This initiative is a large and complex modernization program intended to deliver a system-of-systems to support all of the Bureau's survey data collection and processing functions, rather than continuing to rely on unique, survey-specific systems with redundant capabilities. According to Bureau officials, the remaining 32 IT systems are being developed or modified by the 2020 Census Directorate or other Bureau divisions. To support the 2018 End-to-End Test, the Bureau plans to incrementally deploy and use the 43 systems for nine operations from December 2016 through the end of the test in April 2019.
These nine operations are: (1) in-office address canvassing, (2) recruiting staff for address canvassing, (3) training for address canvassing, (4) in-field address canvassing, (5) recruiting staff for field enumeration, (6) training for field enumeration, (7) self-response (i.e., Internet, phone, or paper), (8) field enumeration, and (9) tabulation and dissemination.

Key Risks Are Jeopardizing a Cost-Effective Enumeration

We added the 2020 Census to our list of high-risk programs in February 2017, because (1) innovations never before used in prior enumerations will not be fully tested; (2) the Bureau continues to face challenges in implementing and securing IT systems; and (3) the Bureau needs to control any further cost growth and develop reliable cost estimates. Each of these key risks is discussed in greater detail below; if not sufficiently addressed, these risks could adversely impact the cost and/or quality of the enumeration. Moreover, they compound the inherent challenges of conducting a successful census, such as the nation's increasingly diverse population and concerns over personal privacy.

Key Risk #1: Reduced Operational Testing Limits Confidence in 2020 Census Innovation Areas

The basic design of the enumeration—mail out and mail back of the census questionnaire with in-person follow-up for non-respondents—has been in use since 1970. However, a key lesson learned from the 2010 Census and earlier enumerations is that this "traditional" design is no longer capable of cost-effectively counting the population. In response to its own assessments, our recommendations, and studies by other organizations, the Bureau has fundamentally re-examined its approach for conducting the 2020 Census. Specifically, its plan for 2020 includes four broad innovation areas: re-engineering field operations, using administrative records, verifying addresses in-office, and developing an Internet self-response option (see table 1). If they function as planned, the Bureau initially estimated that these innovations could result in savings of over $5 billion (in 2020 dollars) when compared to its estimates of the cost for conducting the census with traditional methods. However, in June 2016, we reported that the Bureau's life-cycle cost estimate of $12.5 billion, developed in October 2015, was not reliable and did not adequately account for risk. As discussed earlier in this statement, the Department has recently updated this figure and now estimates a life-cycle cost of $15.6 billion. At this higher level, the cost savings would be reduced to around $1.9 billion. While the planned innovations could help control costs, they also introduce new risks, in part, because they include new procedures and technology that have not been used extensively in earlier decennials, if at all. Our prior work has shown the importance of the Bureau conducting a robust testing program, including the 2018 End-to-End Test. Rigorous testing is a critical risk mitigation strategy because it provides information on the feasibility and performance of individual census-taking activities, their potential for achieving desired results, and the extent to which they are able to function together under full operational conditions.
To address some of these challenges, we have made several recommendations aimed at improving reengineered field operations, using administrative records, verifying the accuracy of the address list, and securing census responses via the Internet. The Bureau has held a series of operational tests since 2012, but, according to the Bureau, has scaled back recent tests because of funding uncertainties. For example, the Bureau canceled the field components of the 2017 Census Test, including non-response follow-up, a key census operation. In November 2016, we reported that the cancellation of the 2017 field test was a lost opportunity to test, refine, and integrate operations and systems, and that it put more pressure on the 2018 End-to-End Test to demonstrate that enumeration activities will function under census-like conditions as needed for 2020. However, in May 2017, the Bureau scaled back the operational scope of the 2018 End-to-End Test and, of the three planned test sites, only the Rhode Island site would fully implement the 2018 End-to-End Test. The Washington and West Virginia state test sites would test just one field operation, address canvassing. In addition, due to budgetary concerns, the Bureau decided to remove three coverage measurement operations (and the technology that supports them) from the scope of the test. Without sufficient testing, operational problems can go undiscovered and the opportunity to improve operations will be lost, in part because the 2018 End-to-End Test is the last opportunity to demonstrate census technology and procedures across a range of geographic locations, housing types, and demographic groups.

Operational Issues Observed in the End-to-End Test Will Need to Be Addressed

On August 28, 2017, temporary census employees known as address listers began implementing the in-field component of address canvassing for the 2018 End-to-End Test. Listers walked the streets of designated census blocks at all three test sites to verify addresses and geographic locations. The operation ended on September 27, 2017. As part of our ongoing work, we visited all three test sites and observed 18 listers conduct address canvassing. Generally, we found that listers were able to conduct address canvassing as planned. However, we also noted several challenges. We shared the following preliminary observations from our site visits with the Bureau:

Internet connectivity was problematic at the West Virginia test site. We spoke to four census field supervisors who described certain areas as dead spots where Internet and cell phone service were not available. We also were told by those same supervisors that only certain cell service providers worked in certain areas. In order to access the Internet or cell service in those areas, census workers sometimes needed to drive several miles.

The allocation of lister assignments was not always optimal. Listers were supposed to be provided assignments close to where they live in order to take advantage of their local knowledge and to limit the number of miles driven by listers to and from their assignment area. Bureau officials told us this was a challenge at all three test sites. Moreover, at one site the area census manager told us that some listers were being assigned work in another county even though blocks were still unassigned closer to where they resided. Relying on local knowledge and limiting the number of miles can increase both the efficiency and effectiveness of address canvassing.
The assignment of some of the large blocks early in the operation was not occurring as planned. At all three 2018 End-to-End Test sites, Bureau managers had to manually assign some large blocks (some blocks had hundreds of housing units). It is important to assign large blocks early on because leaving the large blocks to be canvassed until the end of the operation could jeopardize the timely completion of address canvassing.

According to Bureau officials, during the test, completed address and map updates for some blocks did not properly transmit. This happened at all three test sites, and included data on 11 laptops for 25 blocks. The address and map information on seven of the laptops was permanently deleted. However, data on four laptops were still available. The Bureau is examining those laptops to determine what prevented the data from being transmitted. In Providence, Rhode Island, where the full test will take place, the Bureau recanvassed those blocks where data were lost to ensure that the address and map information going forward was correct. It will be important for the Bureau to understand what happened and ensure all address and map data are properly transmitted for the 2020 Census.

We have discussed these challenges with Bureau officials, who stated that overall they are satisfied with the implementation of address canvassing but also agreed that resolving challenges discovered during address canvassing, some of which can affect the operation's efficiency and effectiveness, will be important before the 2020 Census. We plan to issue a report early in 2018 on address canvassing at the three test sites.

Key Risk #2: The Bureau Continues to Face Challenges Implementing and Securing IT Systems

We have previously reported that the Bureau faced challenges in managing and overseeing IT programs, systems, and contractors supporting the 2020 Census. Specifically, it has been challenged in managing schedules, costs, contracts, governance and internal coordination, and security for its IT systems. As a result of these challenges, the Bureau is at risk of being unable to fully implement key IT systems necessary to support the 2020 Census and conduct a cost-effective enumeration. We have previously recommended that the Bureau take action to improve its implementation and management of IT in areas such as governance and internal coordination. We also have ongoing work reviewing each of these areas. Our ongoing work has indicated that the Bureau faces significant challenges in managing the schedule for developing and testing systems for the 2018 End-to-End Test that began in August 2017. In this regard, the Bureau still has significant development and testing work remaining. As of August 2017, of the 43 systems in the test, the Bureau reported that 4 systems had completed development and integration testing, while the remaining 39 systems had not completed these activities. Of these 39 systems, the Bureau reported that it had deployed a portion of the functionality for 21 systems to support address canvassing for the 2018 End-to-End Test; however, it had not yet deployed any functionality for the remaining 18 systems for the test. Figure 3 summarizes the development and testing status for the 43 systems planned for the 2018 End-to-End Test.
Moreover, due to challenges experienced during systems development, the Bureau has delayed key IT milestone dates (e.g., dates to begin integration testing) by several months for several of the systems in the 2018 End-to-End Test. Figure 4 depicts the delays to the deployment dates for the operations in the 2018 End-to-End Test, as of August 2017. Our ongoing work also indicates that the Bureau is at risk of not meeting the updated milestone dates. For example, in June 2017 the Bureau reported that at least two of the systems expected to be used in the self-response operation (the Internet self-response system and the call center system) are at risk of not meeting the delayed milestone dates. In addition, in September 2017 the Bureau reported that at least two of the systems expected to be used in the field enumeration operation (the enumeration system and the operational control system) are at risk of not meeting their delayed dates. Combined, these delays reduce the time available to conduct the security reviews and approvals for the systems being used in the 2018 End-to-End Test. We previously testified in May 2017 that the Bureau faced similar challenges leading up to the 2017 Census Test, including experiencing delays in system development that led to compressed time frames for security reviews and approvals. Specifically, we noted that the Bureau did not have time to thoroughly assess the low-impact components of one system and complete penetration testing for another system prior to the test, but accepted the security risks and uncertainty due to compressed time frames. We concluded that, for the 2018 End-to-End Test, it will be important that these security assessments are completed in a timely manner and that risks are at an acceptable level before the systems are deployed. The Bureau noted that, if it continues to be behind schedule, key field operations for the 2018 End-to-End Test (such as non-response follow-up) could be delayed or canceled, which may affect the Bureau's ability to meet the test's objectives. As we stated earlier, without sufficient testing, operational problems can go undiscovered and the opportunity to improve operations will be lost. Bureau officials are evaluating options to decrease the impact of these delays on integration testing and security review activities by, for example, utilizing additional staff. We have ongoing work reviewing the Bureau's development and testing delays and the impacts of these delays on systems readiness for the 2018 End-to-End Test. The Bureau faces challenges in reporting and controlling IT cost growth. In April 2017, the Bureau briefed us on its efforts to estimate the costs for the 2020 Census, during which it presented IT costs of about $2.4 billion from fiscal years 2018 through 2021. Based on this information and other corroborating IT contract information provided by the Bureau, we testified in May 2017 that the Bureau had identified at least $2 billion in IT costs. However, in June 2017, Bureau officials in the 2020 Census Directorate told us that the data they provided in April 2017 did not reflect all IT costs for the 2020 program. The officials provided us with an analysis of the Bureau's October 2015 cost estimate that identified $3.4 billion in total IT costs from fiscal years 2012 through 2023. These costs included, among other things, those associated with system engineering, test and evaluation, and infrastructure, as well as a portion of the costs for the CEDCaP program.
Yet, our ongoing work determined the Bureau’s $3.4 billion cost estimate from October 2015 did not reflect its current plans for acquiring IT to be used during the 2020 Census and that the related costs are likely to increase: In August 2016, the Bureau awarded a technical integration contract for about $886 million, a cost that was not reflected in the $3.4 billion expected IT costs. More recently, in May 2017, we testified that the scope of work for this contract had increased since the contract was awarded; thus, the corresponding contract costs were likely to rise above $886 million, as well. In March 2017, the Bureau reported that the contract associated with the call center and IT system to support the collection of census data over the phone was projected to overrun its initial estimated cost by at least $40 million. In May 2017, the Bureau reported that the CEDCaP program’s cost estimate was increasing by more than $400 million—from its original estimate of $548 million in 2013 to a revised estimate of $965 million in May 2017. In June 2017, the Bureau awarded a contract for mobile devices and associated services for about $283 million, an amount that is about $137 million higher than the cost for these devices and services identified in its October 2015 estimate. As a result of these factors, the Bureau’s $3.4 billion estimate of IT costs is likely to be at least $1.4 billion higher, thus increasing the total costs to at least $4.8 billion. Figure 5 identifies the Bureau estimate of total IT costs associated with the 2020 program as of October 2015, as well as anticipated cost increases as of August 2017. IT cost information that is accurately reported and clearly communicated is necessary so that Congress and the public have confidence that taxpayer funds are being spent in an appropriate manner. However, changes in the Bureau’s reporting of these total costs, combined with cost growth since the October 2015 estimate, raise questions as to whether the Bureau has a complete understanding of the IT costs associated with the 2020 program. In early October 2017, the Secretary of Commerce testified that he expected the total IT costs for the 2020 Census to be about $4.96 billion. This estimate of IT costs is approximately $1.6 billion higher than the Bureau’s October 2015 estimate and further confirms our analysis of expected IT cost increases discussed above. As of late October 2017, the Bureau and Department were still finalizing the documentation used to develop the new cost estimate. After these documents are complete and made available for inspection, as part of our ongoing work, we plan to evaluate whether this updated IT cost estimate includes the cost increases, discussed above, that were not included in the October 2015 estimate. Our ongoing work also determined that the Bureau faces challenges in managing its significant contractor support. The Bureau is relying on contractor support in many key areas of the 2020 Census. For example, it is relying on contractors to develop a number of key systems and components of the IT infrastructure. These activities include (1) developing the IT platform that is intended to be used to collect data from those responding via the Internet, telephone, and non-response follow-up activities; (2) procuring the mobile devices and cellular service to be used for non-response follow-up; and (3) developing the infrastructure in the field offices. 
According to Bureau officials, contractors are also providing support in areas such as fraud detection, cloud computing services, and disaster recovery. In addition to the development of key technology, the Bureau is relying on contractor support for integrating all of the key systems and infrastructure. The Bureau awarded a contract to integrate the 2020 Census systems and infrastructure in August 2016. The contractor's work was to include evaluating the systems and infrastructure and acquiring the infrastructure (e.g., cloud or data center) to meet the Bureau's scalability and performance needs. It was also to include integrating all of the systems, supporting technical testing activities, and developing plans for ensuring the continuity of operations. Since the contract was awarded, the Bureau has modified the scope to also include assisting with operational testing activities, conducting performance testing for two Internet self-response systems, and providing technical support for the implementation of the paper data capture system. However, our ongoing work has indicated that the Bureau is facing staffing challenges that could impact its ability to manage and oversee the technical integration contractor. Specifically, the Bureau is managing the integration contractor through a government program management office, but this office is still filling vacancies. As of October 2017, the Bureau reported that 35 of the office's 58 federal employee positions were vacant. As a result, this program management office may not be able to provide adequate oversight of contractor cost, schedule, and performance. The delays during the 2017 Test and preparations for the 2018 End-to-End Test raise concerns regarding the Bureau's ability to effectively perform contractor management. As we reported in November 2016, a greater reliance on contractors for these key components of the 2020 Census requires the Bureau to focus on sound management and oversight of the key contracts, projects, and systems. As part of our ongoing work, we plan to monitor the Bureau's progress in managing its contractor support. Effective IT governance can drive change, provide oversight, and ensure accountability for results. Further, effective IT governance was envisioned in the provisions referred to as the 2014 Federal Information Technology Acquisition Reform Act (FITARA), which strengthened and reinforced the role of the departmental CIO. The component CIO also plays a role in effective IT governance, subject to the oversight and policies of the parent department or agency implementing FITARA. To ensure executive-level oversight of the key systems and technology, the Bureau's CIO (or a representative) is a member of the governance boards that oversee all of the operations and technology for the 2020 Census. However, in August 2016 we reported on challenges the Bureau has had with IT governance and internal coordination, including weaknesses in its ability to monitor and control IT project costs, schedules, and performance. We made several recommendations to the Department of Commerce to direct the Bureau to, among other things, better ensure that risks are adequately identified and schedules are aligned. The Department agreed with our recommendations. However, as of October 2017, the Bureau had only fully implemented one recommendation and had taken initial steps toward implementing others.
Further, given the schedule delays and cost increases previously mentioned, and the vast amount of development, testing, and security assessments left to be completed, we remain concerned about executive-level oversight of systems and security. Moving forward, it will be important that the CIO and other Bureau executives continue to use a collaborative governance approach to effectively manage risks and ensure that the IT solutions meet the needs of the agency within cost and schedule. As part of our ongoing work, we plan to monitor the steps the Bureau is taking to effectively oversee and manage the development and acquisition of its IT systems. In November 2016, we described the significant challenges that the Bureau faced in securing systems and data for the 2020 Census, and we noted that tight time frames could exacerbate these challenges. Two such challenges were (1) ensuring that individuals gain only limited and appropriate access to the 2020 Census data, including personally identifiable information (PII) (e.g., name, personal address, and date of birth), and (2) making certain that security assessments were completed in a timely manner and that risks were at an acceptable level. Protecting PII, for example, is especially important because a majority of the 43 systems to be used in the 2018 End-to-End Test contain PII, as reflected in figure 6. To address these and other challenges, federal law and guidance specify requirements for protecting federal information and information systems, such as those to be used in the 2020 Census. Specifically, the Federal Information Security Management Act of 2002 and the Federal Information Security Modernization Act of 2014 (FISMA) require executive branch agencies to develop, document, and implement an agency-wide program to provide security for the information and information systems that support operations and assets of the agency. Accordingly, the National Institute of Standards and Technology (NIST) developed risk management framework guidance for agencies to follow in developing information security programs. Additionally, the Office of Management and Budget's (OMB) revised Circular A-130 on managing federal information resources required agencies to implement the NIST risk management framework to integrate information security and risk management activities into the system development life cycle. In accordance with FISMA, NIST guidance, and OMB guidance, the Office of the CIO established a risk management framework. This framework requires that system developers ensure that each of the systems undergoes a full security assessment, and that system developers remediate critical deficiencies. In addition, according to the Bureau's framework, system developers must ensure that each component of a system has its own system security plan, which documents how the Bureau plans to implement security controls. As a result, system developers for a single system might develop multiple system security plans, which all have to be approved as part of the system's complete security documentation. We have ongoing work that is reviewing the extent to which the Bureau's framework meets the specific requirements of the NIST guidance. According to the Bureau's framework, each of the 43 systems in the 2018 End-to-End Test will need to have complete security documentation (such as system security plans) and an approved authorization to operate prior to use in the 2018 End-to-End Test.
However, our ongoing work indicates that, while the Bureau is completing these steps for the 43 systems to be used in the 2018 End-to-End Test, significant work remains. Specifically, as we reported in October 2017:

None of the 43 systems are fully authorized to operate through the completion of the 2018 End-to-End Test. Bureau officials from the CIO's Office of Information Security stated that these systems will need to be reauthorized because, among other things, they have additional development work planned; are being moved to a different infrastructure environment (e.g., from a data center to a cloud-based environment); or have a current authorization that expires before the completion of the 2018 End-to-End Test. The amount of work remaining is concerning because the test has already begun and the delays experienced in system development and testing mentioned earlier reduce the time available for performing the security assessments needed to fully authorize these systems before the completion of the 2018 End-to-End Test.

Thirty-seven systems have a current authorization to operate, but the Bureau will need to reauthorize these systems before the completion of the 2018 End-to-End Test. This is due to the reasons mentioned previously, such as additional development work planned and changes to the infrastructure environments. Two systems have not yet obtained an authorization to operate. For the remaining four systems, the Bureau has not yet provided us with documentation about the current authorization status. Figure 7 depicts the authorization to operate status for the systems being used in the 2018 End-to-End Test, as reported by the Bureau.

Because many of the systems that will be a part of the 2018 End-to-End Test are not yet fully developed, the Bureau has not finalized all of the security controls to be implemented; assessed those controls; developed plans to remediate control weaknesses; and determined whether there is time to fully remediate any deficiencies before the systems are needed for the test. In addition, as discussed earlier, the Bureau is facing system development challenges that are delaying the completion of milestones and compressing the time available for security testing activities. While the large-scale technological changes (such as Internet self-response) increase the likelihood of efficiency and effectiveness gains, they also introduce many information security challenges. The 2018 End-to-End Test also involves collecting PII on hundreds of thousands of households across the country, which further increases the need to properly secure these systems. Thus, it will be important that the Bureau provides adequate time to perform these security assessments, completes them in a timely manner, and ensures that risks are at an acceptable level before the systems are deployed. We plan to continue monitoring the Bureau's progress in securing its IT systems and data as part of our ongoing work.

Key Risk #3: Lack of Reliable Cost Estimates Limits Support for 2020 Census Funding

Earlier this month, the Department announced that it had updated the October 2015 life-cycle cost estimate and now projects the life-cycle cost of the 2020 Census will be $15.6 billion, more than a $3 billion (27 percent) increase over the Bureau's earlier estimate. The higher estimated life-cycle cost is due, in part, as we reported in June 2016, to the Bureau's failure to meet best practices for a quality cost estimate.
Specifically, we reported that, although the Bureau had taken steps to improve its capacity to carry out an effective cost estimate, such as establishing an independent cost estimation office, its October 2015 version of the estimate for the 2020 Census only partially met the characteristics of two best practices (comprehensive and accurate) and minimally met the other two (well-documented and credible). We also reported that risks were not properly accounted for in the cost estimate. We recommended that the Bureau take action to ensure its 2020 Census cost estimate meets all four characteristics of a reliable cost estimate, as well as properly account for risk to ensure there are appropriate levels for budgeted contingencies. The Bureau agreed with our recommendations. In response, the Department of Commerce reported that in May 2017, a multidisciplinary team was created to evaluate the 2020 Census program and to produce an independent cost estimate. Factors driving the increased cost estimate include changes to assumptions relating to self-response rates and wage levels for temporary census workers, as well as the fact that major contracts and IT scale-up plans and procedures were not effectively planned, managed, and executed. The new estimate also includes a contingency of 10 percent of estimated costs per year as insurance against "unknown-unknowns," such as a major cybersecurity event. The Bureau and Department are still finalizing the documentation used to develop the $15.6 billion cost estimate. Until these documents are complete and made available for inspection, we cannot determine the reliability of the estimate. We will review the documentation when it is available. In order for the estimate to be deemed high quality, and thus serve as the basis for any 2020 Census annual budgetary figures, the new cost estimate will need to address the following four best practices, and do so as quickly as possible given the expected ramp-up in spending:

Comprehensive. To be comprehensive, an estimate should have enough detail to ensure that cost elements are neither omitted nor double-counted, and all cost-influencing assumptions are detailed in the estimate's documentation, among other things, according to best practices. In June 2016, we reported that, while Bureau officials were able to provide us with several documents that included projections and assumptions that were used in the cost estimate, we found the estimate to be partially comprehensive because it was unclear if all life-cycle costs were included in the estimate or if the cost estimate completely defined the program.

Accurate. Accurate estimates are unbiased and contain few mathematical mistakes. We reported in June 2016 that the estimate partially met best practices for this characteristic, in part because we could not independently verify the calculations the Bureau used within its cost model, which the Bureau had not documented or explained.

Well-documented. Cost estimates are considered valid if they are well-documented to the point they can be easily repeated or updated and can be traced to original sources through auditing, according to best practices. In June 2016, we reported that, while the Bureau provided some documentation of supporting data, it did not describe how the source data were incorporated.

Credible. Credible cost estimates must clearly identify limitations due to uncertainty or bias surrounding the data or assumptions, according to best practices.
In June 2016, we reported that the estimate minimally met best practices for this characteristic, in part because the Bureau carried out its risk and uncertainty analysis only for about $4.6 billion (37 percent) of the $12.5 billion total estimated life-cycle cost, excluding, for example, consideration of uncertainty over the decennial census's estimated share of the total cost of CEDCaP.

Continued Management Attention Needed to Keep Preparations on Track and Help Ensure a Cost-Effective Enumeration

2020 Challenges Are Symptomatic of Deeper Long-Term Organizational Issues

The difficulties facing the Bureau's preparations for the decennial in such areas as planning and testing; managing and overseeing IT programs, systems, and contractors supporting the enumeration; developing reliable cost estimates; prioritizing decisions; and managing schedules, among other challenges, are symptomatic of deeper organizational issues. Following the 2010 Census, a key lesson we identified for 2020 was the need to ensure that the Bureau's organizational culture and structure, as well as its approach to strategic planning, human capital management, internal collaboration, knowledge sharing, capital decision-making, risk and change management, and other internal functions, are aligned toward delivering more cost-effective outcomes. The Bureau has made improvements over the last decade, and continued progress will depend in part on sustaining efforts to strengthen risk management activities, enhance systems testing, bring in experienced personnel to key positions, implement our recommendations, and meet regularly with officials from its parent agency, the Department of Commerce. Going forward, our experience has shown that the key elements needed to make progress in high-risk areas are top-level attention by the administration and agency officials to (1) leadership commitment, (2) ensuring capacity, (3) developing a corrective action plan, (4) regular monitoring, and (5) demonstrated progress. Although important steps have been taken in at least some of these areas, overall, far more work is needed. On the one hand, the Secretary of Commerce has taken several actions toward demonstrating leadership commitment. For example, the previously noted multidisciplinary review team included members with Bureau leadership experience, as well as members with private sector technology management experience. Additional program evaluation and the independent cost estimate were produced by a team from the Commerce Secretary's Office of Acquisition Management that included a member detailed from OMB. Commerce also reports that senior officials are now actively involved in the management and oversight of the decennial. Likewise, with respect to monitoring, the Commerce Secretary reports having weekly 2020 Census oversight reviews with senior Bureau staff and will require metric tracking and program execution status on a real-time basis. On the other hand, demonstrating the capacity to address high-risk concerns remains a challenge. For example, our ongoing work has indicated that the Bureau is facing staffing challenges that could impact its ability to manage and oversee the technical integration contractor. Specifically, the Bureau is managing the integration contractor through a government program management office, but this office is still filling vacancies. As of October 2017, the Bureau reported that 35 of 58, or 60 percent, of the office's federal employee positions were vacant.
As a result, this program management office may not be able to provide adequate oversight of contractor cost, schedule, and performance. In the months ahead, we will continue to monitor the Bureau's progress in addressing each of the five elements essential for reducing the risk to a cost-effective enumeration.

Leadership Continuity Will Be Critical for Keeping Efforts on Track

At a time when strong Bureau management is needed, vacancies in the agency's two top positions—Director and Deputy Director—are not helpful for keeping 2020 preparations on track. These vacancies are due to the previous director's retirement on June 30, 2017, and the previous deputy director's appointment to be the Chief Statistician of the United States within the Office of Management and Budget in January 2017. Although interim leadership has since been named, in our prior work we have noted how openings in the Bureau's top position make it difficult to ensure accountability and continuity, as well as to develop and sustain efforts that foster change, produce results, mitigate risks, and control costs over the long term. The census director is appointed by the President, by and with the advice and consent of the Senate, without regard to political affiliation. The director's term is a fixed 5-year term of office, and runs in 5-year increments. An individual may be reappointed and serve 2 full terms as director. The director's position was first filled this way beginning on January 1, 2012, and cycles every fifth year thereafter. Because the new term began on January 1, 2017, the time that elapses until a new director is confirmed counts against the 5-year term of office. As a result, the next director's tenure will be less than 5 years. Going forward, filling these top two slots should be an important priority. On the basis of our prior work, key attributes of a census director, in addition to the obvious ones of technical expertise and the ability to lead large, long-term, and high-risk programs, could include abilities in the following areas:

Strategic Vision. The Director needs to build a long-term vision for the Bureau that extends beyond the current decennial census. Strategic planning, human-capital succession planning, and life-cycle cost estimates for the Bureau all span the decade.

Sustaining Stakeholder Relationships. The Director needs to continually expand and develop working relationships and partnerships with governmental, political, and other professional officials in both the public and private sectors to obtain their input, support, and participation in the Bureau's activities.

Accountability. The life-cycle cost for a decennial census spans a decade, and decisions made early in the decade about the next decennial census guide the research, investments, and tests carried out throughout the decade. Institutionalizing accountability over an extended period may help long-term decennial initiatives provide meaningful and sustainable results.

Further Actions Needed on Our Recommendations

Over the past several years, we have issued numerous reports that underscored the fact that, if the Bureau is to successfully meet its cost savings goal for the 2020 Census, it needs to take significant actions to improve its research, testing, planning, scheduling, cost estimation, system development, and IT security practices. Over the past decade, we have made 84 recommendations specific to the 2020 Census to help address these and other issues.
The Bureau has generally agreed with those recommendations; however, 36 of them had not been implemented as of October 2017. We have designated 20 of these recommendations as a priority for the Department of Commerce, and 5 have been implemented. In August 2017, we sent the Secretary of Commerce a letter that identified our open priority recommendations at the Department, 15 of which concern the 2020 Census. We believe that attention to these recommendations is essential for a cost-effective enumeration. The recommendations included implementing reliable cost estimation and scheduling practices in order to establish better control over program costs, as well as taking steps to better position the Bureau to develop an Internet response option for the 2020 Census. Appendix I summarizes our priority recommendations related to the 2020 Census and the actions the Department has taken to address them. On October 3, 2017, in response to our August 2017 letter, the Commerce Secretary noted that he shared our concerns about the 2020 Census and acknowledged that some of the programs had not worked as planned and were not delivering the savings that were promised. The Commerce Secretary also stated that he intends to improve the timeliness of implementing our recommendations. We meet quarterly with Bureau officials to discuss the progress and status of open recommendations related to the 2020 Census. We are encouraged by the actions taken by the Department and the Bureau in addressing our recommendations. Implementing our recommendations in a complete and timely manner is important because it would improve the management of the 2020 Census and help to mitigate continued risks. In conclusion, while the Bureau has made progress in revamping its approach to the census, it faces considerable challenges and uncertainties in (1) implementing key cost-saving innovations and ensuring they function under operational conditions; (2) managing the development and security of key IT systems; and (3) developing a quality cost estimate for the 2020 Census and preventing further cost increases. Without timely and appropriate actions, these challenges could adversely affect the cost, accuracy, and schedule of the enumeration. For these reasons, the 2020 Census is a GAO high-risk area. Going forward, continued management and congressional attention—such as hearings like this one—will be vital for ensuring risks are managed, preparations stay on track, and the Bureau is held accountable for implementing the enumeration as planned. We will continue to assess the Bureau's efforts to conduct a cost-effective enumeration and look forward to keeping Congress informed of the Bureau's progress. Chairman Johnson, Ranking Member McCaskill, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have.

GAO Contacts and Staff Acknowledgments

If you have any questions about this statement, please contact Robert Goldenkoff at (202) 512-2757 or by e-mail at goldenkoffr@gao.gov or David A. Powner at (202) 512-9286 or by e-mail at pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
Other key contributors to this testimony include Lisa Pearson (Assistant Director); Jon Ticehurst (Assistant Director); Katherine Wulff (Analyst in Charge); Mark Abraham; Brian Bothwell; Jeffrey DeMarco; Hoyt Lacy; Jason Lee; Ty Mitchell; LaSonya Roberts; Kate Sharkey; Andrea Starosciak; Umesh Thakkar; and Timothy Wexler. Appendix I: Priority Recommendations from GAO’s Work Related to the 2020 Census The Department of Commerce and Census Bureau have taken some actions to address our recommendations related to implementation of the 2020 Census; however, a large number of recommendations remain open. Since just prior to the 2010 Census, we have made 84 recommendations in 23 reports to the Department of Commerce and Census Bureau aimed at helping the Bureau prepare for and implement a successful 2020 Census (table 1). Of those 84, the Department of Commerce and the Census Bureau have implemented 48 recommendations. Thirty-six recommendations require additional action. Of these 84 recommendations, we have designated 20 as priorities for Commerce to address. The Census Bureau has taken some action on our priority recommendations, implementing 5 of the 20 priority recommendations we have made. The following table presents each of the 20 priority recommendations along with a summary of actions taken to address it.
Why GAO Did This Study
One of the Bureau's most important functions is to conduct a complete and accurate decennial census of the U.S. population. The decennial census is mandated by the Constitution and provides vital data for the nation. A complete count of the nation's population is an enormous undertaking as the Bureau seeks to control the cost of the census, implement operational innovations, and use new and modified IT systems. In recent years, GAO has identified challenges that raise serious concerns about the Bureau's ability to conduct a cost-effective count. For these reasons, GAO added the 2020 Census to its High-Risk List in February 2017. In light of these challenges, GAO was asked to testify about the reasons the 2020 Census was placed on the High-Risk List. To do so, GAO summarized its prior work regarding the Bureau's planning efforts for the 2020 Census. GAO also included observations from its ongoing work on the 2018 End-to-End Test. This information is related to, among other things, recent decisions on preparations for the 2020 Census; progress on key systems to be used for the 2018 End-to-End Test, including the status of IT security assessments; execution of the address canvassing operation at the test sites; and efforts to update the life-cycle cost estimate.
What GAO Found
GAO added the 2020 Census to its High-Risk List because of challenges associated with (1) developing and testing key innovations; (2) implementing and securing IT systems; and (3) controlling any further cost growth and preparing reliable cost estimates. The Census Bureau (Bureau) is planning several innovations for the 2020 Decennial Census, including re-engineering field operations by relying on automation, using administrative records to supplement census data, verifying addresses in-office using on-screen imagery, and allowing the public to respond using the Internet. These innovations show promise for controlling costs, but they also introduce new risks, in part because they have not been used extensively in earlier enumerations, if at all. As a result, robust testing is needed to ensure that key systems and operations will function as planned. However, citing budgetary uncertainties, the Bureau canceled its 2017 field test and then scaled back its 2018 End-to-End Test. Without sufficient testing, operational problems can go undiscovered and the opportunity to improve operations will be lost, as key census-taking activities will not be tested across a range of geographic locations, housing types, and demographic groups. The Bureau continues to face challenges in managing and overseeing the information technology (IT) programs, systems, and contracts supporting the 2020 Census. For example, GAO's ongoing work indicates that the system development schedule leading up to the 2018 End-to-End Test has experienced several delays. Further, the Bureau has not addressed several security risks and challenges to secure its systems and data, including making certain that security assessments are completed in a timely manner and that risks are at an acceptable level. Given that certain operations for the 2018 End-to-End Test began in August 2017, it is important that the Bureau quickly address these challenges. GAO plans to monitor the Bureau's progress as part of its ongoing work. In addition, the Bureau needs to control any further cost growth and develop cost estimates that reflect best practices.
Earlier this month, the Department of Commerce (Department) announced that it had updated the October 2015 life-cycle cost estimate and now projects that the life-cycle cost of the 2020 Census will be $15.6 billion, a more than $3 billion (27 percent) increase over its earlier estimate. The higher estimated life-cycle cost is due, in part, to the Bureau's failure to meet best practices for a quality cost estimate. The Bureau and Department are still finalizing the documentation used to develop the $15.6 billion cost estimate. Until these documents are complete and made available for inspection, GAO cannot determine the reliability of the estimate.
What GAO Recommends
Over the past decade, we have made 84 recommendations specific to the 2020 Census to address the issues raised in this testimony and others. The Bureau generally has agreed with our recommendations. As of October 2017, 36 recommendations had not been implemented.
Background
While no commonly accepted definition of a community bank exists, they are generally smaller banks that provide banking services to the local community and have management and board members who reside in the local community. In some of our past reports, we defined community banks as those with under $10 billion in total assets. However, many banks have assets well below $10 billion: data from the financial condition reports that institutions submit to regulators (Call Reports) indicated that, of the more than 6,100 banks in the United States, about 90 percent had assets below about $1.2 billion as of March 2016. Based on our prior interviews and reviews of documents, regulators and others have observed that small banks tend to differ from larger banks in their relationships with customers. Large banks are more likely to engage in transactional banking, which focuses on the provision of highly standardized products that require little human input to manage and are underwritten using statistical information. Small banks are more likely to engage in what is known as relationship banking, in which banks consider not only data models but also information acquired by working with the banking customer over time. Using this banking model, small banks may be able to extend credit to customers such as small business owners who might not receive a loan from a larger bank. Small business lending appears to be an important activity for community banks. As of June 2017, community banks had almost $300 billion outstanding in loans with an original principal balance of under $1 million (which banking regulators define as small business lending), or about 20 percent of these institutions' total lending. In that same month, non-community banks had about $390 billion outstanding in business loans under $1 million, representing 5 percent of their total lending. Credit unions are nonprofit member-owned institutions that take deposits and make loans. Unlike banks, credit unions are subject to limits on their membership because members must have a "common bond"—for example, working for the same employer or living in the same community. Financial reports submitted to NCUA (the regulator that oversees federally insured credit unions) indicated that of the more than 6,000 credit unions in the United States, 90 percent had assets below about $393 million as of March 2016. In addition to providing consumer products to their members, credit unions are also allowed to make loans for business activities subject to certain restrictions. A member business loan is defined as a loan, line of credit, or letter of credit that a credit union extends to a borrower for a commercial, industrial, agricultural, or professional purpose, subject to certain exclusions. In accordance with rules effective January 2017, the total amount of business lending a credit union can do generally may not exceed 1.75 times the credit union's actual net worth.
Overview of Federal Financial Regulators for Community Banks and Credit Unions
Federal banking and credit union regulators have responsibility for ensuring the safety and soundness of the institutions they oversee, protecting federal deposit insurance funds, promoting stability in financial markets, and enforcing compliance with applicable consumer protection laws. All depository institutions that have federal deposit insurance have a federal prudential regulator.
The regulator responsible for overseeing a community bank or credit union varies depending on how the institution is chartered, whether it is federally insured, and whether it is a Federal Reserve member (see table 1). Other federal agencies also impose regulatory requirements on banks and credit unions. These include rules issued by CFPB, which has supervision and enforcement authority for various federal consumer protection laws for depository institutions with more than $10 billion in assets and their affiliates. The Federal Reserve, OCC, FDIC, and NCUA continue to supervise for consumer protection compliance at institutions that have $10 billion or less in assets. Although community banks and credit unions with less than $10 billion in assets typically would not be subject to CFPB examinations, they generally are required to comply with CFPB rules related to consumer protection. In addition, FinCEN issues requirements that financial institutions, including banks and credit unions, must follow. FinCEN is a component of Treasury's Office of Terrorism and Financial Intelligence that supports government agencies by collecting, analyzing, and disseminating financial intelligence information to combat money laundering. It is responsible for administering the Bank Secrecy Act, which, with its implementing regulations, generally requires banks, credit unions, and other financial institutions to collect and retain various records of customer transactions, verify customers' identities in certain situations, maintain AML programs, and report suspicious and large cash transactions. FinCEN relies on financial regulators and others to examine U.S. financial institutions to determine compliance with these requirements. In addition, financial institutions have to comply with requirements of Treasury's Office of Foreign Assets Control to review transactions to ensure that business is not being done with sanctioned countries or individuals.
Recent Regulatory Changes
In response to the 2007-2009 financial crisis, Congress passed the Dodd-Frank Act, which became law on July 21, 2010. The act includes numerous reforms to strengthen oversight of financial services firms, including consolidating consumer protection responsibilities within CFPB. Under the Dodd-Frank Act, federal financial regulatory agencies were directed to or granted authority to issue hundreds of regulations to implement the act's reforms. Many of the provisions in the Dodd-Frank Act target the largest and most complex financial institutions, and regulators have noted that much of the act is not meant to apply to community banks. Although the Dodd-Frank Act exempts small institutions, such as community banks and credit unions, from several of its provisions, and authorizes federal regulators to provide small institutions with relief from certain regulations, it also contains provisions that impose additional restrictions and compliance costs on these institutions. As we reported in 2012, federal regulators, state regulatory associations, and industry associations collectively identified provisions within 7 of the act's 16 titles that they expected to affect community banks and credit unions.
The provisions they identified as likely to affect these institutions included some of the act's mortgage reforms, such as those requiring institutions to ensure that a consumer obtaining a residential mortgage loan has the reasonable ability to repay the loan at the time the loan is consummated; comply with a new CFPB rule that combines two different mortgage loan disclosures that had been required by the Truth-in-Lending Act and the Real Estate Settlement Procedures Act of 1974; and ensure that property appraisers are sufficiently independent. In addition to the regulations that have arisen from provisions in the Dodd-Frank Act, we reported that other regulations have created potential burdens for community banks. For example, the depository institution regulators also issued changes to the capital requirements applicable to these institutions. Many of these changes were consistent with the Basel III framework, which is a comprehensive set of reforms to strengthen global capital and liquidity standards issued by an international body consisting of representatives of many nations' central banks and regulators. These new requirements significantly changed the risk-based capital standards for banks and bank holding companies. As we reported in November 2014, officials interviewed from community banks did not anticipate any difficulties in meeting the U.S. Basel III capital requirements but expected to incur additional compliance costs. In addition to regulatory changes that could increase burden or costs on community banks, some of the Dodd-Frank Act provisions have likely resulted in reduced costs for these institutions. For example, revisions to the way that deposit insurance premiums are calculated reduced the amount paid by banks with less than $10 billion in assets by $342 million, or 33 percent, from the first to the second quarter of 2011 after the change became effective. Another change reduced the audit-related costs that some banks were incurring in complying with provisions of the Sarbanes-Oxley Act.
Prior Studies on Regulatory Burden Generally Focused on Costs
A literature search indicated that prior studies by other entities (including regulators, trade associations, and others) that examined how to measure regulatory burden generally focused on direct costs resulting from compliance with regulations, and our analysis of these studies identified various limitations that restrict their usefulness in assessing regulatory burden. For example, researchers commissioned by the Credit Union National Association, which advocates for credit unions, found costs attributable to regulations totaled a median of 0.54 percent of assets in 2014 for a non-random sample of the 53 small, medium, and large credit unions responding to a nationwide survey. However, one of the study's limitations was its use of a small, non-random sample of credit unions. In addition, the research was not designed to conclusively link changes in regulatory costs for the sampled credit unions to any one regulation or set of regulations. CFPB also conducted a study of regulatory costs associated with specific regulations applicable to checking accounts, traditional savings accounts, debit cards, and overdraft programs.
Through case studies involving 200 interviews with staff at seven commercial banks with assets over $1 billion, the agency's staff determined that the banks' costs related to ongoing regulatory compliance were concentrated in operations, information technology, human resources, and compliance and retail functions, with operations and information technology contributing the highest costs. While the study provides detailed information about the case study institutions, its reliance on a small sample of mostly large commercial banks limits the conclusions that can be drawn about banks' regulatory costs generally. In addition, the study notes several challenges to quantifying compliance costs that made its cost estimates subject to some measurement error, and the study's design limits the extent to which a causal relationship between financial regulations and costs could be fully established. Researchers from the Mercatus Center at George Mason University used a nongeneralizable survey of banks to find that respondents believed they were spending more money and staff time on compliance than before due to Dodd-Frank regulations. From the universe of banks with less than $10 billion in assets, the center's researchers used a non-random sample, sending a survey about the burden of complying with regulations arising from the Dodd-Frank Act to 500 banks and collecting 200 responses. The survey sought information on the respondents' characteristics, products, and services and the effects various regulatory and compliance activities had on operations and decisions, including those related to bank profitability, staffing, and products. About 83 percent of the respondents reported increased compliance costs of greater than or equal to 5 percent due to regulatory requirements stemming from the Dodd-Frank Act. The study's limitations include its non-random sample selection, small response rate, and use of questions that asked about the Dodd-Frank Act in general. In addition, the self-reported survey items used to capture regulatory burden—compliance costs and profitability—have an increased risk of measurement error, and the causal relationship between Dodd-Frank Act requirements and changes in these indicators is not well established.
Institutions Cited Mortgage and Anti-Money Laundering Regulations as Most Burdensome, although Others Noted Their Significant Public Benefits
Community bank and credit union representatives that we interviewed identified three sets of regulations as most burdensome to their institutions: (1) data reporting requirements related to loan applicants and loan terms under the Home Mortgage Disclosure Act of 1975 (HMDA); (2) transaction reporting and customer due diligence requirements as part of the Bank Secrecy Act and related anti-money laundering laws and regulations (collectively, BSA/AML); and (3) disclosures of mortgage loan fees and terms to consumers under the TILA-RESPA Integrated Disclosure (TRID) regulations. In focus groups and interviews, many of the institution representatives said these regulations were time-consuming and costly to comply with, in part because the requirements were complex, required preparation of individual reports that had to be reviewed for accuracy, or mandated actions within specific timeframes. However, federal regulators and consumer advocacy groups said that benefits from these regulations were significant.
HMDA Requirements Deemed Time Consuming by Institutions but Critical to Others Representatives of community banks and credit unions in all our focus groups and in most of our interviews told us that HMDA’s data collection and reporting requirements were burdensome. Under HMDA and its implementing Regulation C, banks and credit unions with more than $45 million in assets that do not meet regulatory exemptions must collect, record, and report to the appropriate federal regulator, data about applicable mortgage lending activity. For every covered mortgage application, origination, or purchase of a covered loan, lenders must collect information such as the loan’s principal amount, the property location, the income relied on in making the credit decision, and the applicants’ race, ethnicity, and sex. Institutions record this on a form called the loan/application register, compile these data each calendar year, and submit them to CFPB. Institutions have also been required to make these data available to the public upon request, after modifying them to protect the privacy of applicants and borrowers. Representatives of many community banks and credit unions with whom we spoke said that complying with HMDA regulations was time consuming. For example, representatives from one community bank we interviewed said it completed about 1,100 transactions that required HMDA reporting in 2016, and that its staff spent about 16 hours per week complying with Regulation C. In one focus group, participants discussed how HMDA compliance was time consuming because the regulations were complex, which made determining whether a loan was covered and should be reported difficult. As a part of that discussion, one bank representative told us that it was not always clear whether a residence that was used as collateral for a commercial loan was a reportable mortgage under HMDA. In addition, representatives in all of our focus groups in which HMDA was discussed and in some interviews said that they had to provide additional staff training for HMDA compliance. Among the 28 community banks and credit unions whose representatives commented on HMDA in our focus groups, 61 percent noted having to conduct additional HMDA-related training. In most of our focus groups and three of our interviews, representatives of community banks and credit unions also expressed concerns about how federal bank examiners review HMDA data for errors. When regulatory examiners conducting compliance examinations determine that an institution’s HMDA data has errors above prescribed thresholds, the institution has to correct and resubmit its data, further adding to the time required for compliance. While regulators have revised their procedures for assessing errors as discussed later, prior to 2018, if 10 percent or more of the loan/application registers that examiners reviewed had errors, an institution was required to review all of their data, correct any errors, and resubmit them. If 5 percent or more of the reviewed loan/application registers had errors in a single data field, an institution had to review all other registers and correct the data in that field. Participants in one focus group discussed how HMDA’s requirements left them little room for error and that they were concerned that examiners weigh all HMDA fields equally when assessing errors. 
For example, representatives of one institution noted that for purposes of fair lending enforcement, errors in fields such as race and ethnicity can be more important than errors in the action taken date (the field for the date when a loan was originated or when an application not resulting in an origination was received). Representatives of one institution also noted that they no longer have access to data submission software that allowed them to verify the accuracy of some HMDA data, and this has led to more errors in their submissions. Representatives of another institution told us that they had to have staff conduct multiple checks of HMDA data to ensure the data met accuracy standards, which added to the time needed for compliance. Representatives of many community banks and credit unions with whom we spoke also expressed concerns that compliance requirements for HMDA were increasing. The Dodd-Frank Act included provisions to expand the information institutions must collect and submit under HMDA, and CFPB issued rules implementing these new requirements that mostly became effective January 2018. In addition to certain new data requirements specified in the act, such as age and the total points and fees payable at origination, CFPB's amendments to the HMDA reporting requirements added other data points, including some intended to collect more information about borrowers, such as credit scores, as well as more information about the features of loans, such as fees and terms. In the final rule implementing the new requirements, CFPB also expanded the types of loans on which some institutions must report HMDA data to include open-end lines of credit and reverse mortgages. Participants in two of our focus groups with credit unions said reporting this expanded information will require more staff time and training and cause them to purchase new or upgraded computer software. In most of our focus groups, participants said that changes should be made to reduce the burdens associated with reporting HMDA data. For example, in some focus groups, participants suggested raising the threshold for institutions that have to file HMDA reports above the then-current $44 million in assets, which would reduce the number of small banks and credit unions that are required to comply. Representatives of two institutions noted that because small institutions make very few loans compared to large ones, their contribution to the overall HMDA data was of limited value in contrast to the significant costs to the institutions to collect and report the data. Another participant said their institution sometimes makes as few as three loans per month. In most of our focus groups, participants also suggested that regulators could collect mortgage data in other ways. For example, one participant discussed how it would be less burdensome for lenders if federal examiners collected data on loan characteristics during compliance examinations. However, staff of federal regulators and consumer groups said that HMDA data are essential for enforcement of fair lending laws and regulations. Representatives of CFPB, FDIC, NCUA, and OCC and groups that advocate for consumer protection issues said that HMDA data has helped address discriminatory practices. For example, some representatives noted a decrease in "redlining" (refusing to make loans to certain neighborhoods or communities).
CFPB staff noted that HMDA data provides transparency about lending markets, and that HMDA data from community banks and credit unions is critical for this purpose, especially in some rural parts of the country where they make the majority of mortgage loans. While any individual institution's HMDA reporting might not make up a large portion of HMDA data for an area, CFPB staff told us that if all smaller institutions were exempted from HMDA requirements, regulators would have little or no data on the types of mortgages or on lending patterns in some areas. Agency officials also told us that few good alternatives to HMDA data exist and that the current collection regime is the most effective available option for collecting the data. NCUA officials noted that collecting mortgage data directly from credit unions during examinations to enforce fair lending rules likely would be more burdensome for the institutions. CFPB staff and consumer advocates we spoke with also said that HMDA provides a low-cost data source for researchers and local policymakers, which leads to other benefits that cannot be directly measured but are included in HMDA's statutory goals—such as allowing local policymakers to target community investments to areas with housing needs. While representatives of some community banks and credit unions argued that HMDA data were no longer necessary because practices such as redlining have been reduced and they receive few requests for HMDA data from the public, representatives of some consumer advocate groups responded that eliminating the transparency that HMDA data creates could allow discriminatory practices to become more common. CFPB staff and representatives of one of these consumer groups also said that before the financial crisis of 2007–2009, some groups were not being denied credit outright but instead were given mortgages with terms, such as high interest rates, which made them more likely to default. The expanded HMDA data will allow regulators to detect such problematic practices in mortgage terms. CFPB and FDIC staff also told us that while lenders will have to collect and report more information, the new fields will add context to lending practices and should reduce the likelihood of incorrectly flagging institutions for potential discrimination. For example, with current data, a lender may appear to be denying mortgage applications from a particular racial or ethnic group, but with expanded data that includes applicant credit scores, regulators may determine that the denials were appropriate based on credit score underwriting. CFPB staff acknowledged that HMDA data collection and reporting may be time consuming, and said they have taken steps to reduce the associated burdens for community banks and credit unions. First, in its final rule implementing the Dodd-Frank Act's expanded HMDA data requirements, CFPB added exclusions for banks and credit unions that make very few mortgage loans. Effective January 2018, an institution will be subject to HMDA requirements only if it has originated at least 25 closed-end mortgage loans or at least 100 covered open-end lines of credit in each of the 2 preceding calendar years and also has met other applicable requirements. In response to concerns about the burden associated with the new requirement for reporting open-end lines of credit, in 2017 CFPB temporarily increased the threshold for collecting and reporting data for open-end lines of credit from 100 to 500 for the 2018 and 2019 calendar years.
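To make these loan-volume thresholds concrete, the sketch below applies the coverage test just described to hypothetical origination counts. It is a simplified illustration only: the counts are invented, and the rule's other applicable requirements and its separate application of the thresholds to closed-end and open-end reporting are collapsed into a single flag and a single result.

    # Illustrative sketch of the 2018 HMDA loan-volume thresholds described above.
    # The counts are hypothetical, and the rule's "other applicable requirements"
    # and its product-by-product application of the thresholds are simplified here.
    def subject_to_hmda(closed_end_by_year, open_end_by_year,
                        meets_other_requirements=True,
                        open_end_threshold=500):  # temporarily 500 for 2018-2019; otherwise 100
        closed_end_trigger = all(n >= 25 for n in closed_end_by_year)  # each of 2 preceding years
        open_end_trigger = all(n >= open_end_threshold for n in open_end_by_year)
        return meets_other_requirements and (closed_end_trigger or open_end_trigger)

    # A hypothetical small bank with 18 and 22 closed-end originations and a handful
    # of open-end lines in the 2 preceding years would fall under the exclusion.
    print(subject_to_hmda([18, 22], [3, 5]))   # False
    print(subject_to_hmda([40, 31], [3, 5]))   # True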
CFPB estimated that roughly 25 percent of covered depository institutions will no longer be subject to HMDA as a result of these exclusions. Second, the Federal Financial Institutions Examination Council (FFIEC), which includes CFPB, announced the new FFIEC HMDA Examiner Transaction Testing Guidelines that specify when agency examiners should direct an institution to correct and resubmit its HMDA data due to errors found during supervisory examinations. CFPB said these revisions should greatly reduce the burden associated with resubmissions. Under the revised standards, institutions will no longer be directed to resubmit all their HMDA data if they exceed the threshold for HMDA files with errors, but will still be directed to correct specific data fields that have errors exceeding the specified threshold. The revised guidelines also include new tolerances for some data fields, such as application date and loan amount. Third, CFPB introduced a new online system for submitting HMDA data in November 2017. CFPB staff said that the new system, the HMDA Platform, will reduce errors by including features to allow institutions to validate the accuracy and correct the formatting of their data before submitting. They also noted that this platform will reduce burdens associated with the previous system for submitting HMDA data. For example, institutions no longer will have to regularly download software, and multiple users within an institution will be able to access the platform. NCUA officials added that some credit unions had tested the system and reported that it reduced their reporting burden. Finally, on December 21, 2017, CFPB issued a public statement announcing that, for HMDA data collected in 2018, CFPB does not intend to require resubmission of HMDA data unless errors are material, and does not intend to assess penalties for errors in submitted data. CFPB also announced that it intends to open a rulemaking to reconsider various aspects of the 2015 HMDA rule, such as the thresholds for compliance and data points that are not required by statute.
Institutions Found BSA/AML Regulations Burdensome and Regulators Have Been Considering Steps to Reduce Burden
In all our focus groups and many of our interviews, participants said they found BSA/AML requirements to be burdensome due to the staff time and other costs associated with their compliance efforts. To provide regulators and law enforcement with information that can aid in pursuing criminal, tax, and regulatory investigations, BSA/AML statutes and regulations require covered financial institutions to file Currency Transaction Reports (CTR) for cash transactions conducted by a customer in aggregate amounts of more than $10,000 per day, file Suspicious Activity Reports (SAR) for activity that might signal criminal activity (such as money laundering or tax evasion), and establish BSA/AML compliance programs that include efforts to identify and verify customers' identities and to monitor transactions in order to report, for example, transactions that appear to violate federal law. Participants in all of our focus groups discussed how BSA/AML compliance was time-consuming, and in most focus groups participants said this took time away from serving customers. For example, representatives of one institution we interviewed told us that completing a single SAR could take 4 hours, and that they might complete 2 to 5 SARs per month. However, representatives of another institution said that at some times of the year it has filed more than 300 SARs per month.
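Returning to the CTR trigger described above, the following rough sketch shows how same-day cash transactions by a single customer aggregate against the $10,000 threshold; the transaction records are hypothetical, and the rule's exemption provisions are omitted.

    # Rough illustration of the daily CTR aggregation trigger discussed above.
    # Transactions are hypothetical (customer_id, date, cash_amount) records,
    # and the various CTR exemption provisions are omitted for simplicity.
    from collections import defaultdict

    CTR_THRESHOLD = 10_000  # dollars in aggregate cash transactions per customer per day

    def customer_days_needing_ctr(transactions):
        daily_totals = defaultdict(float)
        for customer_id, date, amount in transactions:
            daily_totals[(customer_id, date)] += amount
        return {key for key, total in daily_totals.items() if total > CTR_THRESHOLD}

    # Two same-day $6,000 deposits by one customer exceed the threshold in aggregate.
    sample = [("cust-1", "2017-10-02", 6_000), ("cust-1", "2017-10-02", 6_000),
              ("cust-2", "2017-10-02", 9_500)]
    print(customer_days_needing_ctr(sample))  # {('cust-1', '2017-10-02')}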
In a few cases, representatives of institutions saw BSA/AML compliance as burdensome because they had to take actions that seemed unnecessary based on the nature of the transactions. For example, one institution's representatives said that filing a CTR because a high school band deposited more than $10,000 after a fundraising activity seemed unnecessary, while another institution's representatives said that they did not see the need to file SARs for charitable organizations that are well known in their community. Representatives of institutions in most of our focus groups also noted that BSA/AML regulations required additional staff training. Some of these representatives noted that the requirements are complex and the activities, such as identifying transactions potentially associated with terrorism, are outside of their frontline staff's core competencies. Representatives in all focus groups and a majority of interviews said BSA imposes financial costs on community banks and credit unions that must be absorbed by those institutions or passed along to customers. In most of our focus groups, representatives said that they had to purchase or upgrade software systems to comply with BSA/AML requirements, which can be expensive. Some representatives also said they had to hire third parties to comply with BSA/AML regulations. Representatives of some institutions also noted that the compliance requirements do not produce any material benefits for their institutions. In most of our focus groups, participants were particularly concerned that the compliance burden associated with BSA/AML regulations was increasing. In 2016, FinCEN—the bureau in the Department of the Treasury that administers BSA/AML rules—issued a final rule that expanded due-diligence requirements for customer identification. The final rule was intended to strengthen customer identification programs by requiring institutions to obtain information about the identities of the beneficial owners of businesses opening accounts at their institutions. The institutions covered by the rule are expected to be in compliance by May 11, 2018. Some representatives of community banks and credit unions that we spoke with said that this new requirement will be burdensome. For example, one community bank's representatives said the new due-diligence requirements will require more staff time and training and cause them to purchase new or upgraded computer systems. Representatives of some institutions also noted that accessing beneficial ownership information about companies can be difficult, and that entities that issue business licenses or tax identification numbers could perform this task more easily than financial institutions. In some of our focus groups, and in some comment letters we reviewed that community banks and credit unions had submitted to bank regulators and NCUA as part of the EGRPRA process, representatives of these institutions said regulators should take steps to reduce the burdens associated with BSA/AML. Participants in two of our focus groups and representatives of two institutions we interviewed said that the $10,000 CTR threshold, which was established in 1972, should be increased, noting it had not been adjusted for inflation. One participant told us that if this threshold had been adjusted for inflation over time, the participant's institution likely would be filing about half the number of CTRs that it currently files.
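The inflation point can be illustrated with a simple calculation; the consumer price index values below are approximate figures assumed for illustration, not official data.

    # Rough illustration of the inflation argument above: what a $10,000 threshold
    # set in 1972 would be in 2017 dollars. CPI values are approximate assumptions.
    CPI_1972 = 41.8    # assumed approximate annual average CPI-U for 1972
    CPI_2017 = 245.1   # assumed approximate annual average CPI-U for 2017

    adjusted_threshold = 10_000 * (CPI_2017 / CPI_1972)
    print(f"${adjusted_threshold:,.0f}")  # roughly $58,600 under these assumptions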
In several focus groups, participants also indicated that transactions that must be checked against the Office of Foreign Assets Control list should be subject to a threshold amount. Representatives of one institution noted that they have to complete time-consuming compliance work for even very small transactions (such as less than $1). Representatives of some institutions suggested that the BSA/AML requirements be streamlined to make it easier for community banks and credit unions to comply. For example, representatives of one institution that participated in the EGRPRA review suggested that institutions could provide regulators with data on all cash transactions in the format in which they keep these records rather than filing CTRs. Finally, participants in one focus group said that regulators should better communicate how the information that institutions submit contributes to law enforcement successes in preventing or prosecuting crimes. Staff from FinCEN told us that the reports and due-diligence programs required in BSA/AML rules are critical to safeguarding the U.S. financial sector from illicit activity, including illegal narcotics and terrorist financing activities. They said they rely on CTRs and SARs that financial institutions file for the financial intelligence they disseminate to law enforcement agencies, and noted that they saw all BSA/AML requirements as essential because the activities are designed to complement each other. Officials also pointed out that those engaged in terrorism, human trafficking, or fraud all rely heavily on cash, and that reporting of frequent cash deposits makes tracking criminals easier. They said that significant reductions in BSA/AML reporting requirements would hinder law enforcement, especially because depositing cash through ATMs has become very easy. FinCEN staff said they utilize a continuous evaluation process to look for ways to reduce burden associated with BSA/AML requirements, and noted actions taken as a result. They said that FinCEN has several means of soliciting feedback about potential burdens, including through its Bank Secrecy Act Advisory Group, which consists of industry, regulatory, and law enforcement representatives who meet twice a year, and also through public reporting and comments received through FinCEN's regulatory process. FinCEN officials said that based on this advisory group's recommendations, the agency provided SAR filing relief by extending the filing interval for written SAR summaries on ongoing activity from 90 days to 120 days. FinCEN also has recognized that financial institutions do not generally see the beneficial impacts of their BSA/AML efforts, and officials said they have begun several different feedback programs to address this issue. FinCEN staff said they have been discussing ways to improve the CTR filing process, but in response to comments obtained as part of a recent review of regulatory burden, they noted that the staff of law enforcement agencies do not support changing the $10,000 threshold for CTR reporting. FinCEN officials said that they have taken some steps to reduce the burden related to CTR reporting, such as by expanding the ability of institutions to seek CTR filing exemptions, especially for low-risk customers. FinCEN is also utilizing its advisory group to examine aspects of the CTR reporting obligations to assess ways to reduce reporting burden, but officials said it is too early to know the outcomes of the effort.
However, FinCEN officials said that while evaluation of certain reporting thresholds may be appropriate, any changes to them or to other CTR requirements to reduce burden on financial institutions must still meet the needs of regulators and law enforcement, and prevent misuse of the financial system. FinCEN staff also said that some of the concerns raised about the upcoming requirements on beneficial ownership may be based on misunderstandings of the rule. FinCEN officials told us that under the final rule, financial institutions can rely on the beneficial ownership information provided to them by the entity seeking to open the account. Under the final rule, the party opening an account on behalf of the legal entity customer is responsible for providing beneficial ownership information, and the financial institution may rely on the representations of the customer unless it has information that calls into question the accuracy of those representations. The financial institution does not have to confirm ownership; rather, it has to verify the identity of the beneficial owners as reported by the individual seeking to open the account, which can be done with photocopies of identifying documents such as a driver's license. FinCEN issued guidance explaining this aspect of the final rule in 2016.
Institutions Found New Mortgage Term Disclosure Rules Burdensome, but Some May Be Misinterpreting Requirements
In all of our focus groups and many of our interviews, representatives of community banks and credit unions said that new requirements mandating consolidated disclosures to consumers for mortgage terms and fees have increased the time their staff spend on compliance, increased the cost of providing mortgage lending services, and delayed the completion of mortgages for customers. The Dodd-Frank Act directed CFPB to issue new requirements to integrate mortgage loan disclosures that previously had been separately required by the Truth-in-Lending Act (TILA) and the Real Estate Settlement Procedures Act (RESPA), and their implementing regulations, Regulations Z and X, respectively. Effective in October 2015, the combined TILA-RESPA Integrated Disclosure (known as TRID) requires mortgage lenders to disclose certain mortgage terms, conditions, and fees to loan applicants during the origination process for certain mortgage loans and prescribes how the disclosures should be made. The disclosure provisions also require lenders, in the absence of specified exceptions, to reimburse or refund to borrowers portions of certain fees that exceed the estimates previously provided in order to comply with the revised regulations. Under TRID, lenders generally must provide residential mortgage loan applicants with two forms, and deliver these documents within specified time frames (as shown in fig. 1). Within 3 business days of an application and at least 7 business days before a loan is consummated, lenders must provide the applicant with the loan estimate, which includes estimates for all financing costs and fees and other terms and conditions associated with the potential loan. If circumstances change after the loan estimate has been provided (for example, if a borrower needs to change the loan amount), a new loan estimate may be required. At least 3 days before a loan is consummated, lenders must provide the applicant with the closing disclosure, which has the loan's actual terms, conditions, and associated fees.
If the closing disclosure is mailed to an applicant, lenders must wait an additional 3 days for the applicant to receive it before they can execute the loan, unless they can demonstrate that the applicant has received the closing disclosure. If the annual percentage rate or the type of loan changes after the closing disclosure is provided, or if a prepayment penalty is added, a new closing disclosure must be provided and a new 3-day waiting period is required. Other changes made to the closing disclosure require the provision of a revised closing disclosure, but a new 3-day waiting period is not required. If the fees in the closing disclosure are more than the fees in the loan estimate (subject to some exceptions and tolerances discussed later in this section), the lender must reimburse the applicant for the amount of the increase in order to comply with the applicable regulations. In all of our focus groups and most of our interviews, representatives of community banks and credit unions said that TRID has increased the time required to comply with mortgage disclosure requirements and increased the cost of mortgage lending. In half of our focus groups, participants discussed how they have had to spend additional time ensuring the accuracy of their initial estimates of mortgage costs, including fees charged by third parties, in part because they are now financially responsible for changes in fees during the closing process. Some participants also discussed how they have had to hire additional staff to meet TRID's requirements. In one focus group of community banks, participants described how mortgage loans frequently involve the use of multiple third parties, such as appraisers and inspectors, and obtaining accurate estimates of the amounts these parties will charge for their services within the 3-day period prescribed by TRID can be difficult. The community banks we spoke with also discussed how fees from these parties often change at closing, and ensuring an accurate estimate at the beginning of the process was not always possible. As a result, some representatives said that community banks and credit unions have had to pay to cure or correct the difference in changed third-party fees that are outside their control. In most of our focus groups and some of our interviews, representatives told us that this TRID requirement has made originating a mortgage more costly for community banks and credit unions. Community banks and credit unions in half of our focus groups and some of our interviews also told us that TRID's requirements are complex and difficult to understand, which adds to their compliance burden. Participants in one focus group noted that CFPB's final rule implementing TRID was very long—the rule available on CFPB's website is more than 1,800 pages including the rule's preamble—and has many scenarios that require different actions by mortgage lenders or trigger different responsibilities, as the following examples illustrate. Some fees in the loan estimate, such as prepaid interest, may be subsequently changed provided that the estimates were made in good faith. Other fees, such as for third-party services where the charge is not paid to the lender or the lender's affiliate, may be changed by as much as 10 percent in aggregate before the lender becomes liable for the difference.
However, for some charges the lender must reimburse or refund to the borrower portions of subsequent increases, such as fees paid to the creditor, mortgage broker, or a lender affiliate, without any percentage tolerance. Based on a poll we conducted in all six focus groups, 40 of 43 participants said that they had to provide additional training to staff to ensure that TRID’s requirements were understood, which takes additional time from serving customers. In all of our focus groups and most of our interviews, community banks and credit unions also said that TRID’s mandatory waiting periods and disclosure schedules increased the time required to close mortgage loans, which created burdens for the institutions and their customers. Several representatives we interviewed told us that TRID’s waiting periods led to delays in closings of about 15 days. The regulation mandates that mortgage loans generally cannot be consummated sooner than 7 business days after the loan estimate is provided to an applicant, and no sooner than 3 business days after the closing disclosure is received by the applicant. If the closing disclosure is mailed, the lender must add another 3 business days to the closing period to allow for delivery. Representatives in some of our focus groups said that when changes needed to be made to a loan during the closing period, TRID requires them to restart the waiting periods, which can increase delays. For example, if the closing disclosure had been provided, and the loan product needed to be changed, a new closing disclosure would have to be provided and the applicant given at least 3 days to review it. Some representatives we interviewed said that their customers are frustrated by these delays and would like to close their mortgages sooner than TRID allows. Others said that TRID’s waiting periods decreased flexibility in scheduling the closing date, which caused problems for homebuyers and sellers (for instance, because transactions frequently have to occur on the same day). However, CFPB officials and staff of a consumer group said that TRID has streamlined previous disclosure requirements and is important for ensuring that consumers obtaining mortgages are protected. CFPB reported that for more than 30 years lenders have been required by law to provide mortgage disclosures to borrowers, and CFPB staff noted that prior time frames were similar to those required by TRID and Regulation Z. CFPB also noted that information on the disclosure forms that TRID replaced was sometimes overlapping, used inconsistent terminology, and could confuse consumers. In addition, CFPB staff and staff of a consumer group said that the previous disclosures allowed some mortgage-related fees to be combined, which prevented borrowers from knowing what charges for specific services were. They said that TRID disclosures better highlight important items for home buyers, allowing them to more readily compare loan options. Furthermore, CFPB staff told us that before TRID, lenders and other parties commonly increased a mortgage loan’s fees during the closing process, and then gave borrowers a “take it or leave it” choice just before closing. As a result, borrowers often just accepted the increased costs. CFPB representatives said that TRID protects consumers from this practice by shifting the responsibility for most fee increases to lenders, and increases transparency in the lending process. 
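As a rough sketch of how that responsibility works under the tolerance categories described earlier in this section, the code below computes the reimbursement (the cure) a lender would owe for a hypothetical set of fee increases; how each fee is categorized, and the amounts themselves, are illustrative assumptions rather than the rule's full definitions.

    # Illustrative sketch of the TRID fee-tolerance categories discussed above:
    # zero tolerance for charges such as fees paid to the creditor or its affiliates,
    # a 10 percent aggregate tolerance for certain third-party charges, and no cap
    # on changes to charges such as prepaid interest estimated in good faith.
    # How each fee is categorized here is a simplified assumption.
    def required_cure(zero_tolerance, ten_percent_tolerance, unlimited_tolerance):
        # Each argument is a list of (estimated, actual) fee amounts in dollars.
        cure = sum(max(actual - estimated, 0) for estimated, actual in zero_tolerance)
        estimated_total = sum(estimated for estimated, _ in ten_percent_tolerance)
        actual_total = sum(actual for _, actual in ten_percent_tolerance)
        cure += max(actual_total - estimated_total * 1.10, 0)
        # Fees in the unlimited-tolerance category never require a cure.
        return round(cure, 2)

    # Hypothetical loan: an origination fee rises by $150 (zero tolerance), and
    # aggregated third-party charges rise from $1,000 to $1,150 ($50 over the 10% cushion).
    print(required_cure(zero_tolerance=[(900, 1_050)],
                        ten_percent_tolerance=[(600, 640), (400, 510)],
                        unlimited_tolerance=[(300, 420)]))  # 200.0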
CFPB staff told us that it is too early to definitively identify what impact TRID has had on borrowers' understanding of mortgage terms, but said that some information they have seen indicated that it has been helpful. For example, CFPB staff said that preliminary results from the National Survey of Mortgage Originations conducted in 2017 found that consumer confidence in mortgage lending increased. While CFPB staff said that this may indicate that TRID, which became effective in October 2015, has helped consumers better understand mortgage terms, they noted that the complete survey results are not expected to be released until 2018. CFPB staff said that these results should provide valuable information on how well consumers generally understood mortgage terms and whether borrowers were comparison shopping for loans that could be used to analyze TRID's effects on consumer understanding of mortgage products. CFPB staff also told us that complying with TRID should not result in significant time being added to the mortgage closing process. Based on the final rule, they noted that TRID's waiting periods should not lead to delays of more than 3 days. CFPB staff also pointed out that the overall 7-day waiting period and the 3-day waiting period can be modified or waived if the consumer has a bona fide personal financial emergency, and thus should not be creating delays for those consumers. To waive the waiting period, consumers have to provide the lender with a written statement that describes the emergency. CFPB staff also said that closing times are affected by a variety of factors and can vary substantially, and that the delays that community banks and credit unions we spoke with reported may not be representative of the experiences of other lenders. A preliminary CFPB analysis of industry-published mortgage closing data found that closing times increased after TRID was first implemented, but that the delays subsequently declined. CFPB staff also said that they plan to analyze closing times using HMDA data now that they are collecting these data, and that they expect that delays that community banks and credit unions may have experienced so far would decrease as institutions adjust to the new requirements. Based on our review of TRID's requirements and our discussions with community banks and credit unions, some of the burden these institutions described appeared to result from actions not required by the regulations, and representatives told us they still were confused about TRID requirements. For example, representatives of some institutions we interviewed said that they believed TRID requires the entire closing disclosure process to be restarted any time any changes were made to a loan's amount. CFPB staff told us that this is not the case, and that revised loan estimates can be made in such cases without additional waiting periods. Representatives of several other community banks and credit unions cited 5- and 10-day waiting periods not in TRID requirements, or believed that the 7-day waiting period begins after the closing disclosure is received by the applicant, rather than when the loan estimate is provided. Participants in one focus group said that they were confused about when to provide disclosures and what needs to be provided. Representatives of one credit union said that if they did not understand a requirement, it was in their best interest to delay closing to ensure they were in compliance.
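Because several of these misunderstandings involve the waiting periods themselves, a minimal timing sketch may help; it applies only the baseline rules summarized in this section (7 business days after the loan estimate is provided, 3 business days after the closing disclosure is received, plus 3 business days for mailed delivery) and ignores re-disclosure triggers, waivers, and holidays. The dates are hypothetical.

    # Minimal sketch of the baseline TRID waiting periods summarized in this section.
    # Business days are simplified to weekdays (no federal holidays), and re-disclosure
    # triggers and bona fide emergency waivers are ignored. Dates are hypothetical.
    from datetime import date, timedelta

    def add_business_days(start, days):
        current = start
        while days > 0:
            current += timedelta(days=1)
            if current.weekday() < 5:  # Monday through Friday
                days -= 1
        return current

    def earliest_consummation(loan_estimate_provided, closing_disclosure_sent, mailed=True):
        earliest_after_le = add_business_days(loan_estimate_provided, 7)
        received = (add_business_days(closing_disclosure_sent, 3)
                    if mailed else closing_disclosure_sent)
        earliest_after_cd = add_business_days(received, 3)
        return max(earliest_after_le, earliest_after_cd)

    # Loan estimate provided Monday, October 2, 2017; closing disclosure mailed Friday, October 20.
    print(earliest_consummation(date(2017, 10, 2), date(2017, 10, 20)))  # 2017-10-30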
CFPB staff said that they have taken several steps to help lenders understand TRID requirements. CFPB has published a Small Entity Compliance Guide and a Guide to the Loan Estimate and Closing Disclosure Forms. As of December 2017, these guides were accessible on a TRID implementation website that has links to other information about the rule, as well as blank forms and completed samples. CFPB staff told us that the bureau conducted several well-attended, in-depth webinars to explain different aspects of TRID, including one with more than 20,000 participants, and that recordings of the presentations remained available on the bureau’s TRID website. CFPB also encourages institutions to submit questions about TRID through the website, and the staff said that they review submitted questions for any patterns that may indicate that an aspect of the regulation is overly burdensome. However, the Mortgage Bankers Association reported that CFPB’s guidance for TRID had not met the needs of mortgage lenders. In a 2017 report on reforming CFPB, this association stated that timely and accessible answers to frequently asked questions about TRID were still needed, noting that while CFPB had assigned staff to answer questions, these answers were not widely circulated. The association also reported that it had made repeated requests for additional guidance related to TRID, but the agency largely did not respond with additional materials in response to these requests. Although we found that misunderstandings of TRID requirements could be creating unnecessary compliance burdens for some small institutions, CFPB had not assessed the effectiveness of the guidance it provided to community banks and credit unions. Under the Dodd-Frank Act, CFPB has a general responsibility to ensure its regulations are not unduly burdensome, and internal control standards direct federal agencies to analyze and respond to risks related to achieving their defined objectives. However, CFPB staff said that they have not directly assessed how well community banks and credit unions have understood TRID requirements and acknowledged that some of these institutions may be applying the regulations improperly. They said that CFPB intends to review the effectiveness of its guidance, but did not indicate when this review would be completed. Until the agency assesses how well community banks and credit unions understand TRID requirements, CFPB may not be able to effectively respond to the risk that some smaller institutions have implemented TRID incorrectly, unnecessarily burdening their staff and delaying consumers’ home purchases. Community Banks and Credit Unions Appeared to Be Receiving Applicable Regulatory Exemptions, but Expressed Concerns about Examiner Expectations We did not find that regulators directed institutions to comply with regulations from which they were exempt, although institutions were concerned about the appropriateness of examiner expectations. To provide regulatory relief to community banks and credit unions, Congress and regulators have sometimes exempted smaller institutions from the need to comply with all or part of some regulations. Such exemptions are often based on the size of the financial institution or the level of particular activities. For example, CFPB exempted institutions with less than $45 million in assets and fewer than 25 closed-end mortgage loans or 500 open-end lines of credit from the expanded HMDA reporting requirements. 
In January 2013, CFPB also included exemptions for some institutions in a rule related to originating loans that have certain characteristics—known as qualified mortgages—in order for the institutions to receive certain liability protections if the loans later go into default. To qualify for this treatment, the lenders must make a good faith effort to determine a borrower's ability to repay a loan and the loan must not include certain risky features (such as interest-only or balloon payments). In its final rule, CFPB included exemptions that allow small creditors to originate loans with certain otherwise restricted features (such as balloon payments) that are still considered qualified mortgage loans. Concerns expressed to legislators about exemptions not being applied appeared to be based on misunderstandings of certain regulations. For example, in June 2016, a bank official testified that he thought his bank would be exempt from all of CFPB's requirements. However, CFPB's rules applicable to banks apply generally to all depository institutions, although CFPB only conducts compliance examinations for institutions with assets exceeding $10 billion. The depository institution regulators continue to examine institutions with assets below this amount (the overwhelming majority of banks and credit unions) for compliance with regulations issued by CFPB. Although not generalizable, our analysis of selected examinations did not find that regulators directed institutions to comply with requirements from which they were exempt. In our interviews with representatives from 17 community banks and credit unions, none of the institutions' representatives identified any cases in which regulators required their institution to comply with a regulatory requirement from which they should have been exempt. We also randomly selected and reviewed examination reports and supporting material for 28 examinations conducted by the regulators to identify any instances in which the regulators had not applied exemptions. From our review of the 28 examinations, we found no instances in the examination reports or the scoping memorandums indicating that examiners had required these institutions to comply with the regulations covered by the eight selected exemptions. Because of the limited number of the examinations we reviewed, we cannot generalize our findings to the regulatory treatment of all institutions qualifying for exemptions. Although they did not identify issues relating to exemptions, representatives of community banks and credit unions in about half of our interviews and focus groups expressed concerns that their regulators expected them to follow practices they did not feel corresponded to the size or risks posed by their institutions. For example, representatives from one institution we interviewed said that examiners directed them to increase BSA/AML activities or staff, but the representatives did not see such expectations as appropriate for institutions of their size. Similarly, in public forums held by regulators as part of their EGRPRA reviews (discussed in the next section), a few bank representatives stated that regulators sometimes considered compliance activities by large banks to be best practices, and then expected smaller banks to follow such practices.
However, the institution representatives in the public forums and in our interviews and focus groups who said that regulators' expectations for their institutions sometimes were not appropriate did not identify specific regulations or practices they had been asked to consider following when citing these concerns. To help ensure that applicable exemptions and regulatory expectations are appropriately applied, federal depository institution regulators told us they train their staff in applicable requirements and conduct senior-level reviews of examinations to help ensure that examiners apply only appropriate requirements and expectations to banks and credit unions. Regulators said that they do not conduct examinations in a one-size-fits-all manner, and aim to ensure that community banks and credit unions are held to standards appropriate to their size and business model. To achieve this, they said that examiners undergo rigorous training. For example, FDIC staff said that its examiners have to complete four core trainings and then receive ongoing on-the-job instruction. Each of the four regulators also said they have established quality assurance programs to review and assess their examination programs periodically. For example, each Federal Reserve Bank reviews its programs for examination inconsistency, and Federal Reserve Board staff conduct continuous and point-in-time oversight reviews of Reserve Banks' examination programs to identify issues or problems, such as examination inconsistency. The depository institution regulators also said that they have processes for depository institutions to appeal examination findings if they feel they were held to inappropriate standards. In addition to less formal steps, such as contacting a regional office, each of the four regulators has an ombudsman office to which institutions can submit complaints or concerns about examination findings. The staffs of these offices are independent of the regulators' management and work with the depository institutions to resolve examination issues and concerns. If the ombudsman is unable to resolve the complaints, the institutions can further appeal their complaints through established processes. Reviews of Regulations Resulted in Some Reduction in Burden, but the Reviews Have Limitations Federal depository institution regulators address the regulatory burden of their regulated institutions through the rulemaking process and also through retrospective reviews that may provide some regulatory relief to community banks. However, the retrospective review process has weaknesses that limit its effectiveness in assessing and addressing regulatory burden on community banks and credit unions. Mechanisms for Regulators to Address Regulatory Burden Include Mandated Decennial Reviews Federal depository institution regulators can address the regulatory burden of their regulated institutions throughout the rulemaking process and through mandated retrospective, or "look back," reviews. According to the regulators, attempts to reduce regulatory burden start during the initial rulemaking process. Staff from FDIC, the Federal Reserve, NCUA, and OCC all noted that when promulgating rules, their staff seek input from institutions and others throughout the process to design requirements that achieve the goals of the regulation at the most reasonable cost and effort for regulated entities. Once a rule has been drafted, the regulators publish it in the Federal Register for public comment.
The staff noted that regulators often make revisions in response to the comments received to try to reduce compliance burdens in the final regulation. After regulations are implemented, banking regulators also address regulatory burdens by periodically conducting mandated reviews of their regulations. The Economic Growth and Regulatory Paperwork Reduction Act of 1996 (EGRPRA) directs three regulators (Federal Reserve, FDIC, and OCC, as agencies represented on the Federal Financial Institutions Examination Council) to review at least every 10 years all of their regulations and through public comment identify areas of the regulations that are outdated, unnecessary or unduly burdensome on insured depository institutions. Under the act, the regulators are to categorize their regulations and provide notice and solicit public comment on all the regulations for which they have regulatory authority. The act also includes a number of requirements on how the regulators should conduct the review, including reporting results to Congress. The first EGRPRA review was completed in 2007. The second EGRPRA review began in 2014 and the report summarizing its results was submitted to Congress in March 2017. While NCUA is not required to participate in the EGRPRA review (because EGRPRA did not include the agency in the list of agencies that must conduct the reviews), NCUA has been participating voluntarily. NCUA’s assessment of its regulations appears in separate sections of the reports provided to Congress for each of the 2007 and 2017 reviews. Bank Regulators’ 2017 EGRPRA Review Process and Results Regulators began the most recent EGRPRA review by providing notice and soliciting comments in 2014–2016. The Federal Reserve, FDIC, and OCC issued four public notices in the Federal Register seeking comments from regulated institutions and interested parties on 12 categories of regulations they promulgated. The regulators published a list of all the regulations they administer in the notices and asked for comments, including comments on the extent to which regulations were burdensome. Although not specifically required under EGRPRA, the regulators also held six public meetings across the country with several panels of banks and community groups. At each public meeting, at least three panels of bank officials represented banks with assets of generally less than $5 billion and a large number of the panels included banks with less than $2 billion in assets. Panels were dedicated to specific regulations or sets of regulations. For example, one panel covered capital-related rules, consumer protection, and director-related rules, and another addressed BSA/AML requirements. Although panels were dedicated to specific regulations or sets of regulations, the regulators invited comment on all of their regulations at all public meetings. The regulators then assessed the public comments they received and described actions they intended to take in response. EGRPRA requires that the regulators identify the significant issues raised by the comments. The regulators generally deemed the issues that received the most public comments as significant. For the 2017 report, representatives at the Federal Reserve, FDIC, and OCC reviewed, evaluated, and summarized more than 200 comment letters and numerous oral comments they received. 
For interagency regulations that received numerous comments, such as those relating to capital and BSA/AML requirements, the comment letters for each were provided to staff of one of the three regulators or to previously established interagency working groups to conduct the initial assessments. The regulators' comment assessments also included reviews by each agency's subject-matter experts, who prepared draft summaries of the concerns and proposed agency responses for each of the rules that received comments. According to one bank regulator, the subject-matter experts assessed the comments across three aspects: (1) whether a suggested change to the regulation would reduce bank burdens; (2) how the change to the regulation would affect the safety and soundness of the banking system; and (3) whether a statutory change would be required to address the comment. The summaries drafted by the subject-matter experts then were shared with staff representing all three regulators and further revised. The staff of the three regulators said they then met jointly to analyze the merits of the comments and finalize the comment responses and the proposed actions for approval by senior management at all three regulators. In the 2017 report summarizing their assessment of the comments received, the regulators identified six significant areas in which commenters raised concerns: (1) capital rules, (2) financial condition reporting (Call Reports), (3) appraisal requirements, (4) examination frequency, (5) Community Reinvestment Act, and (6) BSA/AML. Based on our analysis of the 2017 report, the Federal Reserve, FDIC, and OCC had taken or pledged to take actions to address 11 of the 28 specific concerns commenters had raised across these six areas. We focused our analysis on the issues within these six significant areas that affected smaller institutions, and we defined an action taken by the regulators as a change or revision to a regulation or the issuance of guidance. Capital rules. The regulators noted in the 2017 EGRPRA report that they received comment letters from more than 30 commenters on the recently revised capital requirements. Although some of the concerns commenters expressed related to issues affecting large institutions, some commenters sought to have regulators completely exempt smaller institutions from the requirements. Others objected to the amounts of capital that had to be held for loans involving more volatile commercial real estate. In response, the regulators stated that the more than 500 failures of banks in the recent crisis, most of which were community banks, justified requiring all banks to meet the new capital requirements. However, they pledged in the report to make some changes, and have recently proposed rules that would alter some of the requirements. On September 27, 2017, for example, the regulators proposed several revisions to the capital requirements that would apply to banks not subject to the advanced approaches requirements under the capital rules (generally, banks with less than $250 billion in assets and less than $10 billion in total foreign exposure). Among other things, the proposed rule would simplify the capital treatment for certain commercial acquisition, development, and construction loans and would change the treatment of mortgage servicing assets. Call Reports. The regulators also received more than 30 comments relating to the reports—known as Call Reports—that banks file with the regulators outlining their financial condition and performance.
Generally, the commenters requested relief (reducing the number of items required to be reported) for smaller banks and also asked that the frequency of reporting for some items be reduced. In response to these concerns, the regulators described a review of the Call Report requirements intended to reduce the number of items to be reported to the regulators. The regulators had started this effort to address Call Report issues soon after the most recent EGRPRA process began in June 2014. In the 2017 EGRPRA report, the regulators noted that they developed a new Call Report form for banks with assets of less than $1 billion and domestic offices only. According to the regulators, the new form reduced the number of items such banks had to report by 40 percent. Staff from the regulators told us that about 3,500 banks used the new small-bank reporting form in March 2017, which represented about 68 percent of the banks eligible to use the new form. OCC officials told us that an additional 100 federally chartered banks submitted the form for the 2017 second quarter reporting period. After the issuance of the 2017 EGRPRA report, in June 2017 the regulators issued additional proposed revisions to the three Call Report forms that banks are required to complete. These proposed changes are to become effective in June 2018. For example, one of the proposed changes to the new community bank Call Report form would change the frequency of reporting certain data on non-accrual assets—nonperforming loans that are not generating their stated interest rate—from quarterly to semiannually. In November 2017, the agencies issued further proposed revisions to the community bank Call Report that would delete or consolidate a number of items and add new, or raise certain existing, reporting thresholds. These proposed revisions would take effect as of June 2018. Appraisals. The three bank regulators and NCUA received more than 160 comments during the 2017 EGRPRA process related to appraisal requirements. The commenters included banks and others that sought to raise the size of the loans that require appraisals, and a large number of appraisers that objected to any changes in the requirements. According to the EGRPRA report, several professional appraiser associations argued that raising the threshold could undermine the safety and soundness of lenders and diminish consumer protection for mortgage financing. These commenters argued that increasing the thresholds could encourage banks to neglect collateral risk-management responsibilities. In response, in July 2017, the regulators proposed raising the threshold for when an appraisal is required for commercial real estate loans from $250,000 to $400,000. The regulators indicated that raising the current $250,000 appraisal threshold for 1-4 family residential mortgage loans would not be appropriate at this time because they believed that requiring appraisals for loans above that level increased the safety of those loans and better protected consumers, and because other participants in the housing market, such as the Department of Housing and Urban Development and the government-sponsored enterprises, also required appraisals for loans above that amount. However, the depository institution regulators included in the proposal a request for comment about the appraisal requirements for residential real estate and about what other factors banks think should be considered in setting the threshold for these loans.
As part of the 2017 EGRPRA process, the regulators also received comments indicating that banks in rural areas were having difficulty securing appraisers. In the EGRPRA report, the regulators acknowledged this difficulty, and in May 2017 the bank regulators and NCUA issued guidance on how institutions could obtain temporary waivers and use other means to expand the pool of persons eligible to prepare appraisals in cases in which suitable appraiser staff were unavailable. The agencies also responded to commenters who found the evaluation process confusing by issuing an interagency advisory on the process in March 2016. Evaluations may be used instead of an appraisal for certain transactions, including those under the threshold. Frequency of safety and soundness examinations. As part of the 2017 EGRPRA process, the agencies also received comments requesting that they raise the total asset threshold for an insured depository institution to qualify for the extended 18-month examination cycle from $1 billion to $2 billion and that they further extend the examination cycle from 18 months to 36 months. During the EGRPRA process, Congress took legislative action to reduce examination frequency for smaller, well-capitalized banks. In 2015, the FAST Act raised the threshold for the 18-month examination cycle from less than $500 million to less than $1 billion for certain well-capitalized and well-managed depository institutions with an "outstanding" composite rating and gave the agencies discretion to similarly raise this threshold for certain depository institutions with an "outstanding" or "good" composite rating. The agencies exercised this discretion and issued a final rule in 2016 making qualifying depository institutions with less than $1 billion in total assets eligible for an 18-month (rather than a 12-month) examination cycle. According to the EGRPRA report, agency staff estimated that the final rules allowed approximately 600 more institutions to qualify for an extended 18-month examination cycle, bringing the total number of qualifying institutions to 4,793. Community Reinvestment Act. The commenters in the 2017 EGRPRA process also raised various issues relating to the Community Reinvestment Act, including the geographic areas in which institutions were expected to provide loans to low- and moderate-income borrowers and whether credit unions should be required to comply with the act's requirements. The regulators noted that they were not intending to take any actions to revise regulations relating to this act because many of the revisions the commenters suggested would require changes to the statute (that is, legislative action). The regulators also noted that they had addressed some of the concerns by revising the Interagency Questions and Answers relating to this act in 2016. Furthermore, the agencies noted that they have been reviewing their existing examination procedures and practices to identify policy and process improvements. BSA/AML. The regulators also received a number of comments as part of the 2017 EGRPRA process on the burden institutions encounter in complying with BSA/AML requirements. These included the threshold for reporting currency transactions and suspicious activities. The regulators also received comments on both BSA/AML examination frequency and the frequency of safety and soundness examinations generally. Agencies typically review BSA/AML compliance programs during safety and soundness examinations.
As discussed previously, regulators allowed more institutions of outstanding or good composite condition to be examined every 18 months instead of every 12 months. Institutions that qualify for less frequent safety-and-soundness examinations also will be eligible for less frequent BSA/AML examinations. For the remainder of the issues raised by commenters, the regulators noted they do not have the regulatory authority to revise the requirements but provided the comments to FinCEN, which has authority for these regulations. A letter with FinCEN's response to the comments was included as an appendix of the EGRPRA report. In the letter, the FinCEN Acting Director stated that FinCEN would work through the issues raised by the comments with its advisory group consisting of regulators, law enforcement staff, and representatives of financial institutions. Additional Burden Reduction Actions. In addition to describing some changes in response to the comments deemed significant, the regulators' 2017 report also includes descriptions of additional actions the individual agencies have taken or planned to take to reduce the regulatory burden for banks, including community banks. The Federal Reserve Board noted that it changed its Small Bank Holding Company Policy Statement, which allows small bank holding companies to hold more debt than is permitted for larger bank holding companies. In addition, the Federal Reserve noted that it had made changes to certain supervisory policies, such as issuing guidance on assessing risk management for banks with less than $50 billion in assets and launching an electronic application filing system for banks and bank holding companies. OCC noted that it had issued two final rules amending its regulations for licensing/chartering and securities-related filings, among other things. According to OCC staff, the agency conducted an internal review of its agency-specific regulations, and many of the changes to these regulations came from that internal review. The agency also noted that it integrated its rules for national banks and federal savings associations where possible. In addition, OCC noted that it removed redundant and unnecessary information requests from those made to banks before examinations. FDIC noted that it had rescinded enhanced supervisory procedures for newly insured banks and reduced the consumer examination frequency for small and newly insured banks. Similar to OCC, FDIC is integrating its rules for state nonmember banks and state-chartered savings and loan associations. In addition, FDIC noted it had issued new guidance on banks' deposit insurance filings and reduced paperwork for new bank applications. NCUA 2017 EGRPRA Process and Results The 2017 report also presents the results of NCUA's concurrent efforts to obtain and respond to comments as part of the EGRPRA process. NCUA conducts its review separately from the bank regulators' review. In four Federal Register notices in 2015, NCUA sought comments on 76 regulations that it administers. NCUA received about 25 comments raising concerns about 29 of its regulations, most of which were submitted by credit union associations. NCUA received no comments on 47 regulations. NCUA's methodology for its regulatory review was similar to the bank regulators' methodology. According to NCUA, all comment letters responding to a particular notice were collected and reviewed by NCUA's Special Counsel to the General Counsel, an experienced, senior-level attorney with overall responsibility for EGRPRA compliance.
NCUA staff told us that the criteria applied by the Special Counsel in his review included relevance, the depth of understanding and analysis exhibited by the comment, and the degree to which multiple commenters expressed the same or similar views on an issue. The Special Counsel prepared a report summarizing the substance of each comment. The comment summary was reviewed by the General Counsel and then circulated to and reviewed by the NCUA Board members and their staff. NCUA identified in its report the following as significant issues relating to credit union regulation: (1) field of membership and chartering; (2) member business lending; (3) federal credit union ownership of fixed assets; (4) expansion of national credit union share insurance coverage; and (5) expanded powers for credit unions. For these, NCUA took various actions to address the issues raised in the comments. For example, NCUA modified and updated its field-of-membership requirements by revising the definitions of a local community, rural district, and underserved area, which provided greater flexibility to federal credit unions seeking to add a rural district to their field of membership. NCUA also eased some of the restrictions on member business lending to small businesses and raised some of the asset thresholds for what is defined as a small credit union so that fewer requirements would apply to these credit unions. Also, in April 2016, the NCUA Board issued a proposed rule that would eliminate the requirement that federal credit unions have a plan to achieve full occupancy of premises within an explicit time frame. The proposal would allow federal credit unions to plan for and manage their use of office space and related premises in accordance with their own strategic plans and risk-management policies. Bank Regulators and NCUA 2007 EGRPRA Review Process and Results The bank and credit union regulators' process for the 2007 EGRPRA review also began with Federal Register notices that requested comments on regulations. The regulators then reviewed and assessed the comments and issued a report to Congress in 2007 in which they noted actions they took in some of the areas raised by commenters. Our analysis of the regulators' responses indicated that the regulators took responsive actions in a few areas. The regulators noted they already had taken action in some cases (including after completion of a pending study and as a result of efforts to work with Congress to obtain statutory changes). However, for the remaining specific concerns, the four regulators indicated that they would not be taking actions. Similar to its response in 2017, NCUA discussed its responses to the significant issues raised about regulations in a separate section of the 2007 report. Our analysis indicated that NCUA took responsive actions in about half of the areas. For example, NCUA adjusted regulations in one case and, in another case, noted actions it had previously taken. For comments related to three other areas, NCUA took actions not reflected in the 2007 report because the actions were taken over a longer time frame (in some cases, after 8 years). In the remaining areas, NCUA deemed actions not desirable in four cases and outside of its authority in two other cases. Other Retrospective Reviews The bank regulators do not conduct other retrospective reviews of regulations outside of the EGRPRA process.
We requested information from the Federal Reserve, FDIC, and OCC about any discretionary regulatory retrospective reviews that they performed in addition to the EGRPRA review during 2012–2016. All three regulators reported to us that they have not conducted any retrospective regulatory reviews outside of EGRPRA since 2012. However, under the Regulatory Flexibility Act (RFA), federal agencies are required to conduct what are referred to as section 610 reviews. The purpose of these reviews is to determine whether certain rules should be continued without change, amended, or rescinded consistent with the objectives of applicable statutes, to minimize any significant economic impact of the rules upon a substantial number of small entities. Section 610 reviews are to be conducted within 10 years of an applicable rule's publication. As part of other work, we assessed the bank regulators' section 610 reviews and found that the Federal Reserve, FDIC, and OCC conducted retrospective reviews that did not fully align with the Regulatory Flexibility Act's requirements. Officials at each of the agencies stated that they satisfy the requirements to perform section 610 reviews through the EGRPRA review process. However, we found that the requirements of the EGRPRA reviews differ from those of the RFA-required section 610 reviews, and we made recommendations to these regulators to help ensure their compliance with this act in a separate report issued in January 2018. In addition to participating in the EGRPRA review, NCUA also reviews one-third of its regulations every year (each regulation is reviewed every 3 years). NCUA's "one-third" review employs a public notice and comment process similar to the EGRPRA review. If a specific regulation does not receive any comments, NCUA does not review the regulation. For the 2016 one-third review, NCUA did not receive comments on 5 of 16 regulations, and thus these regulations were not reviewed. NCUA made technical changes to 4 of the 11 regulations that received comments. In August 2017, NCUA staff announced that they had developed a task force for conducting additional regulatory reviews, including developing a 4-year agenda for reviewing and revising NCUA's regulations. The primary factors the staff said they intend to use to evaluate the regulations are the magnitude of the benefit and the degree of effort that credit unions must expend to comply with the regulations. Because the 4-year reviews will be conducted on all of NCUA's regulations, staff noted that the annual one-third regulatory review process will not be conducted again until 2020. Limitations of Reviews of Burden Include CFPB Exclusion and Lack of Quantitative Analysis Our analysis of the EGRPRA review found three limitations to the current process. CFPB Not Included and Significant Mortgage Regulations Not Assessed First, the EGRPRA statute does not include CFPB, and thus the significant mortgage-related regulations and other regulations that it administers—regulations that banks and credit unions must follow—were not included in the EGRPRA review. Under the Dodd-Frank Act, CFPB was given financial regulatory authority, including for regulations implementing the Home Mortgage Disclosure Act (Regulation C); the Truth-in-Lending Act (Regulation Z); and the Truth-in-Savings Act (Regulation DD).
These regulations apply to many of the activities that banks and credit unions conduct; the four depository institution regulators conduct the large majority of examinations of these institutions' compliance with these CFPB-administered regulations. However, EGRPRA was not amended after the Dodd-Frank Act to include CFPB as one of the agencies that must conduct the EGRPRA review. During the 2017 EGRPRA review, the bank regulators only requested public comments on consumer protection regulations for which they have regulatory authority. But the banking regulators still received some comments on the key mortgage regulations and the other regulations that CFPB now administers. Our review of 2017 forum transcripts identified almost 60 comments on mortgage regulations, such as HMDA and TRID. The bank regulators could not address these mortgage regulation-related comments because they no longer had regulatory authority over these regulations; instead, they forwarded these comment letters to CFPB staff. According to CFPB staff, their role in the most recent EGRPRA process was very limited. CFPB staff told us they had no role in assessing the public comments received for purposes of the final 2017 EGRPRA report. According to one bank regulator, the bank regulators did not share non-mortgage regulation-related letters with CFPB staff because those comment letters did not involve CFPB regulations. Another bank regulator told us that CFPB was offered the opportunity to participate in the outreach meetings and was kept informed of the EGRPRA review during the quarterly FFIEC meetings that occurred during the review. Before the report was sent to Congress, CFPB staff said that they reviewed several late-stage drafts, but generally limited their review to ensuring that references to CFPB's authority and regulations and its role in the EGRPRA process were properly characterized and explained. As a member of FFIEC, which issued the final report, CFPB's Director was given an opportunity to review the report again just prior to its approval by FFIEC. CFPB must conduct its own reviews of regulations after they are implemented. Section 1022(d) of the Dodd-Frank Act requires CFPB to conduct an assessment of each significant rule or order adopted by the bureau under federal consumer financial law. CFPB must publish a report of the assessment not later than 5 years after the effective date of such rule or order. The assessment must address, among other relevant factors, the rule's effectiveness in meeting the purposes and objectives of title X of the Dodd-Frank Act and specific goals stated by CFPB. The assessment also must reflect available evidence and any data that CFPB reasonably may collect. Before publishing a report of its assessment, CFPB must invite public comment on recommendations for modifying, expanding, or eliminating the significant rule or order. CFPB announced in Federal Register notices in spring 2017 that it was commencing assessments of rules related to Qualified Mortgage/Ability-to-Repay requirements, remittances, and mortgage servicing regulations. The notices described how CFPB planned to assess the regulations. In each notice, CFPB requested comment from the public on the feasibility and effectiveness of the assessment plan; data and other factual information that may be useful for executing the plan; recommendations to improve the plan and relevant data; and data and other factual information about the benefits, costs, impacts, and effectiveness of the significant rule.
Reports of these assessments are due in late 2018 and early 2019. According to CFPB staff, the requests for data and other factual information are consistent with the statutory requirement that the assessment reflect available evidence and any data that CFPB reasonably may collect. The Federal Register notices also describe other data sources that CFPB has in-house or has been collecting pursuant to this requirement. CFPB staff told us that they have not yet determined whether certain other regulations that apply to banks and credit unions, such as the revisions to TRID and HMDA requirements, will be designated as significant and thus subjected to the one-time assessments. CFPB staff also told us they anticipate that, within approximately 3 years after the effective date of a rule, the bureau generally will have determined whether the rule is a significant rule for section 1022(d) assessment purposes. In tasking the bank regulators with conducting the EGRPRA reviews, Congress indicated its intent was to require these regulators to review all regulations that could be creating undue burden on regulated institutions. According to a Senate committee report relating to EGRPRA, the purpose of the legislation was to minimize unnecessary regulatory impediments for lenders, in a manner consistent with safety and soundness, consumer protection, and other public policy goals, so as to produce greater operational efficiency. Some in Congress have recognized that the omission of CFPB from the EGRPRA process is problematic, and in 2015 legislation was introduced to require that CFPB—and NCUA—formally participate in the EGRPRA review. Currently, without CFPB's participation, key regulations that affect banks and credit unions may not be subject to the review process. In addition, these regulations may not be reviewed if CFPB does not deem them significant. Further, if reviewed, CFPB's mandate is for a one-time, not recurring, review. CFPB staff told us that the bureau has two additional initiatives designed to review its regulations, both of which have been announced in CFPB's spring and fall 2017 Semiannual Regulatory Agendas. First, CFPB launched a program to periodically review individual existing regulations—or portions of large regulations—to identify opportunities to clarify ambiguities, address developments in the marketplace, or modernize or streamline provisions. Second, CFPB launched an internal task force to coordinate and bolster its continuing efforts to identify and relieve regulatory burdens, including for small businesses such as community banks; this effort potentially will address any regulation under the agency's jurisdiction. Staff told us the agency has been considering suggestions it received from community banks and others on ways to reduce regulatory burden. However, CFPB has not provided public information specifically on the extent to which it intends to review regulations applicable to community banks, credit unions, and other institutions, or provided information on the timing and frequency of the reviews. In addition, it has not indicated the extent to which it will coordinate the reviews with the federal depository institution regulators as part of the EGRPRA reviews. Until CFPB publicly provides additional information indicating its commitment to periodically review the burden of all its regulations, community banks, credit unions, and other depository institutions may face diminished opportunities for relief from regulatory burden.
Regulators Have Not Conducted or Reported Quantitative Analyses Second, the federal depository institution regulators have not conducted or reported on quantitative analyses during the EGRPRA process to help them determine if changes to regulations would be warranted. Our analysis of the 2017 report indicated that in responses to comments in which the regulators did not take any actions, the regulators generally only provided their arguments against taking actions and did not cite analysis or data to support their narrative. In contrast, other federal agencies that are similarly tasked with conducting retrospective regulatory reviews are required to follow certain practices for such reviews that could serve as best practices for the depository institution regulators. For example, the Office of Management and Budget’s Circular A-4 guidance on regulatory analysis notes that a good analysis is transparent and should allow qualified third parties reviewing such analyses to clearly see how estimates and conclusions were determined. In addition, executive branch agencies that are tasked under executive orders to conduct retrospective reviews of regulations they issue generally are required under these orders to collect and analyze quantitative data as part of assessing the costs and benefits of changing existing regulations. However, EGRPRA does not require the regulators to collect and report on any quantitative data they collected or analyzed as part of assessing the potential burden of regulations. Conducting and reporting on how they analyzed the impact of potential regulatory changes to address burden could assist the depository institution regulators in conducting their EGRPRA reviews. For example, as discussed previously, Community Reinvestment Act regulations were deemed a significant issue, with commenters questioning the relevance of requiring small banks to make community development loans and suggesting that the asset threshold for this requirement be raised from $1 billion to $5 billion. The regulators told us that if the thresholds were raised, then community development loans would decline, particularly in underserved communities. However, regulators did not collect and analyze data for the EGRPRA review to determine the amount of community development loans provided by banks with assets of less than $1 billion; including a discussion of quantitative analysis might have helped show that community development loans from smaller community banks provided additional credit in communities—and thus helped to demonstrate the benefits of not changing the requirement as commenters requested. By not performing and reporting quantitative analyses where appropriate in the EGRPRA review, the regulators may be missing opportunities to better assess regulatory impacts after a regulation has been implemented, including identifying the need for any changes or benefits from the regulations and making their analyses more transparent to stakeholders. As the Office of Management and Budget’s Circular A-4 guidance on the development of regulatory analysis noted, sound quantitative estimates of costs and benefits, where feasible, are preferable to qualitative descriptions of benefits and costs because they help decision makers understand the magnitudes of the effects of alternative actions. By not fully describing their rationale for the analyses that supported their decisions, regulators may be missing opportunities to better communicate their decisions to stakeholders and the public. 
Reviews Have Not Considered Cumulative Effects of Regulations Lastly, in the EGRPRA process, the federal depository institution regulators have not assessed the ways that the cumulative burden of the regulations they administer may have created overlapping or duplicative requirements. Under the current process, the regulators have responded to issues raised about individual regulations based on comments they have received, not on bodies of regulations. However, congressional intent in tasking the depository institution regulators with the EGRPRA reviews was to ensure that they considered the cumulative effect of financial regulations. A 1995 Senate Committee on Banking, Housing, and Urban Affairs report stated that while no one regulation can be singled out as being the most burdensome, and most have meritorious goals, the aggregate burden of banking regulations ultimately affects a bank's operations, its profitability, and the cost of credit to customers. For example, financial regulations may have created overlapping or duplicative requirements in the areas of safety and soundness. One primary concern noted in the 2017 EGRPRA report was the amount of information or data banks are required to provide to regulators. For example, the cumulative burden of information collection was raised by commenters in relation to Call Reports, Community Reinvestment Act, and BSA/AML requirements. But in the EGRPRA report, the regulators did not examine how the various reporting requirements might relate to each other or how they might collectively affect institutions. In contrast, the executive branch agencies that conduct retrospective regulatory reviews must consider the cumulative effects of their own regulations, including cumulative burdens. For example, Executive Order 13563 directs agencies, to the extent practicable, to consider the costs of cumulative regulations. Executive Order 13563 does not apply to independent regulatory agencies such as the Federal Reserve, FDIC, OCC, NCUA, or CFPB. A memorandum from the Office of Management and Budget provided guidance to the agencies required to follow this order for assessing the cumulative burden and costs of regulations. The actions suggested for careful consideration include conducting early consultations with affected stakeholders to discuss potential interactions between rulemakings under consideration and existing regulations as well as other anticipated regulatory requirements. The executive order also directs agencies to consider regulations that appear to be attempting to achieve the same goal. However, researchers often acknowledge that cumulative assessments of burden are difficult. Nevertheless, until the Federal Reserve, FDIC, OCC, and NCUA identify ways to consider the cumulative burden of regulations, they may miss opportunities to streamline bodies of regulations to reduce the overall compliance burden on financial institutions, including community banks and credit unions. For example, regulations applicable to specific activities of banks, such as lending or capital, could be assessed to determine if they have overlapping or duplicative requirements that could be revised without materially reducing the benefits sought by the regulations. Conclusions New regulations for financial institutions enacted in recent years have helped protect mortgage borrowers, increase the safety and soundness of the financial system, and facilitate anti-terrorism and anti-money laundering efforts.
But the regulations also entail compliance burdens, particularly for smaller institutions such as community banks and credit unions, and the cumulative burden on these institutions can be significant. Representatives from the institutions with which we spoke cited three sets of regulations—HMDA, BSA/AML, and TRID—as most burdensome for reasons that included their complexity. In particular, the complexity of TRID regulations appears to have contributed to misunderstandings that in turn caused institutions to take unnecessary actions. While regulators have acted to reduce burdens associated with the regulations, CFPB has not assessed the effectiveness of its TRID guidance. Federal internal control standards require agencies to analyze and respond to risks to achieving their objectives, and CFPB's objectives include addressing regulations that are unduly burdensome. Assessing the effectiveness of TRID guidance represents an opportunity to reduce misunderstandings that create additional burden for institutions and also affect individual consumers (for instance, by delaying mortgage closings). The federal depository institution regulators (FDIC, the Federal Reserve, OCC, as well as NCUA) also have opportunities to enhance the activities they undertake during EGRPRA reviews. Congress intended that the burden of all regulations applicable to depository institutions would be periodically assessed and reduced through the EGRPRA process. But because CFPB has not been included in this process, the regulations for which it is responsible were not assessed, and CFPB has not yet provided public information about what regulations it will review, when it will review them, and whether it will coordinate with other regulators during EGRPRA reviews. Until such information is publicly available, the extent to which the regulatory burden of CFPB regulations will be periodically addressed remains unclear. The effectiveness of the EGRPRA process also has been hampered by other limitations, including the depository institution regulators not conducting and reporting on quantitative analyses and not assessing the cumulative effect of regulations on institutions. Addressing these limitations in their EGRPRA processes likely would make the analyses the regulators perform more transparent and potentially result in additional burden reduction. Recommendations for Executive Action We make a total of 10 recommendations, which consist of 2 recommendations to CFPB, 2 to FDIC, 2 to the Federal Reserve, 2 to OCC, and 2 to NCUA. The Director of CFPB should assess the effectiveness of TRID guidance to determine the extent to which TRID's requirements are accurately understood and take steps to address any issues as necessary. (Recommendation 1) The Director of CFPB should issue public information on its plans for reviewing regulations applicable to banks and credit unions, including information describing the scope of regulations, the timing and frequency of the reviews, and the extent to which the reviews will be coordinated with the federal depository institution regulators as part of their periodic EGRPRA reviews. (Recommendation 2) The Chairman, FDIC, should, as part of the EGRPRA process, develop plans for their regulatory analyses describing how they will conduct and report on quantitative analysis whenever feasible to strengthen the rigor and transparency of the EGRPRA process.
(Recommendation 3) The Chairman, FDIC, should, as part of the EGRPRA process, develop plans for conducting evaluations that would identify opportunities for streamlining bodies of regulation. (Recommendation 4) The Chair, Board of Governors of the Federal Reserve System, should, as part of the EGRPRA process, develop plans for their regulatory analyses describing how they will conduct and report on quantitative analysis whenever feasible to strengthen the rigor and transparency of the EGRPRA process. (Recommendation 5) The Chair, Board of Governors of the Federal Reserve System, should, as part of the EGRPRA process, develop plans for conducting evaluations that would identify opportunities to streamline bodies of regulation. (Recommendation 6) The Comptroller of the Currency should, as part of the EGRPRA process, develop plans for their regulatory analyses describing how they will conduct and report on quantitative analysis whenever feasible to strengthen the rigor and transparency of the EGRPRA process. (Recommendation 7) The Comptroller of the Currency should, as part of the EGRPRA process, develop plans for conducting evaluations that would identify opportunities to streamline bodies of regulation. (Recommendation 8) The Chair of NCUA should, as part of the EGRPRA process, develop plans for their regulatory analyses describing how they will conduct and report on quantitative analysis whenever feasible to strengthen the rigor and transparency of the EGRPRA process. (Recommendation 9) The Chair of NCUA should, as part of the EGRPRA process, develop plans for conducting evaluations that would identify opportunities to streamline bodies of regulation. (Recommendation 10) Agency Comments and Our Evaluation We provided a draft of this report to CFPB, FDIC, FinCEN, the Federal Reserve, NCUA, and OCC. We received written comments from CFPB, FDIC, the Federal Reserve, NCUA, and OCC that we have reprinted in appendixes II through VI, respectively. CFPB, FDIC, FinCEN, the Federal Reserve, NCUA, and OCC also provided technical comments, which we incorporated as appropriate. In its written comments, CFPB agreed with the recommendation to assess its TRID guidance to determine the extent to which it is understood. CFPB stated it intends to solicit public input on how it can improve its regulatory guidance and implementation support. In addition, CFPB agreed with the recommendation on issuing public information on its plan for reviewing regulations. CFPB committed to developing additional plans with respect to its reviews of key regulations and to publicly releasing such information; in the interim, CFPB stated it intends to solicit public input on how it should approach reviewing regulations. FDIC stated that it appreciated the two recommendations and that it would work with the Federal Reserve and OCC to find the most appropriate ways to ensure that the three regulators continue to enhance their rulemaking analyses as part of the EGRPRA process. In addition, FDIC stated that, as part of the EGRPRA review process, it would continue to monitor the cumulative effects of regulation through, for example, a review of the community and quarterly banking studies and community bank Call Report data. The Federal Reserve agreed with the two recommendations pertaining to the EGRPRA process.
Regarding the need to conduct and report on quantitative analysis whenever feasible to strengthen the rigor and increase the transparency of the EGRPRA process, the Federal Reserve plans to coordinate with FDIC and OCC to identify opportunities to conduct quantitative analyses where feasible during future EGRPRA reviews. With respect to the second recommendation, the Federal Reserve agreed that the cumulative impact of regulations on depository institutions is important and plans to coordinate with FDIC and OCC to identify further opportunities to seek comment on bodies of regulations and how they could be streamlined. NCUA acknowledged the report's conclusions that, as part of its voluntary compliance with the EGRPRA process, it should improve its quantitative analysis and develop plans for continued reductions to regulatory burden within the credit union industry. In its letter, NCUA noted it has appointed a regulatory review task force charged with reviewing and developing a 4-year plan for revising its regulations, and that the review will consider the benefits of NCUA's regulations as well as the burden they have on credit unions. In its written comments, OCC stated that it understood the importance of GAO's recommendations. OCC stated that it will consult and coordinate with the Federal Reserve and FDIC to develop plans for regulatory analysis, including how the regulators should conduct and report on quantitative analysis, and will also work with these regulators to increase the transparency of the EGRPRA process. OCC also stated it will consult with these regulators to develop plans, as part of the EGRPRA process, to conduct evaluations that identify ways to decrease the regulatory burden created by bodies of regulations. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to CFPB, FDIC, FinCEN, the Federal Reserve, NCUA, and OCC. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or evansl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. Appendix I: Objectives, Scope, and Methodology This report examines the burdens that regulatory compliance places on community banks and credit unions and actions that federal regulators have taken to reduce these burdens; specifically: (1) the financial regulations that community banks and credit unions reported viewing as the most burdensome, the characteristics of those regulations that make them burdensome, and the benefits associated with those regulations; and (2) federal financial regulators' efforts to reduce any existing regulatory burden on community banks and credit unions. To identify the regulations that community banks and credit unions viewed as the most burdensome, we first constructed sample frames of financial institutions that met certain criteria for being classified as community banks or community-focused credit unions for the purposes of this review. These sample frames were then used as the basis for drawing our non-probability samples of institutions for purposes of interviews, focus group participation, and document review.
Defining a community bank is important because, as we have reported, regulatory compliance may be more burdensome for community banks and credit unions than for larger banks because they are not as able to benefit from economies of scale in compliance resources. While there is no single consensus definition for what constitutes a community bank, we reviewed criteria for defining community banks developed by the Federal Deposit Insurance Corporation (FDIC), officials from the Independent Community Bankers Association, and the Office of the Comptroller of the Currency (OCC). Based on this review, we determined that institutions with the following characteristics would be the most appropriate to include in our universe of institutions: (1) fewer total assets, (2) engagement in traditional lending and deposit-taking activities, (3) limited geographic scope, and (4) noncomplex operating structures. To identify banks that met these characteristics, we began with all banks that filed a Consolidated Report of Condition and Income (Call Report) for the first quarter of 2016 (March 31, 2016) and are not themselves subsidiaries of another bank that filed a Call Report. We then excluded banks using an asset-size threshold to ensure that we included only small institutions. Based on interviews with regulators and our review of FDIC's community bank study, we targeted institutions with around $1 billion in assets as the group that could be relatively representative of the experiences of many community banks in complying with regulations. Upon review of the Call Report data, we found that banks at the 90th percentile by asset size had about $1.2 billion in assets, and we selected this as an appropriate cutoff for our sample frame. In addition, we excluded institutions with characteristics suggesting they do not engage in typical community banking activities, such as deposit-taking and lending, and those with characteristics suggesting they conduct more specialized operations not typical of community banking, such as credit card banks. To ensure that we excluded banks whose views of regulatory compliance might be influenced by being part of a large and/or complex organization, we also excluded banks with foreign offices and banks that are subsidiaries of either foreign banks or of holding companies with $50 billion or more in consolidated assets. Finally, as a practical matter, we excluded banks for which we could not obtain data on one or more of the characteristics listed below. We also relied on a similar framework to construct a sample frame for credit unions. We sought to identify credit unions that were relatively small, engaged in traditional lending and deposit-taking activities, and had limited geographic scope. To do this, we began with all insured credit unions that filed a Call Report for the first quarter of 2016 (March 31, 2016). We then excluded credit unions using an asset-size threshold of $860 million, which is the 95th percentile of credit unions by asset size, to ensure that we included only smaller institutions. The percentile used for credit unions was higher than the percentile used for banks because there are more large banks than there are large credit unions. We then excluded credit unions that did not engage in activities that are typical of community lending, such as taking deposits, making loans and leases, and providing consumer checking accounts, as well as those credit unions with headquarters outside of the United States.
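To illustrate the screening steps described above, the following sketch shows how such a sample frame could be assembled from Call Report data. It is a minimal illustration of the logic, not our actual analysis code; the file name and column names are hypothetical placeholders rather than actual Call Report field names.

```python
import pandas as pd

# Hypothetical first-quarter 2016 Call Report extract.
banks = pd.read_csv("call_report_2016q1.csv")

# Asset cutoff near the 90th percentile (roughly $1.2 billion).
asset_cutoff = banks["total_assets"].quantile(0.90)

community_bank_frame = banks[
    (banks["total_assets"] <= asset_cutoff)
    & banks["takes_deposits"]                    # traditional deposit taking
    & banks["makes_loans"]                       # traditional lending
    & ~banks["is_credit_card_bank"]              # exclude specialized operations
    & ~banks["has_foreign_office"]               # exclude banks with foreign offices
    & (banks["holding_company_assets"] < 50e9)   # exclude large parent organizations
]
```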
We assessed the reliability of data from FFIEC, FDIC, the Federal Reserve Bank of Chicago, and NCUA by reviewing relevant documentation and electronically testing the data for missing values or obvious errors, and we found the data from these sources to be sufficiently reliable for the purpose of creating sample frames of community banks and credit unions. The sample frames were then used as the basis for drawing our nonprobability samples of institutions for purposes of interviews and focus groups. To identify regulations that community banks and credit unions viewed as among the most burdensome, we conducted structured interviews and focus groups with a total sample of 64 community banks and credit unions. To reduce the possibility of bias, we selected the institutions to ensure that banks and credit unions with different asset sizes and from different regions of the country were included. We also included in the sample at least one institution overseen by each of the federal depository institution regulators: the Federal Reserve, FDIC, NCUA, and OCC. We interviewed 17 institutions (10 banks and 7 credit unions) about the regulations with which their institutions experienced the most compliance burden. On the basis of the results of these interviews, we determined that considerable consensus existed among these institutions as to which regulations were seen as most burdensome, including those relating to mortgage fees and terms disclosures to consumers, mortgage borrower and loan characteristics reporting, and anti-money laundering activities. As a result, we decided to conduct focus groups with institutions to identify the characteristics that made the regulations identified in our interviews burdensome. To identify the burdensome characteristics of the regulations identified in our preliminary interviews, we selected institutions to participate in three focus groups of community banks and three focus groups of credit unions. For the first focus group of community banks, we used the sample frame of community banks we developed to randomly select 20 banks from among 647 banks with between $500 million and $1 billion in assets located across nine U.S. census geographic areas, and we contacted them to ask for their participation. Seven of the 20 banks agreed to participate in the first focus group. However, mortgages represented a low percentage of the assets of two participants in the first focus group, so we revised our selection criteria because two of the regulations identified as burdensome were related to mortgages. For the remaining two focus groups with community banks, we randomly selected institutions with more than $45 million and no more than $1.2 billion in assets (to ensure that they would be required to comply with the mortgage characteristics reporting requirements) and with at least a 10 percent mortgage-to-asset ratio (to better ensure that they would be sufficiently experienced with mortgage regulations). After identifying a large percentage of FDIC-regulated banks among the first 20 banks we contacted, we decided to prioritize contact with banks regulated by OCC and the Federal Reserve for the institutions on our list. When banks declined or when we determined an institution had merged or been acquired, we selected a new institution from that state and gave preference to institutions regulated by OCC and the Federal Reserve. The three focus groups totaled 23 community banks with a range of assets. 
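The following sketch illustrates the eligibility screen and random draw described above. It is illustrative only, assuming a pandas data frame built from the sample frame; the column names and random seed are hypothetical and not drawn from our actual analysis.

```python
import pandas as pd

# Minimal sketch of the focus-group eligibility screen described above.
# Column names and the random seed are hypothetical placeholders.
def select_focus_group_candidates(sample_frame: pd.DataFrame, n: int = 20) -> pd.DataFrame:
    eligible = sample_frame[
        (sample_frame["total_assets"] > 45e6)      # subject to mortgage characteristics reporting
        & (sample_frame["total_assets"] <= 1.2e9)  # within the community bank asset cutoff
        & (sample_frame["mortgage_assets"] / sample_frame["total_assets"] >= 0.10)  # mortgage experience
    ]
    # Randomly draw candidates to contact; institutions that declined or had merged would be
    # replaced from the same state, preferring OCC- and Federal Reserve-regulated banks.
    return eligible.sample(n=min(n, len(eligible)), random_state=1)
```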
We used a similar selection process for three focus groups of credit unions consisting of 23 credit unions. We selected credit unions with at least $45 million in assets (so that they would be required to comply with the mortgage regulations) and with at least a 10 percent mortgage-to-asset ratio. During each of the focus groups, we asked the representatives from participating institutions what characteristics of the relevant regulations made them burdensome to comply with. We also polled them about the extent to which they had to take various actions to comply with regulations, including hiring or expanding staff resources, investing in additional information technology resources, or conducting staff training. During the focus groups, we also confirmed with the participants that the three sets of regulations (on mortgage fee and other disclosures to consumers, reporting of mortgage borrower and loan characteristics, and anti-money laundering activities) were generally the ones they found most burdensome. To identify in more detail the steps a community bank or credit union may take to comply with the regulations identified as among the most burdensome, we also conducted an in-depth on-site interview with one community bank. We selected this institution by limiting the community bank sample to only those banks that were in the middle 80 percent of the distribution in terms of assets, mortgage lending, small business lending, and lending in general and that were located no more than 70 miles from Washington, D.C. We limited the sample in this way to ensure that the institution was not an outlier in terms of activities or size, and to limit the travel resources needed to conduct the site visit. We also interviewed associations representing consumers to understand the benefits of these regulations. These groups were selected using professional judgment based on their knowledge of relevant banking regulations. We interviewed associations representing banks and credit unions. To identify the requirements of the regulations identified as among the most burdensome, we reviewed the Home Mortgage Disclosure Act (HMDA) and its implementing regulation, Regulation C; Bank Secrecy Act and anti-money laundering (BSA/AML) regulations, including those deriving from the Currency and Foreign Transactions Reporting Act, commonly known as the Bank Secrecy Act (BSA), and the 2001 USA PATRIOT Act; the Integrated Mortgage Disclosure Rule under the Real Estate Settlement Procedures Act (RESPA) with its implementing Regulation X; and the Truth in Lending Act (TILA) with its implementing Regulation Z. We reviewed the Consumer Financial Protection Bureau's (CFPB) small entity guidance and supporting materials on the TILA-RESPA Integrated Disclosure (TRID) regulation and HMDA to clarify the specific requirements of each rule and to analyze the information included in the CFPB guidance. We interviewed staff from each of the federal regulators responsible for implementing the regulations, as well as from the federal regulators responsible for examining community banks and credit unions. To identify the potential benefits of the regulations that were considered burdensome by community banks and credit unions, we interviewed representatives from four community groups to document their perspectives on the benefits provided by the identified regulations. 
To determine whether the bank regulators had required banks to comply with certain provisions from which the institutions might be exempt, we identified eight provisions of the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 from which community banks and credit unions should be exempt and reviewed a small group of the most recent examinations to identify instances in which a regulator may not have applied an exemption for which a bank was eligible. We reviewed 20 safety and soundness and consumer compliance examination reports of community banks and eight safety and soundness examination reports of credit unions. The bank examination reports we reviewed were for the first 20 community banks we contacted requesting participation in the first focus group. The bank examination reports included examinations from all three bank regulators (FDIC, Federal Reserve, and OCC). The NCUA examination reports we reviewed were for the eight credit unions that participated in the second focus group of credit unions. Because of the limited number of examinations we reviewed, we cannot generalize whether regulators extended the exemptions to all qualifying institutions. To assess the federal financial regulators' efforts to reduce the existing regulatory burden on community banks and credit unions, we identified the mechanisms the regulators used to identify burdensome regulations and actions to reduce potential burden. We reviewed laws and congressional and agency documentation. More specifically, we reviewed the Economic Growth and Regulatory Paperwork Reduction Act of 1996 (EGRPRA), which requires the Federal Reserve, FDIC, and OCC to review all their regulations every 10 years and identify areas of the regulations that are outdated, unnecessary, or unduly burdensome, and we reviewed the 1995 Senate Banking Committee report, which described the intent of the legislation. We reviewed the Federal Register notices that bank regulators and NCUA published requesting comments on their regulations. We also reviewed over 200 comment letters that the regulators had received through the EGRPRA process from community banks, credit unions, their trade associations, and others, as well as the transcripts of all six public forums regulators held as part of the 2017 EGRPRA regulatory review efforts they conducted. We analyzed the extent to which the depository institution regulators addressed the issues raised in comments received for the review. In assessing the 2017 and 2007 EGRPRA reports sent to Congress, we reviewed the significant issues identified by the regulators and determined the extent to which the regulators proposed or took actions in response to the comments relating to burden on small entities. We compared the requirements of Executive Orders 12866, 13563, and 13610 with the actions taken by the regulators in implementing their 10-year regulatory retrospective review. The executive orders included requirements on how executive branch agencies should conduct retrospective reviews of their regulations. For both objectives, we interviewed representatives from CFPB, FDIC, Federal Reserve, Financial Crimes Enforcement Network, NCUA, and OCC to identify any steps that regulators took to reduce the compliance burden associated with each of the identified regulations and to understand how they conduct retrospective reviews. 
We also interviewed representatives of the Small Business Administration's Office of Advocacy, which reviews and comments on the burdens of regulations affecting small businesses, including community banks. We conducted this performance audit from March 2016 to February 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Consumer Financial Protection Bureau Appendix III: Comments from the Board of Governors of the Federal Reserve System Appendix IV: Comments from the Federal Deposit Insurance Corporation Appendix V: Comments from the National Credit Union Administration Appendix VI: Comments from the Office of the Comptroller of the Currency Appendix VII: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Cody J. Goebel (Assistant Director); Nancy Eibeck (Analyst in Charge); Bethany Benitez; Kathleen Boggs; Jeremy A. Conley; Pamela R. Davidson; Courtney L. LaFountain; William V. Lamping; Barbara M. Roesmann; and Jena Y. Sinkfield made key contributions to this report.
Why GAO Did This Study In recent decades, many new regulations intended to strengthen financial soundness, improve consumer protections, and aid anti-money laundering efforts were implemented for financial institutions. Smaller community banks and credit unions must comply with some of the regulations, but compliance can be more challenging and costly for these institutions. GAO examined (1) the regulations community banks and credit unions viewed as most burdensome and why, and (2) efforts by depository institution regulators to reduce any regulatory burden. GAO analyzed regulations and interviewed more than 60 community banks and credit unions (selected based on asset size and financial activities), regulators, and industry associations and consumer groups. GAO also analyzed comment letters and transcripts addressing regulatory burden, as well as the reports regulators prepared in response to those comments. What GAO Found Interviews and focus groups GAO conducted with representatives of over 60 community banks and credit unions indicated that regulations for reporting mortgage characteristics, reviewing transactions for potentially illicit activity, and disclosing mortgage terms and costs to consumers were the most burdensome. Institution representatives said these regulations were time-consuming and costly to comply with, in part because the requirements were complex, required individual reports that had to be reviewed for accuracy, or mandated actions within specific timeframes. However, regulators and others noted that the regulations were essential to preventing lending discrimination and use of the banking system for illicit activity, and they were acting to reduce compliance burdens. Institution representatives also said that the new mortgage disclosure regulations increased compliance costs, added significant time to loan closings, and resulted in institutions absorbing costs when others, such as appraisers and inspectors, changed disclosed fees. The Consumer Financial Protection Bureau (CFPB) issued guidance and conducted other outreach to educate institutions after issuing these regulations in 2013. But GAO found that some compliance burdens arose from misunderstanding the disclosure regulations—which in turn may have led institutions to take actions not actually required. Assessing the effectiveness of the guidance for the disclosure regulations could help mitigate the misunderstandings and thus also reduce compliance burdens. Regulators of community banks and credit unions—the Board of Governors of the Federal Reserve, the Federal Deposit Insurance Corporation, the Office of the Comptroller of the Currency, and the National Credit Union Administration—conduct decennial reviews to obtain industry comments on regulatory burden. But the reviews, conducted under the Economic Growth and Regulatory Paperwork Reduction Act of 1996 (EGRPRA), had the following limitations: CFPB and the consumer financial regulations for which it is responsible were not included. Unlike executive branch agencies, the depository institution regulators are not required to analyze and report quantitative-based rationales for their responses to comments. Regulators do not assess the cumulative burden of the regulations they administer. 
CFPB has formed an internal group that will be tasked with reviewing regulations it administers, but the agency has not publicly announced the scope of regulations included, the timing and frequency of the reviews, or the extent to which they will be coordinated with the other federal banking and credit union regulators as part of their periodic EGRPRA reviews. Congressional intent in mandating that these regulators review their regulations was that the cumulative effect of all federal financial regulations be considered. In addition, sound practices required of other federal agencies call for agencies to analyze and report their assessments when reviewing regulations. Documenting in plans how the depository institution regulators would address these EGRPRA limitations would better ensure that all regulations relevant to community banks and credit unions were reviewed, likely improve the analyses the regulators perform, and potentially result in additional burden reduction. What GAO Recommends GAO makes a total of 10 recommendations to CFPB and the depository institution regulators. CFPB should assess the effectiveness of guidance on mortgage disclosure regulations, publicly issue its plans for the scope and timing of its regulation reviews, and coordinate these reviews with the other regulators' review process. As part of their burden reviews, the depository institution regulators should develop plans for reporting quantitative rationales for their actions and for addressing the cumulative burden of regulations. In written comments, CFPB and the four depository institution regulators generally agreed with the recommendations.
Background In fiscal year 2016, USPS handled over 1 billion pieces of international mail, which included over 976 million pieces of letter mail. According to USPS statistics, about 371 million pieces of letter mail (38 percent) were sent to other countries ("outbound" mail), while the majority of international letter mail, 605 million pieces (62 percent), was sent to the United States from other countries ("inbound" mail). UPU member nations agree to provide a "single postal territory" for international mail, meaning designated postal operators must deliver inbound international mail to the recipient in their own country (i.e., provide universal service). The UPU created the terminal dues system in 1969 to establish a means for paying destination countries' designated postal operators for the cost of delivering the mail that originated in another UPU member country. UPU member countries vote every 4 years on the annual payment rates. The current terminal dues system was designed as a single rate structure for the delivery of letter mail, regardless of shape (i.e., letters, flats, or packets). Under the system, however, the rate varies based on the UPU's estimation of each nation's postal cost structure and economic development. As such, the UPU divides its member countries into country groups based on the UPU's "postal development indicator," which is largely based on gross national income per capita and attempts to factor in the cost to deliver a letter based on statistics from the United Nations, the World Bank, and the UPU. Designated postal operators from transitional (formerly called "developing") countries generally pay a lower terminal dues rate for their mail to be delivered by designated postal operators in target (formerly called "industrialized") countries (see app. II for a list of the countries in each UPU country group). The UPU limits the rates through caps and floors to minimize year-over-year variability. In addition to USPS, numerous domestic stakeholders are affected by the terminal dues system, for example: ECOs, such as FedEx and UPS, that collect, transport, and deliver documents and packages sent to the United States from other countries. As they are not designated postal operators under the UPU, ECOs are not part of the terminal dues system and do not have access to terminal dues rates. Some U.S. businesses that compete with foreign companies for U.S. customers, including large e-commerce businesses, such as Amazon, and much smaller e-commerce related businesses, as well as U.S. businesses that obtain goods from other countries via international mail. U.S. businesses that send mail to other countries—such as business correspondence, advertisements, or e-commerce packages—and that have an interest in paying the lowest postage rate. U.S. consumers—mainly individuals who send or receive correspondence, gifts, or commercial goods through international mail. Government also plays a role in the terminal dues system. The Department of State (State) represents the United States to the UPU, and State officials participate in the negotiations at the UPU congress, held every 4 years, that determine terminal dues rates. State also solicits input on terminal dues and other international postal issues through a Federal Advisory Committee on international mail that consists of USPS and PRC officials, other federal agencies with jurisdiction over related issues (e.g., the U.S. Department of Commerce), and representatives from affected businesses. 
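As a rough illustration of the cap-and-floor mechanism mentioned above, the sketch below bounds a proposed year-over-year change in a terminal dues rate. The cap and floor percentages are hypothetical placeholders chosen for illustration, not the UPU's actual parameters.

```python
# Minimal sketch of bounding a proposed terminal dues rate so that it cannot move
# too far from the prior year's rate. The 13 percent cap and 0 percent floor here
# are hypothetical placeholders, not the UPU's actual parameters.
def apply_cap_and_floor(prior_rate: float, proposed_rate: float,
                        max_increase: float = 0.13, max_decrease: float = 0.00) -> float:
    ceiling = prior_rate * (1 + max_increase)  # largest allowed year-over-year increase
    floor = prior_rate * (1 - max_decrease)    # largest allowed year-over-year decrease
    return min(max(proposed_rate, floor), ceiling)

# Example: under these parameters, a proposed jump from 1.00 to 1.40 would be limited to 1.13.
bounded = apply_cap_and_floor(prior_rate=1.00, proposed_rate=1.40)
```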
PRC also has a role in international mail issues including terminal dues, USPS bilateral agreements, and USPS international mail products. First, State is required by statute to request PRC's views on terminal dues proposals before they are adopted by the UPU every 4 years to ensure that the U.S. positions on the relevant UPU proposals are consistent with PRC's standards and criteria for regulating USPS rates (or if not consistent, provide a foreign policy or national security interest justification). State then is required to ensure that the terminal dues proposals for market-dominant mail are consistent with PRC's views unless there is a foreign policy or national security concern. Second, PRC is required by statute to review USPS proposals for international mail products to ensure compliance with legal requirements, such as requirements relating to cost coverage. Third, PRC has also sponsored studies on terminal dues issues in recent years. USPS officials stated that designated postal operators send mail to USPS for delivery to U.S. addresses under the UPU's Universal Postal Convention. Designated postal operators pay for the collection and transportation of mail to the United States and hand off mail to USPS at a USPS International Service Center (ISC) for sorting and final delivery (see fig. 1). USPS presents the inbound mail to U.S. Customs and Border Protection (CBP) for inspection at the ISC. Once the mail clears CBP inspection, the mail enters USPS's domestic mail stream for delivery. The process is reversed for outbound international mail collected by USPS for delivery to foreign addresses. Current Terminal Dues Rates Benefit USPS despite Revenue Losses on Inbound Mail and Have Mixed Effects on Other U.S. Stakeholders The Current Rates Benefit USPS, as Increasing Gains from Outbound Mail Outweigh the Increasing Losses on Inbound Mail According to USPS analysis, USPS has generated positive net revenue from all terminal dues mail combined, and this net revenue increased from fiscal year 2012 to fiscal year 2016. This occurred even though losses from inbound mail more than doubled, from $66 million to $135 million, from fiscal year 2012 to fiscal year 2016 (see fig. 2). Net revenue for outbound terminal dues mail increased during the same time period. To understand this trend of declining net revenue for inbound terminal dues mail, it is important to understand the differences between inbound and outbound mail. As USPS is the designated postal operator for the United States responsible for ensuring universal service under UPU agreements, it is required to accept and deliver all mail tendered to it from other designated postal operators (inbound mail), including mail sent under the terminal dues rates adopted by the UPU. Losses on Inbound Terminal Dues Mail USPS's recent losses on inbound terminal dues mail are due in part to the shift in this mail from primarily letters and flats to more packets, which are more costly for USPS to handle and deliver; the resulting growth in costs outpaced the corresponding terminal dues revenue earned by USPS (see fig. 3). These losses are exacerbated by the rising volume of inbound terminal dues mail. According to USPS, there has been an 86 percent increase in all inbound terminal dues mail between fiscal year 2012 and 2016—including a 19 percent increase between fiscal year 2015 and 2016 alone—with much of the increase attributable to international e-commerce. E-commerce mail consists mainly of packets that are heavier and irregular (see fig. 4 for examples). 
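The relationship described above—combined net revenue staying positive because outbound gains exceed inbound losses—can be expressed with simple arithmetic, sketched below. Only the inbound loss figures ($66 million in fiscal year 2012 and $135 million in fiscal year 2016) come from USPS's analysis; the revenue, cost, and outbound figures used below are hypothetical placeholders.

```python
# Minimal sketch of the net revenue arithmetic described above: net revenue is revenue
# minus attributable cost, computed separately for inbound and outbound terminal dues mail.
# Only the inbound losses (-$66 million in FY2012 and -$135 million in FY2016) come from
# the report; all other figures are hypothetical placeholders.
def net_revenue(revenue: float, cost: float) -> float:
    return revenue - cost

inbound_fy2016 = net_revenue(revenue=400e6, cost=535e6)    # hypothetical inputs, yields -135e6
outbound_fy2016 = net_revenue(revenue=900e6, cost=700e6)   # hypothetical inputs, yields +200e6
combined_fy2016 = inbound_fy2016 + outbound_fy2016
assert inbound_fy2016 < 0 < combined_fy2016  # outbound gains more than offset inbound losses
```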
Most inbound letter mail to the United States in fiscal year 2016 was "packets," defined by the UPU as small packages that weigh no more than 2 kilograms (about 4.4 pounds), often generated by e-commerce. USPS officials stated that packets generate higher costs because USPS's delivery and processing costs for packets are higher than for letters. The current terminal dues system does not distinguish mail based on shape, and there is no separate rate for packets. A recent report commissioned by PRC found that the current terminal dues system, by reducing the price of international packet mail below what it would be without the system, is responsible for increasing the demand for (and hence volume of) packets sent through USPS. According to the report, this increase in terminal dues packets reduces the demand for other types of international mail and ECOs. As mentioned above, losses occur because USPS's costs to deliver inbound terminal dues mail are higher than the terminal dues revenues for that mail. Specifically, PRC has recognized that terminal dues have not covered USPS's costs to deliver inbound letter mail since fiscal year 1998 and has reiterated this recently. USPS recently stated that the failure to cover USPS's costs for inbound mail was caused by the terminal dues system. For example, currently, USPS is paid between $1.13 and $1.87 in terminal dues to deliver a 10-ounce packet from a developing country, an amount that does not include any additional surcharges for tracking and other features. Conversely, USPS officials told us that the published domestic rates for a 10-ounce piece of mail range from $1.61 to the highest commercial rate of $3.46. However, comparing these products is complicated, because they offer different features. According to USPS officials, packets sent under the terminal dues system do not include any tracking and have a delivery time of up to 3 weeks from some countries, while all USPS domestic mail products include tracking and have delivery times from as short as one day to an average of 2 to 3 days. In addition, 85 percent of USPS domestic mail receives discounted rates for mail that is entered in bulk and prepared in a way that reduces USPS's costs, including barcoding, presorting, and being entered into USPS's system closer to its final destination. Net Revenue Increases for Outbound Terminal Dues Mail USPS reports show that net revenue for outbound terminal dues mail increased from fiscal year 2012 to fiscal year 2016. The increased net revenue allowed USPS to more than offset its losses from inbound terminal dues mail. The increase in net revenue came despite a decrease in outbound terminal dues mail volume over the same period. The Current Terminal Dues System Also Benefits U.S. Businesses That Rely on Outbound International Mail and U.S. Consumers According to stakeholders we interviewed as well as our economic analysis, businesses that send outbound international mail and U.S. consumers also benefit from the current terminal dues system, as USPS does (see table 1). These stakeholders benefit for the following reasons: U.S. businesses that send outbound terminal dues mail, for example, e-commerce shippers or magazine publishers, may pay lower postal rates than what would be set by the destination country's designated postal operator to deliver that mail. This is the case for mail sent to many developed countries, such as in Europe, where most U.S. outbound international mail is sent. 
In this case, the postage charged by USPS to the business reflects the relatively low terminal dues rate paid by USPS to those designated postal operators, rather than a higher rate that better reflects the designated postal operator's delivery costs. U.S. consumers also benefit from the current terminal dues system. As described above, U.S. consumers have spurred a significant increase in inbound international mail, especially from Asia, where terminal dues rates are lower than USPS's costs, which contributes to low shipping prices for U.S. consumers. For example, USPS's Office of Inspector General (OIG) conducted a case study in 2015 that found that five low-cost items shipped from China cost about $1.60 per item in shipping charges, while equivalent published domestic postage for the same items cost between $2.04 and $2.22 per item, depending on their exact weight. Studies by the USPS OIG and WIK-Consult GmbH (WIK Consultants) have found similar benefits for consumers from the current terminal dues system. U.S. consumers may also see the same kind of benefit from lower mailing prices as do U.S. businesses that send terminal dues mail to certain destination countries, such as those in Europe, that have higher delivery costs than the terminal dues paid to them by USPS. Current Terminal Dues System Results in Competitive Disadvantages for Some Stakeholders Despite creating some benefits for some U.S. stakeholders, the current terminal dues system also creates competitive disadvantages for other U.S. stakeholders (see table 2). The terminal dues system puts ECOs at a disadvantage because, according to representatives from ECOs we spoke with, their volume for international items that are similar to those currently shipped at terminal dues rates is low, and they cannot compete on price with designated postal operators. Instead, they compete using other features such as tracking and delivery speed. Businesses overseas, like e-commerce companies such as Alibaba, can use the terminal dues system as a low-cost alternative to ECO service for items, albeit with much slower delivery standards than those offered by ECOs. This disadvantage is especially pronounced when ECOs compete with designated postal operators for business from countries with relatively low terminal dues rates, such as many countries in Asia. U.S. businesses that compete with foreign companies that use inbound terminal dues mail are also disadvantaged by the current terminal dues system. Foreign businesses that send products from countries with low terminal dues to U.S. consumers through USPS may have a competitive advantage over domestic businesses, which may have to pay higher domestic postage. Representatives from two small U.S. businesses we spoke with stated that the disparity between the postage charges available to foreign mailers under the terminal dues system and the domestic postage available to them was a significant factor in reduced sales in recent years, although the disparity between postage charges is not the only disadvantage they faced from foreign competition. We found this outcome may be less of a competitive disadvantage for larger businesses, such as Amazon, which may be able to obtain discounts on domestic mailing prices from USPS based on volume, presorting, and other worksharing arrangements, while smaller domestic mailers may not be able to secure such discounts. 
Even with discounting, the domestic mailing price may still be higher than the foreign mailing price charged by a designated postal operator, a price that is based, in part, on a lower terminal dues rate. However, USPS officials cautioned that such comparisons are complicated because: 1. Terminal dues rates do not include other costs, such as the cost of collection, international transportation, and other costs that may be included in the price charged to the foreign mailer by the designated postal operator. 2. Significant amounts of international mail are sent in large quantities from foreign designated postal operators to USPS, making this mail not analogous to USPS's single-piece mail rates. 3. U.S. commercial customers may pay non-published rates established in negotiated service agreements that may be lower than USPS's single-piece published rates. While we described above how different stakeholders are affected by the terminal dues system, it is not possible to quantify the system's impacts. For example, according to USPS officials, while the current system has a single rate for three shapes of terminal dues mail, USPS has over 3,000 rates for domestic mail, making rate comparisons of terminal dues mail products to domestic mail products imprecise. In addition, the terminal dues rate is a payment between designated postal operators, not the price paid by the foreign mailer. This price information may not be publicly available, as designated postal operators in other countries might also have non-published prices. Planned Changes to Terminal Dues Rates Should Reduce USPS's Losses and Could Affect Other Stakeholders Changes Should Reduce USPS's Losses from Terminal Dues System Based on our analysis of the changes to the terminal dues system recently adopted by the UPU and of USPS's estimates of the financial impact of those changes, increased terminal dues rates should help reduce USPS's losses for inbound mail. All terminal dues rates for inbound mail will increase starting on January 1, 2018; for certain countries, rates for packets will increase by 13 percent per year. By 2021, all but the least developed countries will have the same terminal dues rates for packets. As the majority of terminal dues mail handled by USPS is inbound, the increase in revenue resulting from higher terminal dues will likely more than offset the increase in USPS's costs that will result from increases in terminal dues rates to the countries where most USPS outbound terminal dues mail is sent. The UPU also created a new rate category for packets, in addition to a new tracked-packet surcharge, which will increase USPS's revenue. As described previously, the current terminal dues rates do not distinguish between letters, flats, or packets—even though packets are more expensive to handle and deliver due to their irregular size and heavier weight. This change should lead to higher terminal dues revenue for USPS. In addition, the UPU adopted the Integrated Product Plan (IPP) at the 2016 UPU congress, which, among other things, will require commercial goods to be sent under the terminal dues system as packets. According to USPS officials, this change could also increase terminal dues revenues, as all commercial items will be sent as packets, which will have higher terminal dues rates starting in 2018. However, decisions on other aspects of the IPP are expected to be made at a special UPU congress in 2018. 
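To illustrate the arithmetic behind the planned increases described above, the sketch below compounds a per-piece terminal dues rate by 13 percent per year over the 2018 through 2021 rate period. The $1.50 starting rate is a hypothetical placeholder, not an actual UPU rate; the point is only that four consecutive 13 percent increases raise a rate by roughly 63 percent over its 2017 level.

```python
# Minimal sketch of compounding a 13 percent annual increase over 2018-2021.
# The $1.50 starting rate is a hypothetical placeholder, not an actual UPU rate.
def projected_rates(start_rate: float, annual_increase: float, years: range) -> dict:
    rates = {}
    rate = start_rate
    for year in years:
        rate *= 1 + annual_increase
        rates[year] = round(rate, 2)
    return rates

rates_2018_2021 = projected_rates(start_rate=1.50, annual_increase=0.13, years=range(2018, 2022))
# -> {2018: 1.69, 2019: 1.92, 2020: 2.16, 2021: 2.45}, about 63 percent above the 2017 rate
```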
According to USPS projections, USPS will start earning positive net revenues for inbound terminal dues mail as a result of these changes. As terminal dues rates increase, USPS projections show that USPS will cover costs for inbound terminal dues mail from the 15 countries that sent most of the inbound terminal dues mail to the United States in fiscal year 2015. The State Department official who coordinated the U.S. delegation to the 2016 UPU congress stated that these changes will achieve the government's goal of dramatically improving USPS's cost coverage for the delivery of inbound terminal dues mail, such as packets, from China and other developing countries, when the changes take effect in 2018. However, other stakeholders we spoke with disagree on the extent of improvement. While PRC staff officials stated that the UPU has made some progress in closing the gap between terminal dues rates and domestic rates for equivalent domestic mail, they also stated that there is still a way to go to make the rates equivalent to each other. Similarly, PRC's Chairman stated that while the changes will improve USPS's cost coverage, they will not eliminate the negative impacts of the current terminal dues system and may in fact exacerbate them over the 2018–2021 period. In addition, he stated that while USPS projected that terminal dues proposals in 2008 and 2012 would increase USPS's cost coverage for inbound terminal dues mail, the improvement was negligible, casting doubt on the accuracy of USPS's projections for the planned changes for the 2018–2021 period. A consultant who has experience in international mail issues stated that, given some assumptions about changes in international mail volume, the terminal dues increases will still not be equal to the delivery costs for domestic postage for inbound terminal dues mail to countries such as the United States. He estimated that the difference between the new terminal dues rates and equivalent domestic postage for packets will be reduced from about 57 to 73 percent (depending on the sending country) in 2016 to about 50 percent by 2021. Changes Could Affect U.S. Stakeholders Differently, but Effects Are Difficult to Quantify According to stakeholders we interviewed as well as our analysis of UPU's terminal dues rates, the projected increase in terminal dues rates caused by the planned changes to the terminal dues system may negatively affect U.S. businesses that send outbound terminal dues mail and U.S. consumers (see table 3). U.S. businesses and U.S. consumers that send mail to other countries will pay higher postage rates for terminal dues mail, to the extent that USPS increases its prices to reflect the higher terminal dues USPS must pay to designated postal operators. However, increased terminal dues rates may still be less than the cost to deliver that mail for designated postal operators in relatively high-cost countries, such as Norway and Germany. U.S. businesses and consumers benefit from this disparity as the postage they pay USPS to send mail to those countries is based on the terminal dues rates to those countries, not the delivery costs, which may be higher. U.S. consumers may also see shipping prices increase for inbound terminal dues mail, for example e-commerce packets, to the extent that any increases in postage charged to foreign mailers resulting from increased terminal dues rates are passed along to U.S. consumers. ECOs and U.S. 
businesses affected by inbound terminal dues mail should become more cost-competitive due to the planned increased terminal dues rates, although the planned changes may not eliminate all of the existing competitive disadvantages (see table 4). A State Department official stated that these new rates may still not fully cover the cost of delivery in some countries with very high postal delivery costs, potentially impacting ECOs' competitiveness in those countries. A representative from a small business that competes with overseas e-commerce businesses for U.S. consumers stated that any increase in terminal dues would make his business more price-competitive with foreign competitors. However, a representative from one small business affected by inbound terminal dues mail we spoke with stated that his business had already suffered due to the terminal dues system. He added that other factors also make it harder to compete and that it would therefore be harder to recover even with higher terminal dues rates. While these changes may have different effects on U.S. stakeholders, it is difficult to quantify the future effects because of limited information and forecasting variability. As a result, it remains to be seen what the effects of these changes to this system will be on domestic stakeholders. USPS, USPS OIG, PRC, and others have developed or adapted models and analyses that try to show the economic impacts of the terminal dues system on different stakeholders and estimate the impact of any changes to that system. We analyzed six recent models and analyses—including USPS's, USPS OIG's, and PRC's models—that describe different effects of the terminal dues system on USPS, all designated postal operators, or other stakeholders. Some of these studies also try to measure how terminal dues rate increases may affect these stakeholders. Our review determined that these models and analyses can help inform stakeholders about the different overall effects of terminal dues. However, we also found that the analyses are limited in how they can predict or describe the effect of the terminal dues system, in part due to a lack of complete information on the following issues: the volume of mail, including its type and weight, that flows between each UPU member country; equivalent domestic postage rates that would be charged to domestic mailers for service equivalent to inbound terminal dues mail; the presence of alternative international mail agreements, such as bilateral and multilateral agreements; the number of U.S. businesses and consumers that receive or send international mail covered by terminal dues rates; the number and market characteristics of U.S. businesses that currently compete with imported products that are sent under terminal dues; how terminal dues rates and changes in those rates would affect supply chains between businesses in the United States and other countries; the share of U.S. businesses' postage costs for outbound mail that is covered by the terminal dues system and the proportion these costs represent of total business costs; and how important postage and shipping costs are to the total costs of doing business, domestically and globally. 
Our analysis also found that some of the models and analyses we reviewed did not make adjustments to factor in some or all of these potential mail-related changes, such as changes in: the volume of international mail reimbursed by terminal dues, which may change in response to increases in terminal dues rates or to changes in other international mail products offered by USPS or express consignment operators; monetary exchange rates, as all terminal dues are denominated in Special Drawing Rights, a combination of five major currencies, which all vary over time, introducing uncertainty to any projection of terminal dues rates; and other international trade issues, such as customs duties, tariffs, labor costs, other shipping costs, and regulatory costs. In the absence of models or analyses that take these factors into account, it is difficult to quantify the impact of terminal dues rate increases on other domestic stakeholders. Alternatives to the Terminal Dues System Mainly Benefit USPS, While the Effects on Other Stakeholders Are Unclear USPS Offers Several Alternatives to the Terminal Dues System for International Mail Not all international mail is sent through the terminal dues system. USPS officials and mail stakeholders told us, and USPS data indicate, that mailers send a significant portion of U.S. inbound and outbound international mail using the following alternatives: bilateral and multilateral agreements, parcels, express mail service, and direct entry. USPS data show that while terminal dues mail as a percentage of inbound international mail volume increased from about 50 percent in fiscal year 2012 to about 60 percent in fiscal year 2016, a significant portion of international mail comes into the United States via these alternatives. Each of these alternatives is described in greater detail below. USPS negotiates bilateral and multilateral agreements for a number of countries, wherein USPS and other designated postal operators both pay higher rates for different mail products (inbound and outbound) with desirable features not available for terminal dues mail, such as tracking and faster delivery. For example, USPS is party to the multilateral PRIME agreement, which gives priority delivery and performance bonuses (paid in addition to terminal dues) to mail from 31 countries. According to USPS officials, bilateral and multilateral agreements have improved USPS's financial position. In reviewing these agreements, PRC has found that they improve USPS's financial position relative to what it would have been in the absence of these agreements. USPS offers products that provide UPU parcel service for both inbound and outbound mail, which includes tracking, liability insurance, and signature confirmation at delivery. Parcels are sent under UPU parcel rates, which are higher than what USPS would receive under the terminal dues system for comparable letter mail. USPS also offers "e-Commerce Parcel," a new parcel service established by the UPU in 2016. The e-Commerce Parcel product is aimed at the e-commerce marketplace and provides tracking, but no liability coverage or signature confirmation. Under e-Commerce Parcel, designated postal operators determine the inbound delivery rates, which are expected to be lower than rates for parcel service due to the service limitations. USPS officials stated that they are implementing the e-Commerce Parcel service and reviewing its pricing strategy. USPS also provides Express Mail Service (also referred to as EMS) for both inbound and outbound international mail. 
Express Mail Service products provide express delivery of documents and merchandise that takes priority over other postal services and includes signature confirmation and liability insurance for damaged or lost mail. Express Mail Service rates for inbound products can be set by bilateral agreements or determined by the receiving designated postal operator. During fiscal year 2016, inbound Express Mail Service products included shipments from 149 countries, including China, Japan, Korea, Canada, and France. Express Mail Service products for outbound U.S. mail include Priority Mail Express International service, a high-speed USPS mail service available to certain countries and available at designated USPS facilities, and Global Express Guaranteed service, a USPS international expedited delivery service provided through an alliance with FedEx. Mailers use direct entry as a way of accessing USPS domestic mail services from foreign countries without sending the item through a foreign designated postal operator. Under USPS's Global Direct Entry Wholesaler Program, third-party companies, such as foreign e-commerce businesses selling products in the United States, send the items to the United States as cargo that bypasses designated postal operators, circumventing the terminal dues system entirely. Once the items clear customs, they are entered into USPS's system at domestic postage rates. Some of these items are entered into USPS's system outside of USPS's Global Direct Entry Wholesaler Program. For example, we spoke with a representative of a U.S. company that provides direct entry services for foreign mailers, preparing items for easier entry through customs (for example, by labeling items with barcodes that describe the product) and for USPS (by applying U.S. domestic postage). The company representative stated that once the items clear customs, the company transports the items to one of 140 USPS distribution points (i.e., USPS's domestic mail processing facilities) to facilitate timely and efficient delivery. As we discuss in more detail later, these alternatives may offer various benefits and disadvantages to customers, who may choose them based on a combination of price, available features, and speed of delivery. Alternatives to the Terminal Dues System Mainly Benefit USPS and U.S. Consumers, While the Effects on Other Stakeholders Are Unclear and Difficult to Quantify USPS officials stated that alternatives to the terminal dues system earn increased net revenue for USPS. For example, rates negotiated as part of bilateral and multilateral agreements, UPU parcel rates, and Express Mail Service product rates set by USPS are higher than terminal dues rates; this difference means these rates provide better cost coverage and generate higher net revenues than terminal dues rates. Direct entry mail generates greater net revenue for USPS because shippers enter the mail directly into USPS's domestic mail stream at domestic postal rates. USPS also realizes operational efficiencies from these alternatives due to mail entry and preparation requirements, leading to lower USPS costs. Direct entry mail is subject to the same preparation requirements as domestic mail entered in bulk quantities, such as being presorted and entered close to its final destination. Although alternatives to the terminal dues system account for a significant portion of international mail handled by USPS, terminal dues mail continues to grow at a higher rate. 
From fiscal year 2012 to fiscal year 2016, the volumes of (1) USPS terminal dues mail, (2) mail covered by bilateral and multilateral agreements, and (3) parcels increased, while Express Mail Service volume decreased. U.S. consumers may also benefit from all four alternatives due to faster delivery than under the terminal dues system and from special features such as tracking, especially for higher-value items. However, U.S. consumers may pay more to use these alternatives given the special features they offer. The effects of the alternatives on ECOs are unclear. Representatives from ECOs told us that USPS bilateral and multilateral agreement mail products and Express Mail Service products enjoy certain advantages, such as easier customs clearance, that make USPS's products more attractive to customers. However, the representatives added that the effects of USPS bilateral agreements on their business are unclear, because USPS considers the rates to be proprietary information that is not publicly available. The representatives also noted that they did not believe their products are able to compete directly with USPS bilateral and multilateral mail products because the rates for these products are based on comparatively low terminal dues rates. A 2015 USPS OIG study found that ECOs' rates are generally much higher than rates for USPS bilateral agreement and direct entry products; therefore, ECOs' products are not price-competitive with USPS's products. Express Mail Service and parcel products are priced higher than terminal dues rates and offer special features similar to those offered by ECOs, such as priority shipping and tracking, making those products more attractive for higher-value and time-sensitive items, and thus they may compete with ECOs' products. The effects of the alternatives on other international mail stakeholders, such as U.S. businesses that are affected by inbound or outbound international mail, are also unclear. For example, USPS bilateral agreements may increase mailer options by providing faster service and more product features, all at a lower price that discourages competition from ECOs. However, the full effects of bilateral agreements are unclear, in part because these rates are not public. The extent to which U.S. businesses that send outbound mail have access to and use direct entry options into other countries is also unclear. In addition, other non-postal factors, such as monetary exchange rates and product prices, affect the competitiveness of U.S. businesses that are affected by inbound or outbound mail and could be more important than mailing prices to their international competitiveness. It is also difficult to quantify the effects of alternatives to the terminal dues system because of limited information. The information needed to determine the effects on domestic stakeholders of Express Mail Service products and bilateral agreements is not publicly available, as USPS regards this information as business proprietary. Effects on stakeholders from direct entry mail are also unclear, in part because this type of inbound mail may be difficult to distinguish from other domestic USPS mail, and information on direct entry mail is not collected by USPS. Use of alternatives also depends on terminal dues rates and other factors, such as overall mail volume trends, and the models and analyses we reviewed do not take these alternatives into account when modeling international mail trends. 
For example, none of the models and analyses we reviewed took into account bilateral agreements due to the lack of publicly available information. Agency Comments We provided a draft of this product to USPS, PRC, and the Department of State for their review and comment. In USPS's comments, reproduced in appendix IV, USPS generally agreed with our findings, described the impact of upcoming terminal dues changes, and emphasized that USPS has been taking steps to improve its cost coverage and collect more revenue for international mail. In PRC's comments, reproduced in appendix V, PRC generally agreed with our findings. State did not provide any formal comments. USPS also provided technical comments, which we incorporated as appropriate. PRC and State did not provide any technical comments. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or rectanusl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology This report describes the financial effects on domestic stakeholders of the following: (1) the rates under the current UPU terminal dues system, (2) planned changes to those rates, and (3) alternatives to the terminal dues system. For all of our objectives, we interviewed officials at the U.S. Postal Service (USPS), the Department of State (State), U.S. Customs and Border Protection (CBP), and the Postal Regulatory Commission (PRC); industry consultants; and selected mail stakeholders affected by terminal dues. We selected stakeholders affected by terminal dues based on our reviews of comments submitted to PRC on proposals related to terminal dues for the 2016 UPU congress, as well as interviews with USPS, PRC, and State officials and industry consultants. To obtain background information and provide context for this report, we reviewed relevant federal statutes and U.S. policies; documentation from the UPU, State, USPS, and the USPS Office of Inspector General (USPS OIG); and knowledgeable consultants. Relevant legal sources that we reviewed include Title 39 of the U.S. Code and the 2006 Postal Accountability and Enhancement Act. In addition, we reviewed the UPU constitution, UPU general regulations, and UPU agreements reached at the 2016 UPU congress, which define the terminal dues system. We reviewed documents of State's Advisory Committee on International Postal and Delivery Services under the Federal Advisory Committee Act, including minutes and proposals, and USPS and USPS OIG reports on terminal dues and international mail flows. We also reviewed related studies by third parties, including PRC-sponsored reports by Copenhagen Economics on the economic impacts of terminal dues and other terminal dues related reports. In addition, we reviewed GAO reports on related postal issues, as well as relevant academic literature, industry journals, books, and other publications, including news articles. We analyzed available USPS information for fiscal years 2015 and 2016 on international mail to and from the United States and to and from UPU-designated target and transitional countries to determine the most significant mail flows. USPS information for inbound and outbound international mail by country was only available beginning in fiscal year 2015. 
To determine the current rates under the UPU terminal dues system and the effects of these rates on selected stakeholders, we reviewed and analyzed UPU documentation on terminal dues rates, and we identified and interviewed affected domestic mail stakeholders to obtain their views on the potential impacts of the current terminal dues system. We judgmentally selected stakeholders through interviews with USPS, State, PRC, and industry group officials and consultants and through our review of mail stakeholder comments submitted to PRC pursuant to PRC proceedings on terminal dues and international mail related proposals and agreements. In addition, to select business and consultant stakeholders, we also reviewed published reports documenting their knowledge of international mail and terminal dues issues. We interviewed USPS and CBP officials and observed international mail processing by USPS and CBP at two USPS International Service Centers (ISC). We applied standard economic principles to describe effects of the current terminal dues system on domestic stakeholders. To describe the effects of the current terminal dues system on USPS specifically, we reviewed USPS position papers and analyses; reviewed and analyzed terminal dues models and analyses showing the effects of terminal dues; and reviewed UPU documents describing the terminal dues system, relevant USPS OIG reports, and PRC proceedings and Annual Compliance Determination Reports. We analyzed USPS information and reports on inbound and outbound international mail, including volume, costs, and revenue from fiscal years 2012 to 2016, and UPU information describing the terminal dues system and rates from 2014 to 2017. We assessed the reliability of USPS's information on the volume, costs, and revenue for international mail by reviewing documentation related to how the data are collected and processed. We found this information to be sufficiently reliable for the limited purpose of presenting this descriptive information. We also visited the Chicago O'Hare International Airport and the New York John F. Kennedy International Airport International Service Centers to observe how USPS processes inbound and outbound international mail and how USPS interacts with CBP to clear international mail for delivery to U.S. addressees. We selected the Chicago O'Hare International Airport and the New York John F. Kennedy International Airport International Service Centers because they process most inbound international mail volume, as well as based on their location and on interviews with USPS and CBP officials. To determine planned changes to UPU terminal dues rates, we reviewed UPU documents that described the changes to the terminal dues system resulting from the 2016 UPU congress, including the 2018 through 2021 terminal dues rate structure. We also reviewed USPS and State proposals to the 2016 UPU congress and stakeholder comments submitted to PRC on proposals to the 2016 UPU congress. We applied standard economic principles to describe effects of the planned changes on domestic stakeholders. We reviewed six economic models and analyses estimating different effects of the current and future terminal dues system on global postal flows and on various stakeholders. We selected the six models and analyses based on how current they were and whether they produced empirical findings related to the effects of terminal dues or changes in terminal dues. 
These models and analyses were taken from the published academic literature, economic papers, government reports, and government analyses. The studies used a range of methodologies from simulation modeling to experimental methods, in part due to the paucity of data on a number of variables such as trade flows or how terminal dues affect certain stakeholders, such as consumers and businesses. Our overall review of the studies was based on economic criteria and GAO guidance, which included the purpose of the model, the assumptions used, the data or lack of data, model validation methods, transparency of the model and data, sensitivity analysis, and peer review. Where appropriate, we also compared the results of the models or analyses to other similar modeling results. The analyses we assessed address different questions relating to various mail stakeholders. We determined that these analyses appropriately include, though with certain limitations, the key elements of an economic analysis. Our overall assessment is that while these models and analyses include limitations and caveats, they still inform decision-makers and stakeholders about the different economic effects of terminal dues. To determine the alternatives to the terminal dues system used by U.S. mail stakeholders and the implications of those alternatives for stakeholders, we interviewed USPS, PRC, and State officials, representatives from mailing industry companies, express consignment operators, and other selected stakeholders affected by the terminal dues system, international mail consultants, and freight shipping and forwarding firms. We analyzed USPS information and reports on inbound and outbound international mail from fiscal year 2012 through fiscal year 2016, including volume, cost, and revenue data. We assessed the reliability of USPS's information on the volume, costs, and revenue for international mail by reviewing documentation related to how the data are collected and processed. We found this information to be sufficiently reliable for the limited purpose of presenting this descriptive information. We selected four alternatives to the terminal dues system offered by USPS: USPS bilateral and multilateral agreements with other designated postal operators, Express Mail Service products, parcels, and direct entry mail. To describe these alternatives and determine their implications for mail stakeholders, we reviewed and analyzed USPS bilateral agreements with other designated postal operators, stakeholder comments on proposed bilateral agreements, PRC decisions on the proposed agreements, and USPS and USPS OIG documents and reports describing our selected alternatives. We reviewed USPS's bilateral agreements with China Post, Canada Post, Hong Kong Post, Korea Post, and Royal PostNL in the Netherlands, which were in force during the course of our work. We applied standard economic principles to describe effects of these alternatives on domestic stakeholders. The performance audit on which this report is based was conducted from May 2016 to August 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
We subsequently worked with USPS from June to October 2017 to prepare this public version of the original sensitive report. This public version was also prepared in accordance with these standards. This is a public version of GAO-17-571SU that we issued in August 2017. This report excludes information that was deemed to be proprietary by USPS and that must be protected from public disclosure. Therefore, this report omits proprietary information and certain data related to USPS's revenues, costs, and volumes for international mail. Although the information provided in this report is more limited in scope, it addresses the same objectives as the sensitive report, and the methodology used for both reports is the same. Appendix II: List of Countries by Universal Postal Union's (UPU) Country Groups from 2014–2017 and from 2018–2021 The UPU divides its member countries into country groups based on the UPU's "postal development indicator," which is largely based on gross national income per capita and attempts to factor in the cost to deliver a letter based on statistics from the United Nations, the World Bank, and the UPU. The UPU uses the country groups to, among other things, apply terminal dues rates to international letter mail sent between member countries. The UPU consolidated its five country groups for the 2014–2017 period into four country groups for the 2018–2021 period: Group I includes the most developed countries, Groups II and III include developing countries, and Group IV includes the least developed countries. Appendix III: Average Annual Rate-Cap Increases of the Universal Postal Union's (UPU) Terminal Dues Appendix IV: Comments from the United States Postal Service Appendix V: Comments from the Postal Regulatory Commission Appendix VI: GAO Contacts and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Derrick Collins (Assistant Director); Greg Hanna (Analyst-in-Charge); Barbara El Osta; Camilo Flores; Kenneth John; Mike Mgebroff; Malika Rice; and Amy Rosewarne made key contributions to this report.
Why GAO Did This Study In 2016, USPS handled over 1 billion pieces of international mail—both inbound (received from other countries) and outbound (sent to other countries). International mail is governed by the UPU, which comprises over 190 member countries, including the United States. The UPU establishes remuneration rates, called terminal dues, for certain types of international mail exchanged between member countries. Questions have been raised about how current and future planned rates for terminal dues affect USPS and other stakeholders that are involved in international mail. GAO was asked to review the terminal dues system. Among other issues, this report examines the financial effects of: (1) the current UPU rates for terminal dues and (2) the planned changes to those rates on USPS and mail stakeholders. GAO analyzed USPS's mail data for fiscal years 2012–2016; USPS's, the Postal Regulatory Commission's (PRC), and UPU's policies and documents; and applicable U.S. statutes. GAO interviewed USPS, PRC, and Department of State officials, and mail stakeholders, including U.S. companies, such as FedEx and UPS, and consultants. These stakeholders were identified through public comments made to PRC on proposals related to terminal dues. While not generalizable, the views provide illustrative examples. This is a public version of a sensitive but unclassified report issued in August 2017. Information related to USPS's revenues, costs, and volumes for international mail that USPS has deemed proprietary has been omitted from this report. What GAO Found The Universal Postal Union (UPU), a specialized agency of the United Nations, created the terminal dues system so that designated postal operators in member countries could compensate designated postal operators in other countries for delivering mail in those countries. GAO found that it is not possible to quantify the financial effects of the terminal dues system on various U.S. mail stakeholders because the data needed to conduct such an analysis are not readily available. However, stakeholders GAO spoke with and literature GAO reviewed described differing impacts of the terminal dues system on U.S. stakeholders. For example, analysis by the United States Postal Service (USPS)—the U.S. designated postal operator—found that the rates for inbound international terminal dues mail do not cover its costs for delivering that mail in the United States. As a result, USPS's net losses on this type of mail more than doubled between 2012 and 2016. In contrast, USPS analysis indicates that the rates for outbound international terminal dues mail have resulted in net positive revenues for USPS, which offset the losses from inbound terminal dues mail. U.S. businesses that send outbound terminal dues mail may benefit to the extent that their costs to mail items to certain countries through USPS may be lower than the actual mail delivery costs in those countries. U.S. consumers who receive imported products may pay lower mailing costs for products originating from low terminal dues rate countries. Express consignment operators such as FedEx and the United Parcel Service said the terminal dues system creates a competitive disadvantage for them. Representatives from these companies said that they have difficulty competing for some international mail business because they cannot offer pricing as low as the postage based on the terminal dues rates offered by designated postal operators.
The UPU adopted increased terminal dues rates for member countries starting in 2018. GAO found that these planned changes could affect U.S. stakeholders differently, but the effects are also difficult to quantify because of limited information and forecasting variability. Nevertheless, stakeholders identified examples of the potential effects that the planned changes could have on them. For example: For USPS, an increase in inbound terminal dues rates should reduce related losses for delivering this mail, although USPS's costs may increase from paying higher terminal dues rates to countries where USPS sends most of its outbound terminal dues mail. U.S. businesses that send outbound terminal dues mail may have to pay higher postage to USPS to cover the increase in terminal dues rates to send mail to other countries, thus increasing costs to them. U.S. consumers who receive lower-priced imported products may experience a reduced benefit because of the higher terminal dues rates for inbound mail. The increased rates for inbound terminal dues mail may allow the rates offered by express consignment operators to become more competitive, as these operators may be able to offer their mail products at more comparable costs.
State and Local Governments Will Need to Make Policy Changes to Maintain Long-Term Fiscal Balance Our simulations suggest that the sector will likely continue to face a difference between revenue and spending during the next 50 years. This long-term outlook is measured by the operating balance—a measure of the sector's ability to cover its current expenditures out of current receipts. While both expenditures and revenues are projected to increase as a percentage of gross domestic product (GDP) during the simulation period, a difference between the two is projected to persist because expenditures are generally expected to grow at a faster rate than revenues (see figure 1). Absent any policy changes by state and local governments, revenues are likely to be insufficient to maintain the sector's capacity to provide services at levels consistent with current policies during the next 50 years. Our simulations suggest that state and local governments will need to make policy changes to avoid fiscal imbalances before then and ensure that revenues are at least equal to expenditures. We simulated the state and local government sector's operating balance (the difference between the sector's operating revenues and operating expenditures) in order to understand the sector's long-term fiscal outlook. The sector's operating expenditures were 15.1 percent of GDP in 2017. As shown in figure 2, these state and local government sector operating expenditures are comprised of employee compensation, social benefit payments, interest payments, capital outlays, and other expenditures. The sector's operating revenues were 13.8 percent of GDP in 2017. As shown in figure 3, these state and local government sector operating revenues are comprised of taxes, transfer receipts, and other types of revenues. One way of measuring the long-term fiscal challenges faced by the state and local government sector is through an indicator known as the "fiscal gap." The fiscal gap is an estimate of actions—such as revenue increases or expenditure reductions—that must be taken today and maintained for each year going forward to achieve fiscal balance during the simulation period. While we measured the gap as the amount of reductions in expenditures needed to prevent negative operating balances, increases in revenues, reductions in expenditures, or a combination of the two of sufficient magnitude would allow the sector to close the fiscal gap. Our simulations suggest that the fiscal gap is about 14.7 percent of total expenditures or about 2.4 percent of GDP. That is, assuming no change in projected total revenues, eliminating the difference between the sector's expenditures and revenues during the 50-year simulation period would likely require action to be taken today and maintained for each year, equivalent to a 14.7 percent reduction in the sector's total expenditures (see figure 4). Alternatively, assuming no change in projected total expenditures, closing the fiscal gap by increasing revenue would also likely require actions of similar magnitude. More likely, eliminating the difference between expenditures and revenues would involve some combination of spending reductions and revenue increases. Health Care Cost Growth and Other Factors Contribute to the State and Local Sector's Fiscal Imbalance Medicaid and Employee Health Benefits Are Key Drivers of Long-Term Spending Our simulations suggest that growth in the sector's overall spending is largely driven by health care expenditures.
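The fiscal gap arithmetic described above can be illustrated with a simplified sketch before turning to the spending drivers in detail. The revenue share, expenditure path, GDP growth rate, and discount rate below are hypothetical placeholders chosen only to show the mechanics of a present-value gap calculation; they are not values or results from GAO's state and local government fiscal model.

```python
# Simplified, hypothetical illustration of a fiscal gap calculation.
# Every number below is a placeholder for illustration; this is not GAO's
# state and local government fiscal model or its data.
import numpy as np

years = np.arange(2018, 2068)                      # 50-year horizon
gdp = 20_000 * 1.04 ** np.arange(len(years))       # assumed nominal GDP path (billions)
revenues = 0.138 * gdp                             # receipts held at an assumed 13.8% of GDP
# expenditures start near 15.1% of GDP and drift up (assumed growth differential)
expenditures = (0.151 + 0.0004 * np.arange(len(years))) * gdp

discount_rate = 0.03                               # assumed flat discount rate
discount = 1.0 / (1.0 + discount_rate) ** np.arange(len(years))

def present_value(flow):
    return float(np.sum(flow * discount))

# One simple gap measure: the present value of annual shortfalls, expressed
# relative to the present value of expenditures and of GDP.
shortfall = np.maximum(expenditures - revenues, 0.0)
gap_vs_expenditures = 100 * present_value(shortfall) / present_value(expenditures)
gap_vs_gdp = 100 * present_value(shortfall) / present_value(gdp)
print(f"Illustrative fiscal gap: {gap_vs_expenditures:.1f}% of expenditures, "
      f"{gap_vs_gdp:.1f}% of GDP")
```

In the actual simulations, the expenditure and revenue paths come from the detailed projections described in appendix I, and the discount rate is based on the average of the 3-month and 10-year Treasury rates for each year rather than a flat assumed rate.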
As shown in figure 5, these expenditures are projected to increase from about 4.1 percent of GDP in 2018 to 6.3 percent of GDP in 2067. Two types of health care expenditures—Medicaid spending and spending on health benefits for state and local government employees and retirees—will likely constitute a growing expenditure for state and local governments during the simulation period. Medicaid expenditures are expected to rise, on average, by 1 percentage point more than GDP each year. According to CBO, growth in Medicaid spending reflects growth in both the number of people receiving Medicaid benefits and the cost of Medicaid benefits each person receives. Specifically, CBO reported that between 2019 and 2028, Medicaid spending is projected to grow at an average rate of 5.5 percent per year—nearly 5 percentage points of this growth is due to an increase in per capita costs and about 1 percentage point of this growth is due to an increase in enrollment. Data from CBO and the Centers for Medicare & Medicaid Services (CMS) also suggest that growth in Medicaid spending per capita is generally expected to outpace GDP growth in the future—referred to as excess cost growth. Our estimates of Medicaid excess cost growth using CMS data suggest that Medicaid spending per capita will grow, on average, about 0.5 percent faster than GDP per capita for the period from 2018 through 2067. Our simulations also suggest that health benefits for state and local government employees and retirees—a type of employee compensation spending—are likely to rise, on average, by 0.9 percentage points more than GDP each year. Growth in these health benefits also reflects growth in the projected number of employees and retirees and growth in the projected amount of health benefits for each employee and retiree. Growth in spending by states and local governments on health care per capita, which includes spending on employee and retiree health benefits, is generally expected to outpace GDP per capita. Data from CMS suggest that national health expenditures per capita are likely to grow on average about 0.8 percent faster than GDP per capita each year during the simulation period from 2018 through 2067. If employee and retiree health benefits follow trends in overall national health spending, they will likely make up an increasingly large share of total employee compensation going forward (see figure 6). While state and local government contributions to employee pension plans—another type of employee compensation spending—will likely decline as a percentage of GDP, as shown in figure 6, our simulations nonetheless suggest that state and local governments may need to take steps to manage their pension obligations in the future. From 1998 through 2007, state and local governments’ pension contributions amounted to about 8 percent of wages and salaries on average. In addition, for the period from 2008 through 2017, pension contributions amounted to about 12.3 percent of wages and salaries on average. Our simulations suggest that those pension contributions will need to be about 12.9 percent of wages and salaries for state and local governments to meet their long-term pension obligations. This is the case even though pension asset values have increased in recent years, from about $2.4 trillion in 2008 to about $4.2 trillion in 2017 (adjusted for inflation and measured in 2012 dollars). 
This suggests that state and local governments may need to take additional steps to manage their pension obligations by reducing benefits or increasing employees’ contributions. Along with pension contributions, other types of state and local government expenditures are projected to grow more slowly than GDP. For example, in 2017, wages and salaries of state and local government employees constituted a large expenditure for the sector. However, these expenditures are projected to decline as a percentage of GDP during the simulation period. Our simulations also suggest that state and local governments’ capital outlays—which include spending on infrastructure, such as buildings, highways and streets, sewer systems, and water systems, as well as equipment and land— will grow more slowly than GDP if state and local governments continue to provide current levels of capital per resident. Growth in Medicaid Grants and Personal Income Taxes Drive Revenues Our simulations suggest that federal grants overall will increase as a share of GDP, while Medicaid grants will likely grow more quickly than other types of federal grants (see figure 7). Thus, Medicaid grants will likely make up an increasing share of revenues in the future. Since Medicaid is a matching formula grant program, the projected increase in federal Medicaid grants, therefore, reflects expected increased Medicaid expenditures that will be shared by state governments. Our simulations also suggest that federal investment grants (i.e., grants intended to finance capital infrastructure investments) and other federal grants unrelated to Medicaid (i.e., grants intended to finance education, social services, housing, and community investment) are likely to decline as a share of GDP. Further, our simulations suggest that if historical relationships between state and local governments’ tax revenues and tax bases persist, total tax revenues for the state and local government sector will increase from 8.8 percent of GDP in 2018 to 9.4 percent of GDP by the end of the simulation period. This increase is driven largely by the growth in personal income taxes, as shown in figure 8. Specifically, our simulations suggest that personal income tax revenues will increase as a share of GDP by about 1 percentage point during the simulation period. Sales taxes and property taxes, on the other hand, are projected to remain relatively constant as a share of GDP during the simulation period through 2067. Policy Changes and Other Considerations Could Affect the State and Local Government Sector’s Fiscal Outlook While our long-term simulations do not account for pending or future federal policy changes that will result in changes to expenditures and revenues, an understanding of several recent federal policy changes related to taxes and health care are important to note because they present sources of uncertainty for the state and local government sector’s long-term fiscal outlook. In addition, as is the case in any model that is reliant on historical data to simulate a long-term outlook, other considerations, such as economic growth and rates of return on pension assets, could shift future fiscal outcomes. These policy changes and uncertainties are discussed below and may help federal policy makers and state and local governments consider how these changes could affect the long-term outlook. 
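One way to see how personal income taxes can outgrow GDP is through a revenue elasticity greater than one with respect to the tax base: when revenue grows proportionally faster than the base, its share of GDP drifts upward. The short sketch below illustrates only that mechanic; the elasticity, growth rate, and starting share are assumed values for illustration, not estimates from GAO's model.

```python
# Hedged illustration: a tax whose revenue elasticity with respect to its base
# exceeds one gains share of GDP over time. All parameters are assumptions.
import numpy as np

years = 50
base_growth = 0.04          # assumed nominal growth of the tax base (and of GDP)
elasticity = 1.1            # assumed elasticity of revenue with respect to the base
start_share = 2.3           # assumed starting revenue share of GDP, in percent

gdp_index = (1 + base_growth) ** np.arange(years + 1)
# Revenue grows at roughly elasticity times the base growth rate.
revenue_index = np.exp(elasticity * np.log(1 + base_growth) * np.arange(years + 1))

share = start_share * revenue_index / gdp_index
print(f"Illustrative income tax share of GDP: {share[0]:.1f}% in year 0, "
      f"{share[-1]:.1f}% after {years} years")
```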
Tax-and Health-Related Policies Could Affect the Sector’s Long-Term Fiscal Outlook Tax Policies Recently enacted legislation, such as Public Law 115-97, commonly referred to by the President and administrative documents as the Tax Cuts and Jobs Act (TCJA), could affect the sector’s revenues over the long-term. Enacted in December 2017, TCJA included significant changes to corporate and individual tax law, with implications for state and local government tax collections. In particular, for individual taxpayers, for tax years 2018 through 2025, tax rates were lowered for nearly all income levels, some deductions from taxable income were changed (personal exemptions were eliminated, while the standard deduction was increased), and certain credits, such as the child tax credit, were expanded. The effect of TCJA on the long-term state and local fiscal outlook is still evolving, and will likely depend on how states incorporate the law’s changes into their state income tax rules. That is, because some states link their state income taxes to federal income tax rules, states must decide whether to let the changes from TCJA flow through to their state income tax systems, or establish new state income tax rules. For example, some states have adopted the federal definition of taxable income as a starting point for state tax calculations, while other states use the federal definition of adjusted gross income as a starting point. The choices states make to continue to link to these definitions could have long-term implications for their state tax revenues. In addition, under TCJA, the amount of the federal itemized deductions allowed for all state and local income, sales, and property taxes (commonly referred to as the state and local tax (SALT) deduction) is now capped at $10,000 for tax years 2018 to 2025. The magnitude or net effect of these changes is uncertain in that states are still working to understand the impact of the tax laws on their revenues. It remains to be seen whether and how states will see changes in their revenues in the future. Moreover, a recent U.S. Supreme Court decision involving state sales taxes could have implications for states’ ability to collect revenue. Specifically, the court’s ruling in June 2018 in South Dakota v. Wayfair, Inc. held that states could require out-of-state sellers to collect and remit sales taxes on purchases made from those out-of-state sellers, even if the seller does not have a substantial physical presence in the taxing state. Prior to this ruling, a seller that did not have a substantial physical presence in a state could not be required to collect and remit a sales tax on goods sold into the state. Instead, a purchaser may have been required to pay a use tax (i.e., a tax levied on the consumer for the privilege of use, ownership, or possession of taxable goods and services) in the same amount to his or her state government. In 2017, we reported that states could realize between an estimated $8.5 billion and $13.4 billion in additional state sales tax revenue across all states if all sellers were required to collect taxes on all remote sales at current rates. The extent to which states realize changes in sales tax revenue will likely depend on how they revise their state laws and enforcement efforts in response to this June 2018 ruling. Health Care Policies Enacted health care legislation could also affect the long-term fiscal position of state and local governments. 
As we have reported in prior work, the effect of the Patient Protection and Affordable Care Act (PPACA) on the long-term state and local fiscal outlook could depend on how states implement PPACA, and on future rates of health care cost growth. For example, consider the states that have opted, under PPACA, to expand Medicaid program coverage to millions of lower income adults. While the federal government is expected to cover a large share of the costs of the Medicaid expansion, these states are ultimately expected to bear some of the costs. Specifically, the federal government reimbursed 100 percent of the costs of the expanded population beginning in 2014. This reimbursement rate will decline from the 2018 reimbursement rate of 94 percent to 90 percent by 2020. As such, the reduced federal reimbursement rate may affect those states that expanded their Medicaid populations in recent years. As discussed earlier in this report, our simulations suggest that Medicaid spending will make up an increasing share of the state and local government sector’s operating expenditures in the future. A weakening of the economy could add to the fiscal pressures states face in funding these Medicaid obligations. As our prior work has shown, past recessions in 2001 and 2007 hampered states’ ability to fund increased Medicaid enrollment and maintain their existing services. Specifically, Medicaid enrollment increased during these recessions, in part due to increased unemployment, which led more individuals to become eligible for the program. We have also reported on the use of Medicaid demonstrations, which allow states to test new approaches to coverage to improve quality and access, or generate savings or efficiencies. Specifically, CMS may waive certain Medicaid requirements and approve new types of expenditures that would not otherwise be eligible for federal Medicaid matching funds. For example, under demonstrations, states have extended coverage to certain populations, provided services not otherwise eligible for Medicaid, and made payments to providers to incentivize delivery system improvements. We previously reported that, as of November 2016, nearly three-quarters of states have CMS- approved demonstrations. In fiscal year 2015, federal spending under demonstrations represented a third of all Medicaid spending nationwide. We also reported that in 10 states, federal spending on demonstrations represented 75 percent or more of all federal spending on Medicaid. Joint financing of Medicaid is a fixture of this federal-state partnership. Demonstration waivers hold the potential for changing state Medicaid spending. However, as we have reported, these demonstrations are required, under HHS policy, to achieve budget neutrality and not raise costs for the federal government. Economic Growth and Other Factors Could Affect the Sector’s Fiscal Outlook In addition to federal tax- and health-related policy changes, a number of other factors could affect the state and local government sector’s long- term fiscal outlook. Specifically, we developed simulations using alternative assumptions of the growth of key model variables—which include economic growth, health care excess cost growth, and the rate of return on pension assets. We determined that changes in the growth projections of these key variables could affect the operating balance of state and local governments, thereby shifting future fiscal outcomes for the sector. 
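A minimal sketch of this kind of one-variable-at-a-time sensitivity analysis appears below. The toy projection's structure and every parameter in it are simplifying assumptions made only to show how changing one growth assumption at a time shifts a simulated operating balance; they are not GAO's fiscal model or its projections.

```python
# Hedged sketch of one-at-a-time sensitivity analysis on a toy operating-balance
# projection. The structure and all parameter values are illustrative assumptions.
import numpy as np

def operating_balance_share(real_gdp_growth, excess_cost_growth, years=50,
                            pop_growth=0.005):
    """Return a toy operating balance as a share of GDP for each simulated year."""
    t = np.arange(years)
    revenue_share = 0.138                                   # assumed flat share of GDP
    health_share = 0.041 * (1 + excess_cost_growth) ** t    # outpaces GDP by assumption
    wage_like_share = 0.080                                 # assumed to track GDP
    # capital-like spending is held flat per resident (an assumption), so its share
    # of GDP shrinks when GDP grows faster than population
    capital_like_share = 0.030 * ((1 + pop_growth) / (1 + real_gdp_growth)) ** t
    return revenue_share - (health_share + wage_like_share + capital_like_share)

scenarios = {
    "slower GDP growth (1.5%)": (0.015, 0.008),
    "baseline (2.1%)":          (0.021, 0.008),
    "faster GDP growth (2.8%)": (0.028, 0.008),
    "zero excess cost growth":  (0.021, 0.000),
}
for name, (g, ecg) in scenarios.items():
    balance = operating_balance_share(g, ecg)
    print(f"{name:>26}: year-50 operating balance = {100 * balance[-1]:+.2f}% of GDP")
```

The subsections that follow describe the actual alternative projections used for economic growth, health care excess cost growth, and the rate of return on pension assets.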
Economic Growth Future trends in GDP growth could affect the state and local government sector’s fiscal outlook. Data from CBO and the Board of Trustees of the Federal Old-Age and Survivors Insurance and Federal Disability Insurance Trust Funds (OASDI Trustees) project real GDP to grow by 1.9 percent per year on average from 2018 through 2028, and by 2.1 percent per year on average after 2028, respectively. Using these projections, our simulations suggest that maintaining current policies would cause the sector’s expenditures to exceed its revenues and that the difference between revenues and expenditures would become increasingly negative during the next several decades. However, simulations we developed using the OASDI Trustees’ alternative projections of real GDP growth suggest that the difference between revenues and expenditures would expand before narrowing toward the end of the simulation period if real GDP were to grow at a faster rate—2.8 percent per year on average—as shown in figure 9. Our simulations also show that if GDP were to grow at a slower rate—1.5 percent per year on average—the difference between revenues and expenditures would expand. This would result in an increasingly negative operating balance during the simulation period. As discussed earlier in this report, excess cost growth in health care is another key determinant of the sector’s fiscal balance. Data from CBO project Medicaid spending per capita to grow about 1.5 percent faster than GDP per capita on average for the period from 2019 through 2028. Data from CMS project Medicaid spending per capita to grow about 0.6 percent faster on average for the period from 2029 through 2067. Data from CMS also project national health expenditures per capita to grow about 0.8 percent faster than GDP per capita for the period from 2018 through 2067. Using these projections, our simulations suggest that maintaining current policies will cause the sector’s expenditures to exceed its revenues, and that the difference between revenues and expenditures will become increasingly negative during the next several decades. However, simulations developed using alternative projections of excess cost growth in Medicaid and national health expenditures suggest that the difference between revenues and expenditures may be reduced but not eliminated within the simulation period if excess cost growth in health care is zero. In the scenario where excess cost growth rises faster—0.7 percent on average for Medicaid for the period from 2029 through 2067 and 1 percent for national health expenditures for the period from 2018 through 2067—our simulations show that the difference between revenues and expenditures will persist for the remainder of the simulation period (see figure 10). The rate of return on pension assets could also affect the state and local government sector’s fiscal outlook. Based on an inflation-adjusted rate of return on pension assets of 5 percent, our simulations suggest that state and local governments will need to make pension contributions equivalent to about 12.9 percent of their wages and salaries to meet their long-term pension obligations. However, this estimate is sensitive to the rate of return on state and local governments’ pension assets. Simulations we developed using a higher rate of return—7.5 percent—suggest that pension contributions needed to meet pension obligations would be about 3 percent of state and local government employees’ wages and salaries. 
In addition, under this scenario, our simulations suggest that the difference between revenues and expenditures will be reduced, but not eliminated within the simulation period. Alternatively, we estimated that if the rate of return on pension assets is relatively low—at 2.5 percent— required pension contributions would need to be about 23 percent of state and local government employees’ wages and salaries during the simulation period. Under this scenario, our simulations show that the sector’s negative operating balance will continue to grow larger throughout the simulation period. It follows therefore, that high rates of return on pension assets are associated with an improved outlook for state and local governments, and vice versa (see figure 11). This report was prepared under the direction of Michelle A. Sager, Director, Strategic Issues, who can be reached at (202) 512-6806 or sagerm@gao.gov, and Oliver M. Richard, Director, Center for Economics, who can be reached at (202) 512-8424 or richardo@gao.gov if there are any questions. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: State and Local Government Fiscal Model Simulation Methodology Data To simulate measures of fiscal balance for the state and local government sector for the long term, we used aggregate data on the state and local government sector and national data on other variables from the following sources: Agency for Healthcare Research and Quality; Board of Governors of the Federal Reserve System; Board of Trustees of the Federal Old-Age and Survivors Insurance and Federal Disability Insurance Trust Funds (OASDI Trustees); Bureau of Economic Analysis (BEA); Bureau of Labor Statistics; Centers for Medicare & Medicaid Services (CMS); Congressional Budget Office (CBO); and Federal Reserve Bank of St. Louis. Model Specification Overview Our approach generally follows the approach used in GAO-08-317 and in subsequent updates of that report. Specifically, we developed a model that projects the levels of receipts and expenditures of the state and local government sector (henceforth, the sector) in future years based on current and historical spending and revenue patterns. We use table 3.3 of the National Income and Product Accounts (NIPA)—State and Local Government Current Receipts and Expenditures—prepared by BEA at the U.S. Department of Commerce as an organizing framework for developing our model of the sector’s revenues and expenditures (see table 1). In this table, current revenues are grouped in five main categories. Current tax receipts. These receipts are tax payments made by persons or businesses to state and local governments. They include income taxes, general sales taxes, property taxes, and excise taxes. Current taxes also include fees for motor vehicle licenses, drivers’ licenses, and business licenses. Social insurance contributions. These contributions finance the provision of certain social benefits to qualified persons, and include contributions from employers and employees for temporary disability insurance, worker’s compensation insurance, and other programs. Income receipts from government assets. These receipts include interest, dividends, and rental income, such as royalties paid on drilling on the outer continental shelf. Also, state and local governments earn interest and dividend income on financial assets. Current transfer receipts. Transfer receipts are receipts for which state and local governments provide nothing of value in return. 
Current transfer receipts include federal grants, fines, fees, donations, and tobacco settlements. Also included are net insurance settlements, certain penalty taxes, court fees, and other miscellaneous transfers. Current surplus of government enterprises. This surplus is a profit- type measure for state and local government enterprises, such as water, sewer, gas, and electricity providers; toll providers; liquor stores; air and water terminals; public transit; and state lotteries. Some types of enterprises, such as state lotteries, consistently earn surpluses which are used to fund general government activities. In contrast, many enterprises run deficits, which, in turn, reduce receipts. State and local governments also receive income from the sale of goods and services, such as school tuition. In the NIPAs, this income is treated as an offset against expenditures, not revenue. This income comes from voluntary purchases that might have been made from a private sector provider of such services. In addition to current receipts, state and local governments receive capital transfer receipts. These receipts include estate and gift taxes, and federal government investment grants for capital such as highways, transit, air transportation, and water treatment plants. State and local government current expenditures are grouped into four main categories. Consumption expenditures. Generally, spending for which some value is provided in return. State and local government consumption spending is the sum of inputs used to provide goods and services, including compensation of general government employees, consumption of general government fixed capital (depreciation), and intermediate goods and services purchased, less sales to other sectors and own-account investment. Current transfer payments. Payments for which nothing of value is provided in return. For state and local governments, current transfer payments consist primarily of social benefits, which are payments to persons to provide for needs that arise from circumstances such as sickness, unemployment, retirement, and poverty. There are two kinds of social benefits—benefits from social insurance funds, such as temporary disability insurance and workers’ compensation, and other social benefits, such as medical benefits from Medicaid and the state Children’s Health Insurance Program (CHIP), family assistance from Temporary Assistance to Needy Families, education assistance, and other public assistance programs. While NIPA table 3.3 also includes other current transfer payments to the rest of the world as part of current transfer payments, these amounts are generally equal to zero. Interest payments. These include actual and imputed interest and represent the cost of borrowing by state and local governments to finance their capital and operational costs. Subsidies. State and local government subsidies are largely payments to railroads. State and local government spending also includes gross investment, capital transfer payments, and net purchases of nonproduced assets. Gross investment is spending on capital goods like structures, equipment, and intellectual property—items that are called fixed assets or capital because of their repeated or continuous use in providing government services for more than 1 year. Structures include residential and commercial buildings, highways and streets, sewer systems, and water systems. State and local government capital transfer payments include disaster-related insurance benefits paid to the U.S. 
territories and the Commonwealths of Puerto Rico and Northern Mariana Islands. Net purchases of nonproduced assets are composed of net purchases of land less oil bonuses (payments to states for the long-term rights to extract oil). Our main indicator of the sector’s fiscal balance is its operating balance net of funds for capital expenditures (henceforth, operating balance), which is a measure of the sector’s ability to cover its current expenditures out of current revenues. The operating balance is defined as total receipts minus (1) capital outlays not financed by medium- and long-term debt issuance, (2) total current expenditures less depreciation, (3) current surplus of state and local government enterprises, and (4) net social insurance fund balance. Alternative indicators of fiscal balance include net saving and net lending or borrowing. Net saving is the difference between current receipts and current expenditures. Since current expenditures exclude capital investment but include a depreciation measure, net saving can be thought of as a measure of the extent to which governments are covering their current operations from current receipts. Net lending or borrowing is the difference between total receipts and total expenditures, and is analogous to the federal unified surplus or deficit. Total receipts differ from current receipts because they include capital transfer receipts. Total expenditures differ from current expenditures because they include capital investment, capital transfer payments, and net purchases of nonproduced assets. However, they exclude fixed capital consumption. The former three categories are cash expenditures, while the latter is a noncash charge. Net lending or net borrowing represents the governments’ cash surplus or borrowing requirement. This measure is normally negative because governments borrow to finance their capital investment (and sometimes to finance current operations as well). The following equations describe how we simulated state and local government receipts and expenditures, as well as the intermediate variables used in those simulations. For this update, we started with historical data for 2017, or the most recent year available, and then simulated each variable for each year from 2018 through 2092 (the simulation period). National Demographic, Macroeconomic, and Health Care Variables To simulate state and local government receipts and expenditures, we use simulations of various national-level demographic, macroeconomic, and health care variables derived from projections produced by CBO, CMS, and the OASDI Trustees, and otherwise derived using our own assumptions (see table 2). This approach is similar to the approach we have used in prior model updates. State and Local Government Defined Benefit Pension Contribution Rate To simulate state and local government spending on defined benefit pensions, we first estimate the contribution rate (as a fraction of state and local government general government wages and salaries) that state and local governments would need to make each year going forward to ensure that their pension systems are fully funded on an ongoing basis. Our goal is to estimate the financial commitments to employees that have been and are likely to continue to be made by the state and local sector to better understand the full fiscal outlook for the sector. 
As such, our analysis projects the liabilities that the sector is likely to continue to incur in the future based on simulations of future numbers of retirees receiving pension benefits and their benefit amounts; future numbers of employees, their wages and salaries, and their pension contributions; and assets in state and local government defined benefit pension funds. Although we are only interested in applying contribution rates over the simulation time frame, we actually have to derive the contribution rate for a longer time frame in order to find the steady-state level of necessary contributions. This longer time frame is required because the estimated contribution rate increases as the projection horizon increases and eventually converges to a steady state. If the projection period is of insufficient length, the steady-state level of contribution is not attained, and the necessary contribution rate is understated. We simulated variables used to estimate the pension contribution rate using the approach summarized in table 3. This approach is similar to the approach we have used in prior model updates. Future growth in the number of state and local government retirees— many of whom will be entitled to pension and health care benefits—is largely driven by the size of the workforce in earlier years. We simulated the number of state and local government retirees by assuming that the growth rate in the number of retirees is a weighted average of the growth rates in lagged general government and government enterprise employment. We estimated the weights using a regression of the percent change in the number of retirees on the percent change in employment 1, 6, 11, 16, 21, 26, 31, 36, and 41 years in the past. The coefficients on the past percentage changes in employment were constrained to be non- negative and to sum to 1. For this regression, we removed cyclical swings in employment using the Hodrick-Prescott filter. Similarly, future changes in the real amount of pension benefits will be a function of past changes in real wages and salaries. As indicated in table 3, we used a weighted average of past values of the state and local government employment cost index to simulate the employment cost index for state and local government retirees. We chose the weights to reflect changes in the share and average real benefit level of three subsets of the retiree population over time: (1) new retirees entering the beneficiary pool, (2) deceased retirees leaving the pool, and (3) continuing retirees from the previous year. We simulated the weight for new retirees in a year as the number of retirees less the number of continuing retirees divided by the number of retirees. We simulated the weight for deceased retirees as the mortality rate multiplied by last year’s retirees divided by this year’s retirees. We simulated the weight for continuing retirees as last year’s retirees divided by this year’s retirees. Finally, we simulated the employment cost index for state and local government retirees as the sum of the weight on new retirees multiplied by the state and local government employment cost index and the weight on continuing retirees multiplied by the state and local government employment cost index 8 years prior, less the weight on deceased retirees multiplied by the state and local government employment cost index 21 years prior. 
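The weighting just described can be sketched in a few lines of code. The employment cost index path, retiree counts, and mortality rate below are placeholder assumptions; only the three weights and the 8-year and 21-year lags follow the description above.

```python
# Hedged sketch of the retiree employment cost index (ECI) weighting described
# above. The ECI path, retiree counts, and mortality rate are placeholder
# assumptions; the weighting scheme and the 8- and 21-year lags follow the text.
import numpy as np

years = np.arange(1990, 2031)
eci = 100 * 1.03 ** np.arange(len(years))         # assumed 3 percent annual ECI growth
retirees = 10.0 * 1.02 ** np.arange(len(years))   # assumed 2 percent growth in retirees (millions)
mortality = 0.04                                  # assumed annual mortality rate

def retiree_eci(t):
    """Weighted-average ECI for retirees in year index t (needs 21+ years of history)."""
    r_now, r_prev = retirees[t], retirees[t - 1]
    deceased = mortality * r_prev
    continuing = r_prev - deceased
    w_new = (r_now - continuing) / r_now          # weight on new retirees
    w_cont = r_prev / r_now                       # weight on continuing retirees
    w_dec = deceased / r_now                      # weight on deceased retirees
    # the three weights sum to one: w_new + w_cont - w_dec == 1
    return w_new * eci[t] + w_cont * eci[t - 8] - w_dec * eci[t - 21]

t = len(years) - 1
print(f"Illustrative retiree ECI in {years[t]}: {retiree_eci(t):.1f} "
      f"(current-employee ECI: {eci[t]:.1f})")
```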
As discussed above, we started with historical data for 2017, or the most recent year available, simulated all of the variables in table 3 over the long run, and then used the consumer price index (CPI) and the real return on pension assets to calculate the total present value of wages and salaries for state and local government general government and government enterprise employees, the total present value of real pension benefits paid to state and local government retirees, and the total present value of state and local government employees’ pension contributions. Then, we calculated the total present value of state and local governments’ pension liabilities as the total present value of real pension benefits paid to state and local government retirees less the total present value of state and local government employees’ pension contributions, and the value of assets in state and local government defined benefit pension funds in 2017. Finally, we estimated state and local governments’ pension contribution rate as the ratio of the total present value of their pension liabilities to the total present value of wages and salaries for state and local government employees. Interest Rates on State and Local Government Financial Assets and Liabilities Table 4 summarizes the approach we used to simulate interest rates on state and local government financial assets and liabilities. This approach is similar to the approach we have used in prior model updates. State and Local Government Receipts Table 5 summarizes our approach to simulating state and local government receipts. This approach is similar to the approach we have used in prior model updates. These variables track state and local government receipts in table 1 above as follows: State and local government personal income tax revenue is the sum of state personal income tax revenue and local personal income tax revenue; State and local government personal tax revenue is the sum of personal income tax revenue and other personal tax revenue; State and local government revenue from taxes on production and imports is the sum of general sales tax revenue, excise tax revenue, property tax revenue, and revenue from other taxes on production and imports; State and local government current tax revenue is the sum of personal tax revenue, revenue from taxes on production and imports, and corporate income tax revenue; State and local government current transfer receipts are equal to federal Medicaid grants minus Medicare Part D payments to the federal government, plus other federal grants (excluding investment grants), transfer receipts from businesses, and transfer receipts from persons; State and local government current receipts are the sum of current tax revenue, current transfer receipts, income on assets, social insurance contributions, and government enterprise surplus; State and local government capital transfer receipts are the sum of federal investment grants and estate and gift tax revenue; and State and local government total receipts are the sum of current receipts and capital transfer receipts. State and Local Government Expenditures Our general approach to simulating state and local government expenditures is to assume that state and local governments maintain the current level of public goods and services provision per capita (see table 6). Thus, we generally assume that expenditures keep up with U.S. 
population growth and some measure of inflation, where the relevant rate of inflation varies depending on the specific type of expenditure. However, we use alternative approaches—described below—to simulate depreciation, interest payments, and social benefits for health care. This approach is similar to the approach we have used in prior model updates. These variables correspond to state and local government expenditures in table 1 as follows: Employee compensation is the sum of wages and salaries, pension contributions, health benefits for current employees, health benefits for retirees, and other compensation, for state and local government general government employees. Consumption expenditures are the sum of employee compensation, general government fixed capital consumption, and other general government consumption expenditures. Social benefit payments are the sum of Medicaid benefits, non- Medicaid health benefits, and non-health social benefits. Current expenditures are the sum of consumption expenditures, social benefit payments, interest payments, and subsidy payments. Total expenditures are the sum of current expenditures, gross investment, capital transfer payments, and purchases of nonproduced assets, minus general government and government enterprise fixed capital consumption. State and Local Government Financial Assets and Liabilities Table 7 summarizes our approach for simulating state and local government financial assets and liabilities. This approach is similar to the approach we have used in prior model updates. Our method for simulating the sectors’ short-term debt outstanding leverages the fact that for any entity, there is a direct relationship between budget outcomes and changes in financial position. Specifically, if expenditures exceed receipts, the gap needs to be financed by some combination of changes in financial assets and changes in financial liabilities. If governments spend more than they take in, they must pay for it by issuing debt, cashing in assets, or some combination of the two. Conversely, if receipts exceed expenditures and the sector is a net lender, its net financial investment (the net change in financial assets minus the net change in financial liabilities) must equal the budget surplus. The relationship between budget outcomes and the sector’s financial position is shown in the following accounting identity: total receipts – total expenditures = change in financial assets – change in financial liabilities. The sector’s financial liabilities include short-, medium-, and long-term debt; trade payables; and loans from the federal government, so the accounting identity can be rewritten as follows: total receipts – total expenditures = change in financial assets – change in medium- and long-term debt – change in trade payables – change in federal government loans – change in short term debt. For a given difference between total receipts and total expenditures, various combinations of changes in financial assets and changes in financial liabilities can satisfy this identity. However, we assumed that financial assets grow at the same rate as U.S. GDP, that medium- and long-term debt outstanding is determined using the historical relationship described in table 7, that federal government loans to state and local governments are determined using the historical relationship described in table 7, and that trade payables grow at the same rate as other state and local government consumption spending. 
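A minimal sketch of rearranging this identity to back out one remaining term is shown below; the dollar amounts are made up for a single illustrative year.

```python
# Hedged sketch: solving the identity above for the change in short-term debt.
# Dollar amounts (billions) are made up for a single illustrative year.
def change_in_short_term_debt(total_receipts, total_expenditures,
                              d_financial_assets, d_medium_long_term_debt,
                              d_trade_payables, d_federal_loans):
    """Rearranges: receipts - expenditures = dAssets - dMLDebt - dPayables
    - dFedLoans - dShortDebt, so dShortDebt is the remaining residual."""
    balance = total_receipts - total_expenditures
    return (d_financial_assets - d_medium_long_term_debt
            - d_trade_payables - d_federal_loans - balance)

# Hypothetical year in which the sector runs a cash deficit of 100:
d_short = change_in_short_term_debt(
    total_receipts=2_900, total_expenditures=3_000,
    d_financial_assets=60, d_medium_long_term_debt=80,
    d_trade_payables=10, d_federal_loans=5)
print(f"Implied change in short-term debt: {d_short:+.0f} billion")
```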
If the first four terms on the right hand side of the identity are already determined, then only the fifth term— the change in short-term debt—is free to satisfy this identity. State and Local Government Fiscal Balance As discussed above, our indicators of fiscal balance are operating balance, net saving, and net lending or borrowing. This approach is similar to the approach we have used in prior model updates. Recall that we defined operating balance as follows: operating balance = total receipts – (gross investment + capital transfer payments + net purchases of nonproduced assets – medium- and long-term debt issuance) – (current expenditures – consumption of general government fixed assets) – current surplus of state and local government enterprises – net social insurance fund balance. By substituting for total receipts and current expenditures using the relationships described above and rearranging terms, we can also calculate operating balance using a formula that more easily identifies its revenue components—the items in the first set of parentheses—and expenditure components—the items in the second set of parentheses: operating balance = (current tax revenues + estate and gift tax revenues + social insurance fund contributions + income receipts from assets + current transfers + federal investment grants + medium- and long-term debt issuance) – (compensation of general government employees + social benefit payments + interest payments + gross investment + capital transfer payments + net purchases of nonproduced assets + other general government consumption expenditures + subsidy payments + net social insurance fund balance). Estimated Historical Relationships Some of our simulations are based on estimated historical relationships between pairs of variables: Elasticity of real personal consumption expenditures less food and services with respect to real wages and salaries; Elasticity of the real U.S. market value of real estate with respect to Relationship between effective interest rates on financial assets and Relationship between state and local government bond yields and 10- year Treasury rates; Relationship between effective interest rates on long-term state and local government debt and federal government loans and state and local government bond yields; Elasticity of real state personal income tax revenue with respect to Elasticity of real state and local government excise tax revenue with respect to real wages and salaries; Relationship between long-term debt issuance as a fraction of gross investment and nonproduced asset purchases in excess of federal investment grants and the change in state and local government bond yields; and Relationship between real federal government lending to state and local governments and real U.S. GDP. To estimate each of these historical relationships, we used the following approach: first, we assessed the order of integration of both variables using unit root tests of the levels and the first differences, where a variable is integrated of order 0 (I(0) or stationary) if we rejected the null hypothesis of a unit root in the levels at standard significance levels, and is integrated of order 1 (I(1) or first-order nonstationary) if we could not reject the null hypothesis of a unit root in the levels but we could do so for the first differences. 
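A minimal sketch of this unit root screening step is shown below. It uses the augmented Dickey-Fuller test from the statsmodels Python library on made-up annual series; the series, sample size, and 5 percent significance level are illustrative assumptions rather than the tests or data summarized in tables 8 and 9.

```python
# Hedged sketch of the unit-root screening step described above, using the
# augmented Dickey-Fuller test from statsmodels on made-up annual series.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def order_of_integration(series, alpha=0.05):
    """Classify a series as I(0) if the ADF test rejects a unit root in levels,
    I(1) if it rejects only in first differences, otherwise leave it unclassified."""
    p_level = adfuller(series)[1]
    if p_level < alpha:
        return "I(0)"
    p_diff = adfuller(np.diff(series))[1]
    return "I(1)" if p_diff < alpha else "unclassified"

rng = np.random.default_rng(42)
stationary = rng.normal(size=60)                  # white noise: should look I(0)
random_walk = np.cumsum(rng.normal(size=60))      # random walk: should look I(1)
print("white noise:", order_of_integration(stationary))
print("random walk:", order_of_integration(random_walk))
```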
For relationships between variables that were both stationary, we estimated an autoregressive distributed lag model of the form y_t = α_0 + α_1 y_(t−1) + … + α_p y_(t−p) + β_0 x_t + β_1 x_(t−1) + … + β_q x_(t−q) + ε_t, where y is the dependent variable, x is the independent variable, and ε is an independent, identically distributed error term. The long-run impact on y of a one unit change in x is given by (β_0 + β_1 + … + β_q) / (1 − α_1 − … − α_p). We initially chose the number of lags based on the Bayesian Information Criteria and then added additional lags of the dependent variable, if needed, until the residuals were consistent with a white noise process at standard significance levels. For relationships between variables that were both first-order nonstationary, we used the same approach but also used the Pesaran, Shin, and Smith bounds test for the existence of a cointegrating (long-run equilibrium) relationship. We concluded that the variables were cointegrated if we rejected the null hypothesis of no relationship at standard significance levels. Tables 8 and 9 summarize the estimated regression models as well as the results of the unit root, white noise, and cointegration tests. Indicators of Fiscal Balance for the State and Local Government Sector We simulated the model for the 75-year period from 2018 through 2092, and we used the results to calculate the operating balance for the state and local government sector as a percentage of U.S. GDP. Our results suggest that if the sector maintains current policy and continues to provide current per capita levels of public goods and services, then its operating balance will decline from about -1 percent of U.S. GDP to about -3 percent of U.S. GDP over the next 50 years. To shed light on how maintaining the operating balance at or above zero would affect the state and local government sector, we used the model to simulate the level of total expenditures that would keep the operating balance greater than or equal to zero. We then calculated the difference between the present value of total expenditures simulated assuming the sector maintains balance, and the present value of total expenditures simulated assuming the sector maintains current policies, both as a percentage of the present value of total expenditures assuming the sector maintains current policies, and as a percentage of the present value of U.S. GDP. We calculated all of the present values for the 50-year period from 2018 through 2067, and we used a discount rate equal to the average of the 3-month Treasury rate and the 10-year Treasury rate for each year. Our results suggest that the difference between the present value of total expenditures that maintain balance and the present value of total expenditures that maintain current policies is about -14.7 percent of the present value of total expenditures that maintain current policies, or about -2.4 percent of the present value of U.S. GDP. That is, our simulations suggest that maintaining balance would require the sector to spend about 14.7 percent less than it would spend each year to maintain current policies. We note that a similar exercise based on simulating total revenues required to maintain the operating balance at or above zero would generate a similar result. Caveats and Limitations Our approach has a number of limitations and the results should be interpreted with caution: The state and local government fiscal model is not designed for certain types of analyses. The simulations are not intended to provide precise predictions.
Even though we know that these governments regularly make changes to tax laws and expenditures, the model essentially holds current policy in place and analyzes the fiscal future for the sector as if those policies were maintained because it would be highly speculative to make any assumptions about future policy adjustments. Fiscal outcomes, as related to the state and local government sector’s financial position and solvency, may not reflect all aspects of the sector’s fiscal health. Other indicators include economic indicators that go beyond the sector’s financial position to include economic growth, income, or distributional equity, as well as indicators of the quality of services provided by the sector, including education, health care, infrastructure, and other public goods and services. Our unit of analysis is the state and local government sector as a whole, so our results provide an assessment of the sector’s fiscal outlook. However, individual state and local governments likely exhibit significant heterogeneity in their expenditure and revenue patterns, so their fiscal outlooks will likely differ from that for the sector. Nevertheless, it is informative to assess the overall fiscal outlook of the sector because doing so reveals the outlook for the average state or local government. In addition, aggregate data on the sector are available on a more timely basis than data for individual state and local governments. This allows for a better assessment of the sector’s current fiscal outlook. Our results for the sector also provide a baseline from which to view the experiences of individual state and local governments. Finally, assessing the fiscal outlook of the sector as a whole can help mitigate the tendency to extrapolate from the most visible, but potentially not representative, experiences of individual states or localities. Appendix II: State and Local Government Fiscal Model Alternative Simulations Our baseline approach to simulating the fiscal outlook for the state and local government sector is described in appendix I. As part of our simulation approach, we used five variables with values for the simulation period—the period from 2018 through 2092—that are projected outside the model and that do not rely on maintaining historical relationships: U.S. population, real U.S. gross domestic product (GDP) growth, national health care excess cost growth, Medicaid excess cost growth, and the real rate of return on pension assets. U.S. population. For our baseline simulations, we used the Board of Trustees of the Federal Old-Age and Survivors Insurance and Federal Disability Insurance Trust Funds’ (OASDI Trustees) intermediate population projections. Real U.S. GDP. For our baseline simulations, we projected real U.S. GDP to grow at the same rate as Congressional Budget Office (CBO) projections for the period from 2018 through 2028 and to grow at the same rate as the OASDI Trustees’ intermediate projections of real U.S. GDP growth for the period from 2029 through 2092. National health expenditures excess cost growth. For our baseline simulations, we used Centers for Medicare & Medicaid Services’ (CMS) baseline projection of national health expenditures excess cost growth. Medicaid excess cost growth. For our baseline simulations, for the period from 2029 through 2092, we used Medicaid excess cost growth derived from CMS’s baseline projections. Real rate of return on state and local government pension assets. 
For our baseline simulations, we assumed a 5 percent real rate of return on state and local government pension assets. To assess the sensitivity of our results to changes in these baseline projections, we selected two alternative projections of each of these variables, one associated with a faster growth rate or rate of return and one associated with a slower growth rate or rate of return. U.S. population. For our alternative simulations, we used the OASDI Trustees’ high cost and low cost population projections. Real U.S. GDP. For our alternative simulations, we used the OASDI Trustees’ high cost and low cost projections of real U.S. GDP growth. National health expenditures excess cost growth. For our alternative simulations, we used CMS’s alternative projection of national health expenditures excess cost growth. As another alternative, we simulated the model assuming excess cost growth for national health expenditures is zero. Medicaid excess cost growth. For our alternative simulations, for the period from 2029 through 2092, we used Medicaid excess cost growth derived from CMS’s alternative projections for the period from 2029 through 2092. As another alternative, we simulated the model assuming Medicaid excess cost growth is zero for the period from 2029 through 2092. Real rate of return on state and local government pension assets. For our sensitivity analysis, we used real rates of return of 2.5 percent and 7.5 percent. Table 10 shows the average annual growth rate or rate of return associated with the baseline and alternative projections of each variable for the simulation period. For our simulations based on alternative assumptions about U.S. population growth and real U.S. GDP growth, as well as simulations based on alternative assumptions about real pension asset returns, we simulated the model changing one variable at a time and leaving the others fixed at their baseline values. For example, for one simulation we used the slower assumption for real U.S. GDP growth and the baseline assumptions for all other variables. For our simulations based on alternative assumptions about excess cost growth for national health expenditures and for Medicaid, we changed both variables in the same direction and left the others fixed at their baseline values. For example, for one simulation we used zero excess cost growth for both national health expenditures and for Medicaid, and made the baseline assumption for the other variables. Thus, our sensitivity analysis is in the spirit of a partial equilibrium comparative statics analysis that sheds light on how each of the individual variables may affect the state and local government sector’s fiscal outlook. However, these variables are likely to be correlated so future changes in one would likely be associated with changes in others. Appendix III: Related GAO Products State and Local Governments’ Fiscal Outlook: December 2016 Update, GAO-17-213SP. Washington, D.C.: Dec. 8, 2016. State and Local Governments’ Fiscal Outlook: December 2015 Update, GAO-16-260SP. Washington, D.C.: Dec. 16, 2015. State and Local Governments’ Fiscal Outlook: December 2014 Update, GAO-15-224SP. Washington, D.C.: Dec. 17, 2014. State and Local Governments’ Fiscal Outlook: April 2013 Update, GAO-13-546SP. Washington, D.C.: Apr. 29, 2013. State and Local Governments’ Fiscal Outlook: April 2012 Update, GAO-12-523SP. Washington, D.C.: Apr. 5, 2012. State and Local Government Pension Plans: Economic Downturn Spurs Efforts to Address Costs and Sustainability, GAO-12-322. 
Washington, D.C.: Mar. 2, 2012. State and Local Governments’ Fiscal Outlook: April 2011 Update, GAO-11-495SP. Washington, D.C.: Apr. 6, 2011. State and Local Governments: Knowledge of Past Recessions Can Inform Future Federal Fiscal Assistance, GAO-11-401. Washington, D.C.: Mar. 31, 2011. State and Local Governments: Fiscal Pressures Could Have Implications for Future Delivery of Intergovernmental Programs, GAO-10-899. Washington, D.C.: July 30, 2010. State and Local Governments’ Fiscal Outlook: March 2010 Update, GAO-10-358. Washington, D.C.: Mar. 2, 2010. Update of State and Local Government Fiscal Pressures, GAO-09-320R. Washington, D.C.: Jan. 26, 2009. State and Local Fiscal Challenges: Rising Health Care Costs Drive Long- term and Immediate Pressures, GAO-09-210T. Washington, D.C.: Nov. 19, 2008. State and Local Governments: Growing Fiscal Challenges Will Emerge during the Next 10 Years, GAO-08-317. Washington, D.C.: Jan. 22, 2008. Our Nation’s Long-Term Fiscal Challenge: State and Local Governments Will Likely Face Persistent Fiscal Challenges in the Next Decade, GAO-07-1113CG. Washington, D.C.: July 18, 2007. State and Local Governments: Persistent Fiscal Challenges Will Likely Emerge within the Next Decade, GAO-07-1080SP. Washington, D.C.: July 18, 2007. Appendix IV: GAO Contacts and Staff Acknowledgments GAO Contacts Acknowledgments In addition to the contacts listed above, Brenda Rabinowitz and Courtney LaFountain (Assistant Directors), David Aja, Brett Caloia, Ann Czapiewski, Joe Silvestri, Stewart Small, Andrew J. Stephens, Frank Todisco, Walter Vance, and Chris Woika made significant contributions to this report.
Why GAO Did This Study Fiscal sustainability presents a national challenge shared by all levels of government. Since 2007, GAO has published simulations of long-term fiscal trends in the state and local government sector, which have consistently shown that the sector faces long-term fiscal pressures. While most states have requirements related to balancing their budgets, deficits can arise because the planned annual revenues are not generated at the expected rate, demand for services exceeds planned expenditures, or both, resulting in a near-term operating deficit. This report updates GAO's state and local fiscal model to simulate the fiscal outlook for the state and local government sector. This includes identifying the components of state and local expenditures likely to contribute to the sector's fiscal pressures. In addition, this report identifies considerations related to federal policy and other factors that could contribute to uncertainties in the state and local government sector's long-term fiscal outlook. GAO's model uses the Bureau of Economic Analysis's National Income and Product Accounts as the primary data source and presents the results in the aggregate for the state and local sector as a whole. The model shows the level of receipts and expenditures for the sector until 2067, based on current and historical spending and revenue patterns. In addition, the model assumes that the current set of policies in place across state and local government remains constant to show a simulated long-term outlook. What GAO Found GAO's simulations suggest that the state and local government sector will likely face an increasing difference between revenues and expenditures during the next 50 years as reflected by the operating balance--a measure of the sector's ability to cover its current expenditures out of its current receipts. While both expenditures and revenues are projected to increase as a percentage of gross domestic product (GDP), a difference between the two is projected to persist because expenditures are expected to grow faster than revenues throughout the simulation period. GAO's simulations also suggest that growth in the sector's overall spending is largely driven by health care expenditures--in particular, Medicaid spending and spending on health benefits for state and local government employees and retirees. These expenditures are projected to grow as a share of GDP during the simulation period. GAO's simulations also suggest that revenues from personal income taxes and federal grants to states and localities will increase during the simulation period. However, revenues will grow more slowly than expenditures such that the sector faces a declining fiscal outlook. GAO also identified federal policy changes that could affect the state and local government sector's fiscal outlook. For example, the effects of the recently-enacted Tax Cuts and Jobs Act will likely depend on how states incorporate the Act into their state income tax rules. In addition, other factors, such as economic growth and rates of return on pension assets, could shift future fiscal outcomes for the sector.
Background

This section describes EM's cleanup sites and areas of cleanup work, EM's status as a program, the history of EM's requirements for operations activities, and key EM offices and DOE oversight bodies for EM's cleanup work.

EM Cleanup Sites and Areas of Cleanup Work

EM has a headquarters office and 16 sites at which the agency oversees cleanup work. Figure 1 shows the EM sites where cleanup work remains. EM divides its cleanup work into six work areas. These areas, described below, sometimes include both operations activities and capital asset projects:

1. spent nuclear fuel stabilization and disposition, including safe shipping, receipt, storage, and disposition of spent nuclear fuel and heavy water;
2. nuclear materials stabilization and disposition, including the management, disposition, safe surveillance, and maintenance of nuclear materials;
3. radioactive liquid waste stabilization and disposition, including treatment, management, and permanent disposal of radioactive liquid waste stored in storage tanks;
4. nuclear facility decontamination and decommissioning, including the deactivation, decontamination, and decommissioning of EM-owned nuclear, radioactive, and industrial buildings and structures;
5. solid waste stabilization and disposition, including receipt, treatment, storage, and disposal of legacy and newly generated low-level waste, mixed low-level waste, transuranic waste, hazardous waste, and sanitary waste; and
6. soil and water remediation, including cleanup of waste regulated under the Resource Conservation and Recovery Act and the Comprehensive Environmental Response, Compensation, and Liability Act.

EM's Status as a Program

EM refers to itself as a program, and EM's organization and mission fit PMI's definition of a program. According to PMI, programs include multiple program components, such as sub-programs (in EM's case, each cleanup site is a sub-program) and projects (in EM's case, the cleanup work at each site), which are interrelated and managed in a coordinated way to obtain benefits not available from managing them individually. According to PMI officials, organizations often use the terms "program" and "project" interchangeably, but the two terms have different meanings and apply to different levels of management. Programs are a means of executing a strategy and achieving organizational goals and objectives. A program may continue indefinitely. In contrast, a project is a temporary endeavor undertaken to create a unique product, service, or result. Projects are executed to improve the efficient implementation of a program. The relationship between a program and a project is illustrated in figure 2 below.

History of EM's Requirements for Operations Activities

In June 2009, EM developed the category of work that EM calls operations activities to differentiate this work from capital asset projects. Until then, EM managed all of its cleanup work as projects under Order 413.3B. EM documentation from that time explained that EM decided to differentiate its cleanup work so that it could quickly make use of an infusion of $6 billion for EM under the American Recovery and Reinvestment Act of 2009 (Recovery Act). EM officials stated that EM could not use the funds quickly at that time if the work had to follow the project management requirements in Order 413.3B. In 2010, shortly after the initiation of the Recovery Act work, EM decided to make the approach of managing part of its work as operations activities permanent.
EM officials could not provide any documentation from the time supporting this decision, which was not consistent with EM findings from 2009. In particular, according to EM documentation from 2009, executing all cleanup work under Order 413.3B had served EM well in defining and controlling the technical scope, project and life-cycle costs, completion dates, and risks of its cleanup work, and had helped EM improve its overall performance and become more efficient. EM began managing operations activities based on a memorandum developed by EM leadership. In 2012, EM developed the operations activities protocol, which superseded the 2010 memorandum for managing operations activities. This protocol stated that although operations activities are not subject to DOE’s Order 413.3B requirements, EM will apply the appropriate project management principles from this order using a “graded approach.” We reviewed the 2012 operations activities protocol in October 2012 and found that it contained less stringent requirements for operation activities than Order 413.3B for capital asset projects. We also found that EM did not have a clear classification policy that set out under what conditions EM should consider particular cleanup work to be an operations activity or a capital asset project. In the absence of such a policy, EM classified as operations activities certain cleanup work that DOE’s Office of Project Management considered to be capital asset projects. We recommended that EM provide DOE’s Office of Project Management with information on EM’s classification decisions. In 2012, DOE agreed with our recommendation, and EM officials stated in August 2018 that they are developing guidance. In July 2017, EM developed a cleanup policy that applies to both operations activities and capital asset projects. For managing capital asset projects, this policy supplements Order 413.3B. For managing operations activities, this policy supersedes the 2012 operations activities protocol. The 2017 cleanup policy states that EM will apply DOE’s project management principles described in Order 413.3B to its operations activities in a tailored way. At the time of our review, EM had developed 11 standard operating policies and procedures that are associated with the 2017 cleanup policy and that provide guidance on areas such as program performance reporting, assessing contractors’ performance against contract requirements, and what officials have approval authority at major steps in the contract process. However, according to EM officials, the standard operating policies and procedures are not requirements. Key EM Offices and DOE Oversight Bodies for EM’s Cleanup Work The EM program is executed by two main components: EM headquarters, which serves as the program manager for the EM program, and 16 cleanup sites, which serve as sub-programs. The following EM headquarters and site officials are key to managing and overseeing EM’s operations activities, according to the 2017 cleanup policy: The Assistant Secretary for Environmental Management serves as the head of EM and is responsible for the execution of EM’s mission. In December 2017, the Assistant Secretary for EM began reporting to the DOE Undersecretary of Science, who in turn reports to the DOE Deputy Secretary of Energy. The Assistant Secretary for Environmental Management, among other things, provides leadership and develops mission strategies, policy, and guidance for the EM cleanup program. 
The Principal Deputy Assistant Secretary for Environmental Management serves as the EM management official responsible for operations, including coordination, oversight, and leadership on scope, cost, and schedule elements. Under the 2017 cleanup policy, this official has approval authority for contracts equal to or greater than $200 million. This official is also responsible for conducting periodic contract reviews for contracts with a total estimated cost equal to or greater than $200 million. The Associate Principal Deputy Assistant Secretary for Field Operations provides leadership and develops mission strategies, policy, and guidance for site operations. This official is responsible for, among other things, meeting monthly with each site individually to discuss the status of cleanup work there. The EM Deputy Assistant Secretary for Acquisition and Project Management is responsible for providing independent oversight and reports to the Associate Principal Deputy Assistant Secretary for Corporate Services. Under the 2017 cleanup policy, this official is responsible for programmatic peer reviews that review cleanup activities at each site. This official is also responsible for the implementation of Order 413.3B and review of capital asset projects. At each of the 16 cleanup sites, the EM site manager is responsible and accountable for management and integration of all EM site-level activities. Under the 2017 cleanup policy, site managers have approval authority over contracts under $200 million. The site manager is also required to conduct periodic contract reviews for contracts with a total estimated cost of less than $200 million. Outside of EM, two DOE bodies play a role in the oversight of EM’s capital asset projects, but not of operations activities: DOE’s Office of Project Management has served as DOE’s enterprise project management organization since July 2015, when the Secretary of Energy gave it this responsibility as part of an initiative to improve DOE’s program and project management. As such, DOE states that this office—as an enterprise project management organization—is responsible for providing leadership and assistance in developing and implementing DOE-wide policies, procedures, programs, and management systems pertaining to project management, as well as for independently monitoring, assessing, and reporting on project execution performance. Officials from this office are experts in project management, especially as it relates to capital asset projects, and oversee the implementation of DOE’s Order 413.3B. This office also validates project performance baselines— scope, cost, and schedule—for the department’s capital asset projects, including EM’s. The Project Management Risk Committee reviews and provides advice on capital asset projects with a total project cost of $100 million or more. The Risk Committee’s purpose is to assess the risks associated with projects across DOE and advise DOE senior leaders on project management, including on cost, schedule, and technical issues. The committee includes nine senior DOE officials from across the department, including top project management officials from the National Nuclear Security Administration, the Office of Science, and EM. DOE’s EM Program Manages Most of Its Multibillion-Dollar Cleanup Work as Operations Activities, Posing Cost and Schedule Risks DOE’s EM program manages most of its cleanup work as operations activities, posing cost and schedule risks. 
These risks stem from EM’s management of such work using less stringent requirements than for capital asset projects even though EM spends billions of dollars annually on operations activities. Site managers have the discretion to classify cleanup work as operations activities, even if the work has characteristics of capital asset projects, because DOE and EM have not established requirements for classifying EM’s cleanup work. In addition, EM has not addressed concerns raised by DOE project management experts that some operations activities should be classified as capital asset projects. DOE’s EM Program Manages Most of Its Cleanup Work as Operations Activities, under Less Stringent Requirements Than Capital Asset Projects EM manages its cleanup work under different requirements, depending on whether it classifies the work as a capital asset project or an operations activity, with operations activities having less stringent requirements. EM currently manages most of its work as operations activities. EM’s work is divided into 77 operations activities and 20 capital asset projects. In the fiscal year 2019 budget, operations activities accounted for 77 percent of EM’s approximately $7.2 billion budget— about $5.5 billion—while capital asset projects accounted for 18 percent of EM’s budget—about $1.3 billion. Figure 3 illustrates how EM classified and funded its work during fiscal year 2019. For capital asset projects, EM manages the work in accordance with the requirements in DOE’s Order 413.3B, which is DOE’s project management order. This order contains numerous, detailed requirements that describe the steps and project management best practices to follow throughout the life of a project. The DOE Secretary strengthened this order in May 2016 by adding more stringent requirements, based in part on our prior recommendations. Examples of the requirements included in this order include:  A capital asset project with a total project cost over $50 million must undergo rigorous reviews outside the project’s management line. Different types of reviews are to be conducted by an independent body within the program for capital asset projects over $50 million, DOE’s Office of Project Management and the Project Management Risk Committee for capital asset projects over $100 million, and the Energy Systems Acquisition Advisory Board for capital asset projects over $750 million. Review and approval are to be received from the Under Secretary for capital asset projects over $100 million, and the Deputy Secretary for capital asset projects over $750 million. A capital asset project must complete its original scope of work within 110 percent of the original cost baseline to be considered successful. The program must conduct a root cause analysis to determine the underlying contributing causes of cost overruns, schedule delays, and performance shortcomings, if the program, the project manager or independent oversight offices realize a capital asset project can no longer meet its established scope, cost or schedule baseline. Contingency to cover potential risks that might appear during the life of a project must be included as part of the total project cost estimate included in the performance baseline. All cost and schedule estimates developed during the life of the project must follow GAO best practices. For operations activities, EM follows the requirements in its 2017 cleanup policy, which has fewer, less detailed, and less stringent requirements than Order 413.3B. 
For example, in contrast to the more stringent requirements in Order 413.3B, under EM’s 2017 cleanup policy: The highest level of review an operations activity must receive is by EM’s top management for contracts equal to or greater than $200 million. For an operations activity to be considered successful, it must be completed within 110 percent of the current cost and scope baseline—not the original baseline established at the beginning of cleanup work. There is no requirement to conduct a root cause analysis for operations activities. EM does not fund contingency for operations activities. Cost and schedule estimates made before EM authorizes execution of a contract are to follow GAO best practices, but the policy does not include a requirement to follow best practices for cost estimates developed during contract execution. Figure 4 below illustrates how operations activities are managed under less stringent requirements than capital asset projects. EM project management officials in charge of developing the 2017 cleanup policy stated that EM intentionally wrote this policy at a high level because EM planned to develop standard operating policies and procedures that would establish more detailed steps to implement the policy. As noted earlier, these standard operating policies and procedures provide guidance but are not requirements. DOE and EM Have Not Established Requirements for Classifying EM’s Cleanup Work or Addressed Concerns That Some Operations Activities Should Be Capital Asset Projects Neither DOE nor EM has a policy on how to classify cleanup work as either operations activities or capital asset projects. According to DOE Office of Project Management officials, DOE does not have a department- wide policy on how to classify cleanup work. Instead, these officials stated that DOE’s general management approach is to let its individual programs, such as EM, decide how to classify their work. EM officials explained that EM allows each site manager to determine independently how to classify cleanup work because according to EM’s 2017 cleanup policy, the site manager is responsible and accountable for the planning and execution of all site-level activities. DOE project management experts on the Project Management Risk Committee and in DOE’s Office of Project Management have raised concerns related to EM’s 2017 cleanup policy and the classification of cleanup work since 2015. These officials have stated that some current operations activities should be classified as capital asset projects. Specifically: In November 2015, EM approached DOE’s Project Management Risk Committee with a proposal for a new cleanup policy, which later became EM’s 2017 cleanup policy. In comments on the proposal, the committee’s members expressed concerns that the proposed policy did not address how EM would classify cleanup work, noting that if programs or sites get to decide on what is a capital asset project and what is not—which in turn drives the level of DOE oversight—then this approach was not an appropriate governance model. The committee’s members also questioned why EM chose not to use the already available requirements in Order 413.3B. EM did not respond to the committee’s concerns. Instead, according to the committee’s meeting minutes, the DOE Undersecretary for Management and Performance, who at the time oversaw EM, informed the committee in November 2015 that EM was proceeding with drafting its new cleanup policy. 
In late 2016, DOE’s Office of Project Management officials drafted an appendix to Order 413.3B that sought to define operations activities and capital asset projects. Under the classification proposal in the draft appendix, some of the work now classified as operations activities would have become capital asset projects and subject to more stringent requirements. For example, under the appendix, the cleanup of radioactive liquid waste tanks and solid waste exhumation and disposition would have been designated as capital asset projects. However, EM officials informed officials from the DOE Office of Project Management that EM would continue to develop its own policy, which it issued in July 2017. This 2017 cleanup policy did not reclassify any of the operations activities that, in the opinion of DOE’s Office of Project Management, should be capital asset projects. Officials from DOE’s Office of Project Management we interviewed said that continuing to classify and manage most of EM’s cleanup work as operations activities poses significant risks to DOE. According to these officials, managing the work this way poses cost and schedule risks for the following reasons, among others: Because the review of operations activities is conducted entirely within EM, DOE does not have information on how EM manages operations activities and cannot hold EM accountable for cost- effective and timely completion of this cleanup work, which represents a $5.5 billion investment by taxpayers in operations activities in fiscal year 2019 (see fig. 3). Operations activities are not required to go through a thorough upfront planning process to determine the scope of work to be completed. Therefore, these activities are more subject to scope creep, cost overruns, and schedule delays, which can detract from EM’s credibility with Congress and other stakeholders. Because EM does not set aside contingency funds to cover risks for its operations activities—a project management best practice and requirement under Order 413.3B—if risks are realized, EM must either reduce or delay scope to later years, which increases costs, causes schedule delays, and undermines EM’s ability to budget for activities across the EM program. Officials from DOE’s Office of Project Management stated that EM did not respond to their concerns that EM’s approach to classification of cleanup work poses unwarranted cost and schedule risks. Officials in EM told us they view the role of DOE’s Office of Project Management and the Project Management Risk Committee as limited to reviewing Order 413.3B requirements and overseeing capital asset projects. However, since July 2015, DOE’s Office of Project Management has served as DOE’s enterprise project management organization, with department-wide responsibilities for overseeing project management. As previously noted, DOE states that this office is responsible for, among other things, independently monitoring, assessing, and reporting on project execution performance. Therefore, review of classification of cleanup work that constitutes projects is within the scope of the office’s responsibilities. 
Until EM works together with DOE’s Office of Project management to (1) establish requirements for classifying cleanup work as capital asset projects or operations activities and (2) assess EM’s ongoing operations activities to determine if they should be reclassified as capital asset projects based on the newly established requirements, the department may incur more project management risk of cost increases and schedule delays than it should for hundreds of billions of dollars of remaining work. EM’s Cleanup Policy Does Not Follow Most Selected Program and Project Management Leading Practices EM’s 2017 cleanup policy, which governs the EM program and its operations activities, does not follow most selected leading practices for program and project management. More specifically, EM’s 2017 cleanup policy does not follow any of 9 selected program management leading practices related to scope, cost, schedule performance, and independent reviews. Further, EM’s 2017 cleanup policy follows 3 of 12 selected project management leading practices related to these areas; it does not follow the remaining 9. Figure 5 shows the percentage of selected program and project management leading practices that DOE’s Office of Environmental Management’s 2017 cleanup policy follows. EM’s Cleanup Policy Does Not Follow Any of Nine Selected Leading Program Management Practices EM’s 2017 cleanup policy does not follow (i.e., does not meet, minimally meets, or partially meets) the nine leading practices for program management related to scope, cost, schedule performance, and independent reviews that we selected based on PMI’s standards. More specifically, the policy partially met two of the leading practices, minimally met four others, and did not meet three, as discussed below: Having a program management plan and a roadmap that are updated regularly. (Minimally meets.) EM’s policy does not require an overarching program management plan or strategic plan that encompasses the work at all sites. The policy does require that each site maintain a life-cycle baseline based on the scope, cost, and schedule of work, which are components of a program management plan. However, the requirement is specific to each site and not the entire EM program. Having a reliable, integrated, comprehensive life-cycle cost estimate that is updated on a regular basis. (Partially meets.) EM’s policy requires an integrated life-cycle cost estimate for the entire EM program but does not state that the cost estimate must be reliable or updated on a regular basis. Having a reliable, integrated master schedule that is updated on a regular basis. (Does not meet.) EM’s policy does not require an integrated master schedule at the program level. Measuring performance against both a program’s life-cycle cost and integrated master schedule baselines. (Does not meet.) EM’s policy does not require that EM track and monitor all high-level program components against a program’s life-cycle cost and integrated master schedule baselines for the entire EM program. Completing performance reporting and analysis in a way that provides a clear picture of program performance. (Minimally meets.) EM’s policy requires performance reporting to the EM headquarters management level, but it does not require that performance information be analyzed to give a clear picture of program performance. Having a lessons learned database. (Partially meets.) 
EM’s policy requires that EM collect and disseminate lessons learned, but the policy does not specify a framework, such as a database, for how the lessons learned should be collected and shared. Conducting program risk management throughout the life of the program. (Does not meet.) EM’s policy does not require EM to conduct risk management throughout the life of the program. Monitoring and controlling the program, including conducting root cause analyses and developing corrective action plans. (Minimally meets.) EM’s policy does not have any requirements related to monitoring and controlling activities at a program level when there is evidence that the program’s cost or schedule baseline will not be met. It does require some monitoring and controlling activities at the site level. Having an independent oversight body that conducts periodic reviews of the progress of the program in delivering its expected benefits. (Minimally meets.) EM’s policy does not require any independent entity outside EM to review the performance of the EM program as a whole in delivering its expected benefits. The policy requires EM’s Office of Project Management to conduct a periodic Programmatic Peer Review of cleanup work at each site, but this review is not independent of EM. EM officials stated that even though EM’s policy does not follow these program management leading practices, EM officials may take some actions that address these leading practices. For example, to address the leading practice of having a lessons learned database, EM officials explained that EM’s Office of Project Management generates and distributes across EM a monthly lessons-learned bulletin on a topic of its choosing, and these lessons learned are uploaded on a site accessible to everyone within EM. They also explained that officials across EM could enter lessons learned in a DOE-wide lessons-learned database managed by DOE’s Office of Environment, Health, Safety, and Security. In addition, to address the leading practice of monitoring and controlling the program, including conducting root cause analyses and developing corrective action plans, the new Assistant Secretary for Environmental Management requested the development of a root cause analysis and a corrective action plan for the EM program in August 2018. To address the Assistant Secretary’s request, EM officials stated that in November 2018 they identified nine improvement areas for the EM program, for which they are developing corrective measures. However, when we reviewed the actions EM officials cited they took to address the selected leading practices, we found that they fell short of following leading practices. For example, the lessons learned listed in the bulletins we reviewed were related only to capital asset projects, and the database cited by EM officials is not used often by EM; it contains a total of six entries on EM-related issues from 2005 to 2017. In addition, EM officials stated they do not apply key practices that can be used to identify and apply lessons learned. Further, EM officials in charge of developing a root cause analysis and a corrective action plan stated that EM does not have a process for doing so and that EM has not prepared such an analysis or plan since 2011. They also stated that EM does not intend to publish this document and that EM will not develop a root cause analysis to show the problems these corrective measures are supposed to address. 
The selected leading practices help ensure that a program achieves its goals and intended benefits and that it optimizes scope, cost, and schedule performance, and independent review of performance. Without documenting such leading practices in policy, EM officials may not be aware of expectations to carry them out and may not do so consistently. Under federal standards for internal control, management should design control activities to achieve objectives and respond to risks, such as by clearly documenting internal control in management directives, administrative policies, or operating manuals. Furthermore, these standards state that management periodically reviews policies, procedures, and related control activities for continued relevance and effectiveness in achieving the entity’s objectives or addressing related risks. Until EM reviews and revises its cleanup policy to include program management leading practices related to scope, cost, schedule performance, and independent review, the EM program is at risk of continued uncontrolled changes to the program’s scope, exceeding its cost estimate and schedule, failing to meet its programmatic goals, and increasing DOE’s environmental liabilities. EM’s Cleanup Policy Does Not Follow Most Selected Project Management Leading Practices EM’s 2017 cleanup policy, which applies to operations activities, follows (i.e., substantially or fully meets) 3 and does not follow (i.e., does not meet, minimally meets, or partially meets) 9 of the 12 leading practices for project management related to scope, cost, schedule performance, and independent reviews that we selected based on PMI’s standards. Specifically, the policy follows these three selected leading practices: Establishing a performance baseline and tracking it from the beginning to the end of the project. (Substantially meets.) EM’s policy requires that a contractor must establish a cost baseline and complete key performance measures within 110 percent of the approved, current cost baseline. The policy also requires that managers in charge of the work be responsible for successfully executing work within the approved performance baseline. Conducting monitoring and controlling activities to measure performance at regular intervals. (Fully meets.) EM’s policy requires periodic project reviews from various levels, from the federal cleanup director in charge of the operations activity and site manager, all the way to EM senior leadership. Using an EVM system that is independently certified and continuously monitored to assess project performance. (Substantially meets.) EM’s policy requires the implementation at the contract level of a work control system, either an EVM system or an approved alternative. EM guidance suggests that the EVM system be surveilled regularly, although EM does not require the EVM system to be independently certified. The policy did not follow the other 9 selected project management leading practices; specifically, it partially met 5, while the remaining 4 were minimally or not met, as explained below: Establishing a project execution plan with policies and procedures to manage and control project planning. (Does not meet.) EM’s policy does not require a plan to establish policies and procedures to manage and control project planning. Clearly and completely defining the scope of a project so that its performance can be measured. (Partially meets.) EM’s policy requires that the scope be defined for a segment—typically a 5- to 10- year contract—at the beginning of the work. 
However, EM’s policy also states that the segment’s scope may be reduced to free up funding to cover risks. When risks occur and the scope is reduced, the segment’s performance may not be accurately and fully measured. Developing a cost estimate using GAO best practices. (Partially meets.) EM’s policy requires that EM follow our best practices for cost estimating prior to starting the execution of a segment. However, once the contractor begins executing the segment, the policy does not require EM to follow our best practices, even when independent cost estimates are developed during a baseline change process. Developing and maintaining an integrated master schedule using GAO best practices. (Minimally meets.) EM’s policy requires that the contract specify the schedule for the segment, which could be an input to an overall integrated master schedule for that segment. The policy does not require that an integrated master schedule be developed and maintained in accordance with GAO best practices. Conducting risk assessments throughout the life cycle of the project; prioritizing risks in a risk register; developing risk mitigation strategies; and determining the appropriate amount of contingency. (Minimally meets.) EM’s policy does not require a risk management plan for projects. In addition, the policy states that EM will not fund contingency to cover risks that may occur for operations activities. Capturing lessons learned throughout the continuum of a project in a database and disseminating them among projects. (Partially meets.) EM’s policy requires the EM Deputy Assistant Secretary for Acquisition and Project Management to collect and disseminate lessons learned, but the policy does not specify that this process should be done throughout the continuum of a project or that lessons learned should be disseminated among operations activities. Developing a root cause analysis and corrective action plan to identify and address the underlying causes of cost overruns, schedule delays, and performance shortcomings when a cost or schedule overrun occurs. (Does not meet.) The policy does not contain any information on the steps that EM will take, such as developing a root cause analysis and corrective action plan, once management becomes aware that a cost or schedule overrun is probable for an operations activity. Conducting a variety of independent reviews throughout the life of a project, including at key decision points, and on multiple aspects of the project, such as the mission need, cost, earned- value management system, and baseline review. (Partially meets.) EM’s policy requires reviews of segments conducted or organized by EM’s Office of Project Management. However, there are no requirements for any independent reviews conducted by DOE offices or other entities outside EM. Establishing project-reporting systems/databases to provide a clear picture of project performance to management and to keep the contractor accountable. (Partially meets.) EM’s policy established a requirement that performance information be reported in the Integrated Planning, Accountability, and Budgeting System database for each operations activity. However, EM’s policy does not address how this performance information will provide a clear picture of performance and how it will be used to keep the contractor accountable. Our findings on the inclusion of project management leading practices in EM’s 2017 cleanup policy are consistent with concerns raised by DOE’s Project Management Risk Committee. 
According to meeting minutes from December 2015, the committee expressed concerns that EM’s proposed cleanup policy (adopted in July 2017) appeared to run counter to the Secretary’s initiative to apply best practices to oversight of project management. In committee meeting minutes from November 2015, the committee expressed concern with the level of rigor that would be applied to independent cost analysis, project reviews, general oversight, and risk mitigation under the new cleanup policy. According to PMI, effective project management is key to implementing an organization’s strategy, and has a dramatic impact on the bottom line; organizations that invest in proven project management practices—such as these selected leading practices—continue to experience greater success than their underperforming counterparts. In addition, under federal standards for internal control, management periodically reviews policies, procedures, and related control activities for continued relevance and effectiveness in achieving the entity’s objectives or addressing related risks. Until EM reviews and revises its policy to include project management leading practices related to scope, cost, and schedule performance, and independent reviews, EM’s operations activities are at risk of scope creep or uncontrolled changes to scope, exceeding their initial budget and schedule, and failing to meet their goals. EM’s Performance Measures for Operations Activities Do Not Provide a Clear Picture of Overall Performance EM uses three tools to measure the overall performance of operations activities, but these tools do not provide a clear picture of overall performance. These tools are earned value management, performance metrics, and milestones, according to EM documentation and officials. However, EM has not followed best practices for its contractors’ EVM systems; EM’s performance metrics do not link performance to cost; and EM postpones milestones when they are at risk of missing them and does not consistently track or report those milestone changes over time. Figure 6 summarizes our findings on these three performance measures and how they affect EM’s ability to effectively manage the cleanup effort. EM Relies on Three Tools to Measure Performance of Its Operations Activities To measure the overall performance of its operations activities, EM relies primarily on EVM data, supplemented by program-wide performance metrics and cleanup milestones, according to EM documentation and officials. EVM is a management tool used to measure the value of work accomplished in a given period and compare it with the planned value of work scheduled for the same period and with the actual cost of the work accomplished. EVM data can alert project managers to potential problems sooner than expenditures alone can. The use of EVM as a management tool is considered a best practice for conducting cost and schedule performance analysis for projects. EM’s 2017 cleanup policy requires that contractors use an EVM system or an approved alternative for monitoring and controlling work at the contract level. We reviewed all 20 EM contracts covering operations activities and found that EM requires its contractors to maintain EVM systems for 17 of all 20 contracts. EM paid contractors for maintaining these systems and providing EVM reports to EM. 
For example, EM has paid one contractor $1 million annually to maintain its EVM system, and EM has paid contractors anywhere from $10,000 to $235,000 annually to receive their EVM reports, according to EM responses to our information request. EVM by itself may not be sufficient to measure the progress of operations activities, according to EM’s 2012 operations activities protocol. The second tool EM uses to measure performance is performance metrics. EM developed 17 program-wide performance metrics for its cleanup work. The goal of these metrics is to measure progress toward completing the scope of work for the contract and the entire life of an operations activity. EM headquarters collects information from the sites monthly to measure how each activity has performed against a goal set at the beginning of each year. Examples of EM’s performance metrics include the number of cleanup sites being eliminated, the cubic meters of transuranic waste being disposed of, the number of containers of high-level waste packaged for final disposition, and the number of closed radioactive liquid waste tanks. The EM cleanup sites set targets for these metrics annually. According to EM officials, many operations activities have one or more of these performance metrics associated with them, but some do not. Appendix II contains the full list of EM’s performance metrics. The third tool EM uses to measure performance are cleanup milestones. Cleanup milestones represent deadlines for various cleanup-related activities derived from agreements DOE enters into with its regulators, including the Environmental Protection Agency and states. There are many different types of milestones, including enforceable and planning milestones. Generally, an enforceable milestone has a fixed, mandatory due date that is subject to the availability of appropriated funds while a planning milestone is not enforceable and usually represents a placeholder for shorter term work. EM collects program-wide performance information from the three performance measures tools in a centralized database known as the Integrated Planning, Accountability, and Budgeting System. These performance data are used by EM to manage its program and to provide information to DOE management, Congress, and other stakeholders. According to DOE’s Office of Inspector General and EM officials, this database was developed as a program management tool to provide information to EM headquarters officials, to ensure effective overall program performance; DOE’s Chief Financial Officer, for inclusion in DOE-wide reports; Congress and taxpayers, to identify the remaining environmental cleanup liability and to provide transparency regarding contractor performance; and stakeholders, to make sure the work reported is accurate, timely, complete, and in accordance with agreements. EM Has Not Ensured That EVM Systems Are Comprehensive, Provide Reliable Data, or Are Used by Leadership for Decision-Making EM relies on contractors’ EVM systems to measure the performance of its contractors’ operations activities, but EM has not followed (i.e., has not met, has minimally met, or has partially met) best practices to ensure that these systems are (1) comprehensive, (2) provide reliable data, and (3) are used by EM leadership for decision-making—which are the three characteristics of a reliable EVM system. Moreover, EM has allowed the contractors to categorize a large portion of their work in a way that limits the usefulness of the EVM data. 
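The earned value indicators discussed in the findings below can be made concrete with a minimal sketch. The example is illustrative only and does not use EM contract data; the dollar figures and the formula chosen for the estimate at completion are assumptions made for the example.

```python
# Illustrative earned value management (EVM) calculation with hypothetical numbers
# (not EM contract data) for a single reporting period.

budget_at_completion = 500.0   # BAC: total budgeted cost for the contract, in $ millions
planned_value = 200.0          # PV: budgeted cost of work scheduled to date
earned_value = 170.0           # EV: budgeted cost of work actually performed to date
actual_cost = 210.0            # AC: actual cost of the work performed to date

cost_variance = earned_value - actual_cost        # negative value indicates a cost overrun
schedule_variance = earned_value - planned_value  # negative value indicates work behind schedule
cost_performance_index = earned_value / actual_cost
schedule_performance_index = earned_value / planned_value

# One common estimate at completion (EAC) assumes remaining work is performed at the
# cost efficiency achieved so far; a realistic EAC should never be less than actual cost to date.
estimate_at_completion = budget_at_completion / cost_performance_index

print(f"CV = {cost_variance:+.1f}  SV = {schedule_variance:+.1f}")
print(f"CPI = {cost_performance_index:.2f}  SPI = {schedule_performance_index:.2f}")
print(f"EAC = {estimate_at_completion:.1f} vs. BAC = {budget_at_completion:.1f}")
```

Because earned value for level-of-effort work is credited with the passage of time, such work never generates a schedule variance, which is one reason a high share of level-of-effort work limits what these indicators can show.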
EM Has Not Followed Best Practices for Its Contractors’ EVM Systems Our analysis of EM contractors’ EVM systems for operations activities found that EM has not followed (i.e., has not met, has minimally met, or has partially met) best practices, as discussed below. As a result, EM has not ensured that these systems are: (1) comprehensive, (2) provide reliable data, and (3) used by EM leadership for decision-making—which are the three characteristics of a reliable EVM system. (See app. III for more specific information on EM’s performance on each best practice considered and app. IV for information on how each contract followed each best practice.) Comprehensive: Best practices to ensure EVM systems are comprehensive are: (1) requiring the contractor’s EVM systems be certified to meet guidelines established by the Earned Value Management Systems EIA-748-D Intent Guide; (2) conducting an integrated baseline review to ensure that all work is accurately captured in the performance measurement baseline; and (3) performing regular surveillance to ensure the contractors continue to maintain their EVM systems in a way to meet the EIA-748-D guidelines. We found that 17 out of 20 contractors’ EVM systems were certified to be compliant with the EIA-748-D guidelines, but of these 17, 4 contractors had self-certified their EVM systems. However, only about half of the EVM systems met the best practices for conducting integrated baseline reviews and performing ongoing surveillance. Among those, many of the reviews were not rigorous enough to ensure that the performance measurement baseline captured all of the work. In November 2017, EM issued a standard operating policy and procedure, which suggests that EVM systems be surveilled regularly. However, we discovered that EM officials were not performing thorough surveillance reviews to ensure that EVM systems were in alignment with the EIA-748-D guidelines and that the data being reported by the EVM systems were reliable. Provide reliable data: Best practices to ensure that the contractors’ EVM systems provide reliable data are (1) the EVM data do not contain any anomalies and (2) estimates at completion— the expected total cost of completing all work based on the contractor’s performance to date—are realistic. The EVM data for contracts covering operations activities contained numerous, unexplained anomalies in all the months we reviewed, including missing or negative values for some of the completed work to date. Negative values should occur rarely, if ever, in EVM reporting because they imply the undoing of previously scheduled or performed work. In addition, we found problems with the estimate at completion listed in all 20 contractors’ EVM systems. More specifically, we found (1) many instances where the actual costs exceeded the estimates at completion even though there was still a lot of work remaining; (2) several occasions where the estimates at completion were less than half of the original budget at the beginning of the project; and (3) several contractors reported estimates at completion of zero dollars when their original budgets were for hundreds of millions of dollars. These problems indicated that the EVM systems were not being updated in a timely manner or were not well monitored since the estimate at completion values were too optimistic and highly unlikely. Used by EM leadership for decision-making. 
Best practices to ensure that the data from the contractors’ EVM systems are used by EM leadership for decision-making are: (1) reviewing EVM data, including cost and schedule variances, on a regular basis; (2) ensuring that EM management use EVM data to develop corrective action plans; and (3) ensuring that the performance measurement baseline is updated to reflect changes. We reviewed monthly reports EM sites present to EM headquarters management for review. We found that none of the sites adequately reported EVM variances to EM headquarters management; they were all missing some EVM information such as trend data or the estimate at completion. In addition, many of the sites’ monthly reports did not include corrective action plans for addressing variances, if any, between planned and actual performance. We also reviewed monthly reports that the EM Office of Project Management started to present to EM headquarters senior leadership in October 2017, and found that these reports included most of the EVM indicators for all 15 contracts on which EM Office of Project Management reported. However, EM Office of Project Management officials stated that they have only started suggesting corrective action to EM headquarters senior leadership since early 2018; it is too soon to tell how EM headquarters senior leadership is using this information to determine which contracts need the most attention and which corrective actions management will develop and take. Moreover, this monthly report uses unreliable EVM data, as we found in the prior characteristic. Finally, regarding the third best practice, EM provided evidence that 17 out of 20 contractors had a formal process in place for updating the budget baseline. However, the extent to which contractors followed their processes was questionable given the problems we found with the estimates at completion, as discussed in the prior characteristic. Even though EM requires most of its contractors for operations activities to maintain EVM systems and pays them for doing so, EM’s 2017 policy generally does not require that EVM systems be maintained and used in a way that follow EVM best practices. Until EM updates its cleanup policy to require that EVM systems be maintained and used in a way that follow EVM best practices, EM leadership may not have access to reliable performance data to make informed decisions in managing its cleanup work and to provide to Congress and other stakeholders on billions of dollars’ worth of cleanup work every year. Much of the Cleanup Work Is Categorized in a Way That Limits the Usefulness of the EVM Data Compounding the limitations with the EVM systems currently in place, EM has categorized a large portion of its work in a way that limits the usefulness of the EVM data. Specifically, a sizable amount of the work is categorized as level of effort for all 14 contracts for which we could identify the percentage of the level-of-effort work (in dollars). Work that is categorized as level of effort does not have defined deliverables or physical products. Progress for level-of-effort work is measured by the passage of time, but is not measured against a scheduled amount, so no schedule variance occurs. The effectiveness of EVM systems, which are designed to measure performance against cost and schedule targets, will be limited if there is a high amount of level-of-effort work, according to our best practices. 
Thus, according to best practices, categorizing work as level of effort should be minimized to the extent possible if EVM is being used to measure performance, and contracts with level-of-effort work over 15 percent should be subject to additional scrutiny. As shown in figure 7 below, the range for EM's contracts on operations activities is between 36 and 83 percent. (We used letters for each contract rather than identifying the site or contractor.) According to EM officials, at least half of the level-of-effort work conducted under the cleanup contracts consists of recurring activities necessary to maintain the sites, which EM refers to as "minimum safety" work. According to EM officials, examples of such work include physical security, health and radiation protection and services, or critical facility and infrastructure maintenance for safe conditions. These officials said that minimum safety work makes up 30 to 60 percent of individual sites' budgets, for a total of at least $2.7 billion, or 42 percent, of EM's $6.4 billion fiscal year 2018 budget. The Assistant Secretary for EM noted in September 2018 that much of DOE's environmental cost liability has to do with the management of the minimum safety work. The Assistant Secretary also noted that significant potential cost savings could result from reducing minimum safety work and planned to start an initiative in fiscal year 2019 to examine how EM can reduce this work. EM officials agreed that some of the contractors' work currently categorized as level of effort could in fact be measured discretely. According to an ANSI guideline, only work not measurable or for which measurement is impractical may be categorized as level of effort. EM officials we interviewed stated that EM relies on its contractors to categorize work as discrete or as level of effort, and EM approves these decisions during the integrated baseline review. According to EM officials, there is no EM policy or guidance on what circumstances justify categorizing work as level of effort. Federal standards for internal controls state that management should design control activities to achieve objectives and respond to risks, such as by clearly documenting internal control in management directives, administrative policies, or operating manuals. Until EM develops a policy that ensures that work is categorized as level of effort only in appropriate, specified circumstances, such as when work is not measurable or when measurement is impractical, it may not have reliable performance data to help it achieve its objective of reducing risks and costs associated with billions of dollars' worth of cleanup work every year.
Performance Metrics and Milestones for EM Cleanup Work Do Not Provide a Clear Picture of Performance
We found that EM's 17 performance metrics for its cleanup work measure the scope of work accomplished in a specific year but do not link that work to the cost of completing it. For example, EM reported in the Integrated Planning, Accountability, and Budgeting System database eliminating 72,000 gallons of radioactive liquid waste out of a target of 342,000 gallons for fiscal year 2017 at the Savannah River Site, and disposing of 1,734 cubic meters of low-level waste out of a target of 360 cubic meters at the Idaho site. However, in neither case did EM indicate how much that work cost to accomplish.
According to officials from DOE's Office of Project Management, the scope of work accomplished is not a good indicator of performance by itself because it does not allow the project manager to know whether EM received good value from the contractor. In contrast, EVM systems allow managers to measure the value of work accomplished in a given period. As discussed above, EM collects EVM data, but EM's performance metrics do not link to the EVM data. According to federal standards for internal control, management should use quality information to achieve an entity's objectives and the quality information must be complete, among other things. In EM's case, its objective, as stated in its mission, includes completing its cleanup work in a way that reduces associated risks and costs. By integrating reliable EVM data into EM's performance metrics for operations activities, EM could provide a clearer picture of performance and better indicate whether EM is achieving its objective of reducing risks and costs. With regard to cleanup milestones, we found in February 2019 that EM has hundreds of milestones, but the exact number cannot be determined because of inconsistencies in tracking and defining milestones between sites and EM headquarters, and sites have the discretion to send updated milestone data to EM headquarters when they choose. As a result, some sites track milestones differently than EM headquarters does. We also found that EM does not consistently define or track met, missed, or postponed cleanup-related milestones at selected sites, and EM's milestone reporting to Congress is inconsistent. EM sites renegotiate milestone dates with their regulators before they are missed, and EM does not track the history of these changes. This is because once milestones are changed, sites are not required to maintain or track the original milestone dates. As a result, the new milestones become the new agreed-upon time frame, essentially resetting the deadline. Further, in its report to Congress on enforceable milestones' status, EM reports the most recently renegotiated milestone dates with no indication of whether or how often those milestones have been missed or postponed. Thus, the EM program is unable to use milestone data to provide a clear, reliable picture of its performance. Furthermore, EM officials at headquarters and selected sites said they had not conducted root cause analyses on missed or postponed milestones. Thus, EM cannot address systemic problems and consider them when renegotiating milestones with regulators. In addition, without such analysis, EM and its cleanup regulators lack information to set more realistic and achievable milestones. As a result, future milestones are likely to continue to be pushed back, further delaying the cleanup work and likely increasing cleanup costs. In this same report, we recommended, among other things, that EM should establish a standard definition of milestones across the cleanup sites, track changes to the milestones, report annually to Congress on the status of its milestones, and conduct root cause analyses of performance shortcomings that lead to missed or postponed milestones.
Conclusions
DOE's EM program has the challenging mission of safely cleaning up radioactive waste, spent nuclear fuel, and environmental contamination from 50 years of federal nuclear weapons production and energy research, while working to reduce associated risks and costs within the established regulatory framework.
Since its mission began in 1989, EM has spent more than $164 billion on its cleanup work, and it faces future cleanup costs of more than $377 billion—the federal government's single largest environmental liability. To improve management of projects undertaken within the department, including EM, DOE established its Office of Project Management and strengthened project management requirements in Order 413.3B for managing capital asset projects. However, since 2009, when EM created a new category of cleanup work called operations activities, EM has opted not to apply DOE's project management requirements to almost 80 percent of its cleanup work. From fiscal years 2011 to 2018, EM's environmental liability increased by about $214 billion. DOE's Office of Project Management officials have raised concerns about how EM classifies this work. Until EM works together with DOE's Office of Project Management (1) to establish requirements for classifying cleanup work as capital asset projects or operations activities and (2) to assess EM's ongoing operations activities to determine if they should be reclassified as capital asset projects based on the newly established requirements, the department may incur more project management risk of cost increases and schedule delays than it should for hundreds of billions of dollars of remaining work. In July 2017, EM released a new cleanup policy containing requirements for managing its program and its operations activities, but this policy does not follow most of the selected program and project management leading practices we identified related to management of scope, cost, and schedule performance, and independent review of performance. Until EM reviews and revises its cleanup policy to include program and project management leading practices related to scope, cost, schedule performance, and independent reviews, the EM program is at risk of uncontrolled changes to scope, exceeding its cost estimates and schedule, failing to meet its goals, and increasing DOE's environmental liabilities. The new Assistant Secretary for the Office of Environmental Management has acknowledged the importance of improving EM's performance in addressing the department's large and growing environmental liabilities. However, the three tools that EM uses to measure its overall program performance and contractors' performance on operations activities—earned value management, performance metrics, and cleanup milestones—do not provide a clear, reliable picture of performance for EM leadership, Congress, and other stakeholders. In particular, EM's EVM systems for operations activities are not comprehensive, do not provide reliable data, and are not used by EM leadership to measure overall performance of the EM program. Furthermore, a large portion of the work performed by contractors is categorized as level of effort, limiting the usefulness of the EVM data. In addition, EM's performance metrics are not linked to the costs of the work performed. Until EM updates its cleanup policy to require that EVM systems be maintained and used in a way that follows EVM best practices, EM leadership may not have access to reliable performance data to make informed decisions in managing its cleanup work and to provide to Congress and other stakeholders on billions of dollars' worth of cleanup work every year.
Moreover, until EM develops a policy that ensures that work is categorized as level of effort only in appropriate, specified circumstances, such as when work is not measurable or when measurement is impractical, it may not have reliable performance data to help it achieve its objective of reducing risks and costs associated with billions of dollars' worth of cleanup work every year. Finally, by integrating reliable EVM data into EM's performance metrics for operations activities, EM could provide a clearer picture of performance and better indicate whether EM is achieving its objective of reducing risks and costs.
Recommendations for Executive Action
We are making the following seven recommendations to DOE:
The Secretary of Energy should direct the Director of the Office of Project Management and the Assistant Secretary of the Office of Environmental Management to work together to establish requirements for classifying cleanup work as capital asset projects or operations activities. (Recommendation 1)
The Secretary of Energy should direct the Director of the Office of Project Management and the Assistant Secretary of the Office of Environmental Management to work together to assess EM's ongoing operations activities to determine if they should be reclassified as capital asset projects based on the newly established requirements. (Recommendation 2)
The Secretary of Energy should direct the Assistant Secretary of the Office of Environmental Management to review and revise EM's 2017 cleanup policy to include program management leading practices related to scope, cost, schedule performance, and independent reviews. (Recommendation 3)
The Secretary of Energy should direct the Assistant Secretary of the Office of Environmental Management to review and revise EM's 2017 cleanup policy to include project management leading practices related to scope, cost, schedule performance, and independent reviews. (Recommendation 4)
The Secretary of Energy should direct the Assistant Secretary of the Office of Environmental Management to update its cleanup policy to require that EVM systems be maintained and used in a way that follows EVM best practices. (Recommendation 5)
The Secretary of Energy should direct the Assistant Secretary of the Office of Environmental Management to develop a policy to ensure that work is categorized as level of effort only in appropriate, specified circumstances, such as when work is not measurable or when measurement is impractical. (Recommendation 6)
The Secretary of Energy should direct the Assistant Secretary of the Office of Environmental Management to integrate EVM data into EM's performance metrics for operations activities. (Recommendation 7)
Agency Comments and Our Evaluation
We provided DOE with a draft of this report for its review and comment. In its written comments, reproduced in appendix V, DOE generally agreed with the findings in the report and its recommendations and described actions that it intends to take in response to our recommendations. More specifically, of the seven recommendations, DOE concurred with four and partially concurred with three.
DOE partially concurred with our recommendations that the Director of the Office of Project Management and the Assistant Secretary for the Office of Environmental Management (EM) work together to (1) establish requirements for classifying cleanup work as capital asset projects or operations activities, and (2) assess EM's ongoing operations activities to determine if they should be reclassified as capital asset projects based on the newly established requirements. DOE stated that the department commits (1) to reviewing its methodology for categorizing work and revising it, as appropriate, as well as (2) to determining the appropriate application of any revisions to the work classification methodology to new and existing work. DOE also stated that the Assistant Secretary for EM is ultimately responsible for the proper classification of work and will consult with the Office of Project Management. We appreciate DOE's commitment to addressing these two recommendations. As we stated in our report, in July 2015, the Secretary of Energy gave DOE's Office of Project Management responsibility to serve as DOE's enterprise project management organization. As such, DOE states that this office is responsible for providing leadership and assistance in developing and implementing DOE-wide policies, procedures, programs, and management systems pertaining to project management, as well as for independently monitoring, assessing, and reporting on project execution performance. Officials from this office are experts in project management, especially as it relates to capital asset projects. Given (1) the high risk posed by EM's cleanup work and the high environmental liability, which may continue to grow; (2) the difference in the stringency of requirements between managing and overseeing operations activities and capital asset projects; and (3) the concerns raised by DOE's top project management experts that some current operations activities should be classified as capital asset projects, we encourage the Secretary to direct EM not only to consult with DOE's Office of Project Management but also to take advantage of the office's role and expertise and work with this office to come to an agreement about proper classification requirements and the classification of current and future cleanup work. It is in DOE's interest to ensure its cleanup work is classified and managed appropriately, regardless of which office is ultimately responsible for the proper classification of work. DOE concurred with our recommendations to review and revise EM's 2017 cleanup policy to include program and project management leading practices related to scope, cost, schedule performance, and independent reviews and to require that EVM systems be maintained and used in a way that follows EVM best practices. DOE also concurred with our recommendation to develop a policy to ensure that work is categorized as level of effort only in appropriate, specified circumstances, such as when work is not measurable or when measurement is impractical. DOE also partially concurred with our recommendation to integrate EVM data into EM's performance metrics for operations activities. For all these recommendations, DOE stated that EM is already in the process of reviewing the EM cleanup policy for necessary updates, revisions, and modifications.
DOE further stated that EM will consider and incorporate changes relative to these recommendations, as appropriate, during this process, and EM will also consider any necessary changes to related guidance or policies and procedures. DOE also provided technical comments, which we incorporated in our report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 14 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Energy, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix VI.
Appendix I: Objectives, Scope, and Methodology
Our report examined: (1) how the EM program manages its cleanup work, (2) the extent to which EM's cleanup policy follows selected program and project management leading practices, and (3) how EM measures the overall performance of its operations activities. To examine how the EM program manages its cleanup work, we reviewed various DOE documents, including DOE's Order 413.3B, EM's 2012 operations activities protocol, EM's 2017 cleanup policy, standard operating policies and procedures associated with this cleanup policy, EM's mission and functions document, EM's draft 45-day review documentation, meeting minutes from the Project Management Risk Committee, a draft appendix to Order 413.3B developed by DOE's Office of Project Management, and documents received from cleanup sites. We also interviewed DOE officials from the Office of Project Management and members of the Project Management Risk Committee, and EM officials from headquarters, such as the Associate Principal Deputy Assistant Secretary for Field Operations, the Deputy Assistant Secretary for Acquisition and Project Management, officials from EM's Office of Project Management, Office of Budget and Planning, and Office of Program Planning, officials in charge of managing the Integrated Planning, Accountability, and Budgeting System database that collects monthly performance information from the sites, and officials from 5 of EM's 16 cleanup sites. (We contacted all sites and interviewed by phone the 5 sites that responded to our request for an interview.) We then decided to conduct site visits. We visited two of these sites—Savannah River and Idaho—because they are among the sites with the highest number of operations activities and the most diverse types of and highest-cost cleanup work remaining. Our findings from these 5 sites are not generalizable to all EM sites, but they help explain the delineation of roles between the site managers and EM headquarters in managing and classifying cleanup work. We also attended an EM internal training session in which EM headquarters officials introduced the 2017 cleanup policy to officials at the Hanford site and attended EM cleanup public conferences. Moreover, we reviewed the role of DOE's Office of Project Management in EM's cleanup work.
More specifically, we examined whether this office played a role in the development of EM's 2017 cleanup policy and classification of EM's cleanup work, consistent with its designation as DOE's enterprise project management organization. To assess the reliability of EM's fiscal year 2019 budget data, we requested information about EM's Financial Integration System module of the Integrated Planning, Accountability, and Budgeting System database, from which these data were provided. Based on the responses from officials in charge of this database, we determined the data to be sufficiently reliable for our purposes. To examine the extent to which EM's cleanup policy follows selected program and project management leading practices, we selected two sets of criteria using leading practices from the Project Management Institute (PMI), which are generally recognized as the top leading practices for program and project management. To select program management leading practices, we first reviewed the Project Management Institute's The Standard for Program Management—Third Edition (2013). We identified 9 program management leading practices based on PMI's standards related to a program's management of scope, cost, schedule performance, and independent review of performance. To select project management leading practices, we first identified 12 project management leading practices listed in DOE's Order 413.3B related to a project's management of scope, cost, schedule performance, and independent review of performance. We then compared these 12 project management leading practices to PMI's A Guide to the Project Management Body of Knowledge—Fifth Edition, which includes PMI's standards for project management, to make sure these leading practices align with PMI's standards. To select these leading practices, (1) two GAO analysts separately examined the PMI and DOE documentation, and then (2) a GAO specialist independent of the team producing this report reviewed the leading practices we selected. All three GAO staff agreed on these selected leading practices. To validate our selection of program and project management leading practices, we shared these selected leading practices with PMI representatives and incorporated their feedback, as appropriate. PMI representatives agreed with the program and project management leading practices that we selected. We then compared EM's 2017 cleanup policy and the 11 associated standard operating policies and procedures developed by EM by the time of our analysis (by May 2018) with the 9 program management and 12 project management leading practices we selected. We included these standard operating policies and procedures in our analysis because EM officials stated that EM intentionally wrote this policy at a high level because EM planned to develop standard operating policies and procedures that would establish more detailed steps to implement the policy. We analyzed the extent to which the policy and the 11 standard operating policies and procedures follow these leading practices. We also interviewed EM headquarters and site officials to learn more about the 2017 cleanup policy. We used a 5-point scoring system to determine the extent to which EM's cleanup policy follows selected program and project management leading practices.
We used the following 5-point scoring system: “fully met” means that complete evidence was provided that satisfied the leading practice; “substantially met” means that evidence was provided that satisfied a large portion of the leading practice; “partially met” means that evidence was provided that satisfied about half of the leading practice; “minimally met” means that evidence was provided that satisfied a small portion of the leading practice; and “did not meet” means that no evidence was provided that satisfied the leading practice. If the score for each leading practice was “fully met” or “substantially met,” we concluded that EM’s cleanup policy and its associated standard operating policies and procedures followed the leading practice. In contrast, if the score was “partially met,” “minimally met,” or “not met,” we concluded that EM’s policy did not follow the leading practice. To determine this score, two GAO analysts separately examined EM’s policy document and then agreed on a final score for each of the leading practices. To examine how EM measures the performance of its operations activities, we analyzed EM’s use of the three measures of performance that EM policy identified: earned value management (EVM); performance metrics; and cleanup milestones. To evaluate EM’s EVM systems, we compared EM’s use of EVM with 8 of the 10 best practices for earned value management found in our Cost Estimating and Assessment Guide, which draws best practices from federal cost-estimating organizations and industry. Specifically, we reviewed the use of EVM systems in the 21 contracts EM uses to execute its operations activities and compared this review’s results with EVM best practices. To gather this information, we submitted a data collection instrument to all 16 sites to ascertain whether or not they follow these best practices for each contract containing operations activities. We also requested documentation, such as EVM system certification information or surveillance reports, supporting their answers. We relied mainly on the sites’ responses but, when available, also reviewed the documentation we received to check the sites’ answers for accuracy and completeness. To determine whether information on EVM is reported to EM senior leadership, we also reviewed (1) monthly progress reports EM sites presented to EM headquarters management that ranged from April 2017 to April 2018 depending on the site and (2) monthly reports that EM Office of Project Management presents to EM headquarters senior leadership; specifically the April 2018 Cleanup Program Monthly Performance and the EM Segment Activity Portfolio Summary, or “Quad Chart,” reports, which were the most recent reports available at the time of this analysis. In addition, as part of our analysis, we analyzed EM headquarters’ EVM data on operations activities from October 2016 through September 2017 (the most recent data available at the time of our review) to determine whether or not the EVM data were reliable. We checked for data anomalies, such as missing or negative values for each of those months. We also reviewed DOE and EM documents—such as monthly progress reports submitted by the 16 sites to EM headquarters for review or the monthly reviews prepared by an EM headquarters office for senior management—to see what EVM data senior management used for decision-making. 
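To make the nature of these checks concrete, the sketch below applies the kinds of tests described above to a few hypothetical monthly records; the field names and values are illustrative assumptions, not EM's actual reporting schema or data.

```python
# Hypothetical monthly EVM records (dollars in millions); the field names and
# values are illustrative assumptions, not EM's actual reporting schema.
records = [
    {"contract": "A", "month": "2016-10", "bcwp": 12.0, "acwp": 14.0, "bac": 300.0, "eac": 310.0},
    {"contract": "B", "month": "2016-10", "bcwp": -3.5, "acwp": 9.0,  "bac": 250.0, "eac": 0.0},
    {"contract": "C", "month": "2016-10", "bcwp": None, "acwp": 75.0, "bac": 180.0, "eac": 60.0},
]

def anomalies(record):
    """Flag the kinds of data-reliability problems discussed in this report."""
    flags = []
    if record["bcwp"] is None:
        flags.append("missing value for work performed to date")
    elif record["bcwp"] < 0:
        flags.append("negative value implies undoing previously performed work")
    if record["eac"] == 0 and record["bac"] > 0:
        flags.append("estimate at completion of zero against a nonzero budget")
    elif record["eac"] < 0.5 * record["bac"]:
        flags.append("estimate at completion less than half the original budget")
    if record["acwp"] > record["eac"]:
        flags.append("actual costs to date already exceed the estimate at completion")
    return flags

for record in records:
    for problem in anomalies(record):
        print(f'Contract {record["contract"]}, {record["month"]}: {problem}')
```

Each flagged condition corresponds to a problem described earlier in this report, such as negative or missing values for completed work and estimates at completion that were implausibly low or already exceeded by actual costs.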
To provide a score for our analysis, we used the following 5-point scoring system to score the answer for each contract for each best practice: "fully met" means that complete evidence was provided that satisfied the best practice; "substantially met" means that evidence was provided that satisfied a large portion of the best practice; "partially met" means that evidence was provided that satisfied about half of the best practice; "minimally met" means that evidence was provided that satisfied a small portion of the best practice; and "did not meet" means that no evidence was provided that satisfied the best practice. For each best practice, we color-coded the assessment at the contract level. Contracts that fully met or substantially met the criteria were coded green, those that partially met the criteria were coded yellow, and those that minimally met or did not meet the criteria were coded red. We then assigned a score for each color: 1 for red, 3 for yellow, and 5 for green. We determined the overall score for each best practice by taking the average across the 20 contracts we reviewed. After scoring each best practice individually, we then used these scores to develop an average score for the three EVM characteristics: whether EM has ensured that these EVM systems (1) are comprehensive, (2) provide reliable data, and (3) are used by EM leadership for decision-making. To examine EM's use of performance metrics data, we reviewed annual performance metrics collected by EM headquarters for every operations activity from 2010 to 2017. We chose this period because 2010 is when EM started classifying work as operations activities, while 2017 was the most recent available data at the time of our analysis. We reviewed relevant documentation, and interviewed agency officials knowledgeable about those data, among other things. Specifically, we interviewed DOE and EM officials at headquarters and from the five cleanup sites (including in-person interviews at the Savannah River and Idaho sites). We also reviewed our prior work in GAO-19-207 related to EM's cleanup agreements and milestones. We conducted this performance audit from April 2017 to February 2019, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: EM's Program-wide Performance Metrics Presented to Congress, as of the end of Fiscal Year 2017
The information in this table is from DOE's fiscal year 2019 budget request, which was the most recent request presented to Congress. DOE, Department of Energy: FY 2019 Congressional Budget Request for Environmental Management, DOE/CF-0142, Vol. 5 (Washington, D.C.: March 2018).
Appendix III: GAO Assessment of How Earned Value Management Systems Used for EM's Operations Activities Met Best Practices
GAO's assessments of the individual best practices were as follows:
Substantially met. Seventeen out of 20 contracts we reviewed had a certified EVM system, of which 4 self-certified. EM officials reported that the remaining three contracts were not certified or were not required to be certified.
Partially met. Thirteen out of the 20 contracts we reviewed had conducted or planned to conduct an integrated baseline review to ensure that the performance measurement baseline provides reliable cost and schedule data for managing the program and projecting accurate estimates at completion. However, many of these reviews were not rigorous enough to ensure that the performance measurement baseline captured all of the work.
Not assessed.
Partially met. Eleven out of the 20 contracts fully met this best practice, and contractors performed self-assessments or conducted annual reviews for 5 additional contracts. However, EM field and headquarters officials were not performing thorough reviews to check whether the EVM systems were in alignment with the EIA-748-D guidelines to ensure that the data being reported by the systems were reliable.
Partially met. The EVM data for operations activities contracts contained numerous, unexplained anomalies in all the months we reviewed—including missing or negative values for some of the completed work to date. Having anomalies in the EVM data occurring each month can cause potential distortions resulting in inaccurate projections of estimates at completion.
Not assessed.
Minimally met. We found problems with the estimate at completion in all of the 20 contracts we analyzed. For example, we found instances where the estimates at completion were either (1) less than half the original budget, (2) higher than expected, or (3) zero when the original budget was for hundreds of millions of dollars. These problems indicated that the EVM systems were not being updated in a timely manner or were not well monitored since the estimate at completion values were too optimistic and highly unlikely.
Partially met. We reviewed two sources of information on earned value management reporting to EM senior leadership for this best practice. (1) When reviewing the monthly reports EM sites present to EM headquarters management, we found that none of the sites adequately reported EVM data. (2) When reviewing the new monthly report format that EM's Office of Project Management has presented to EM headquarters senior leadership since October 2017, we found that EM reported on the performance of 15 out of the 20 contracts. We found that these reports included most of the EVM indicators for all 15 contracts on which EM Office of Project Management reported. However, this monthly report uses unreliable EVM data, as we found in the prior characteristic.
Partially met. We reviewed two sources of information on earned value management reporting to EM senior leadership for this best practice. (1) When reviewing the monthly reports EM sites present to EM headquarters management, we found that they contained corrective action plans for only 3 contracts. (2) When reviewing the new monthly reports that EM's Office of Project Management has presented to EM headquarters senior leadership since October 2017, EM Office of Project Management officials stated that they only began suggesting corrective action to EM headquarters senior leadership in early 2018; it is too soon to tell how EM headquarters senior leadership is using this information to determine which contracts need the most attention and which corrective actions management will develop and take.
Substantially met. EM provided evidence that 17 out of 20 contractors had a formal process in place for updating the budget baseline. However, the extent to which contractors followed their processes was questionable given the problems we found with the estimates at completion, as discussed in the prior characteristic above.
Appendix IV: EM's Earned Value Management Systems Used by Contracts Containing Operations Activities
Assessment for each best practice: Not met—provided no evidence that satisfies any of the best practice; Minimally met—provided evidence that satisfies a small portion of the best practice; Partially met—provided evidence that satisfies about half of the best practice; Substantially met—provided evidence that satisfies a large portion of the best practice; and Met—provided complete evidence that satisfies the entire best practice. We determined the overall score for each best practice by taking the average across the 20 contracts we reviewed. We did not evaluate the following two best practices: (1) the schedule reflects the work breakdown structure, the logical sequencing of activities, and the necessary resources and (2) EVM data are consistent among various reporting formats. We excluded these two best practices because we examined the use of EVM by contractors at a higher program level and did not conduct in-depth analysis of each contractor's EVM system. EM uses 21 contracts for its operations activities. We reviewed the use of EVM systems in 20 of these contracts because one contract (contract K) is a fixed price contract, which does not require the use of EVM.
Appendix V: Comments from the Department of Energy
Appendix VI: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Nico Sloss (Assistant Director), Cristian Ion (Analyst in Charge), Nathan Anderson, Margot Bolon, Jenny Chow, Jennifer Echard, Juan Garay, Cindy Gilbert, Katherine Nicole Laubacher, Cynthia Norris, Karen Richey, Dan C. Royer, Kiki Theodoropoulos, and David Wishard made key contributions to this report.
Why GAO Did This Study
EM's mission is to complete the cleanup of nuclear waste at 16 DOE sites and to work to reduce risks and costs within its established regulatory framework. In December 2018, DOE reported that it faced an estimated $494 billion in future environmental cleanup costs—a liability that roughly tripled during the previous 20 years. GAO was asked to examine EM's operations activities. This report examines, among other objectives, (1) how EM manages its cleanup work and (2) the extent to which EM's cleanup policy follows selected leading practices for program and project management. To do this work, GAO reviewed agency documents and interviewed DOE project management experts and EM officials. GAO compared EM's policy with selected leading practices endorsed by the Project Management Institute for program and project management related to scope, cost, schedule, and independent review.
What GAO Found
The Department of Energy's (DOE) Office of Environmental Management (EM) manages most of its cleanup of nuclear waste (77 percent of its fiscal year 2019 budget) under a category that EM refers to as operations activities, using less stringent requirements than a category of work known as capital asset projects. (See figure.) Capital asset projects—which involve the acquisition of land and other assets, including through environmental remediation—must undergo a series of reviews by independent experts and DOE's senior leadership. In contrast, operations activities are not reviewed outside of EM. EM's policy defines operations activities as reoccurring facility or environmental operations, as well as activities that are project-like, with defined start and end dates. EM cleanup site managers have discretion on how to classify cleanup work because DOE and EM have not established classification requirements. Since 2015, experts in DOE's Office of Project Management have raised concerns that some operations activities should be classified as capital asset projects, and that managing them under less stringent requirements poses cost and schedule risks. For example, the experts stated the cleanup of tanks of radioactive liquid waste should be designated as capital asset projects. However, these experts also stated that EM did not respond to their concerns, even though the office has department-wide responsibilities for overseeing project management. Until EM works with DOE's Office of Project Management to establish requirements for classifying cleanup work, the department may incur more cost and schedule risks than it should. EM's cleanup policy does not follow any of 9 selected program management leading practices or 9 of 12 selected project management leading practices. For example, EM's 2017 cleanup policy does not follow the program management leading practice of conducting risk management throughout the life of a program or the project management leading practice of requiring independent reviews of operations activities. These leading practices help ensure that a program optimizes scope, cost, and schedule performance and that it achieves its goals and intended benefits. Until EM revises its cleanup policy to follow leading practices, EM's operations activities are at risk of uncontrolled changes to scope, exceeding initial budget and schedule, and failing to meet their original goals.
What GAO Recommends
GAO is making seven recommendations, including that EM (1) establish cleanup work classification requirements and (2) revise its cleanup policy to follow program and project management leading practices. DOE generally agreed with GAO's recommendations.
Background
The FAR establishes several types of source selection procedures, which include the tradeoff procedure on one end of the best value continuum and LPTA procedures on the other end (see fig. 1). DOD may elect to use the LPTA procedure where the requirement is clearly defined and the risk of unsuccessful contract performance is minimal. In such cases, DOD has determined that cost or price should play a dominant role in the source selection. When using LPTA procedures, DOD specifies its minimum requirements in the solicitation. Firms submit their proposals, and DOD determines which of the proposals meet or exceed those requirements; no tradeoffs between cost or price and non-cost factors (for example, technical capabilities or past performance) are permitted; and the award is made based on the lowest priced, technically acceptable proposal submitted to the government (a simplified sketch of this decision rule appears below). Non-cost factors are rated on an acceptable or unacceptable basis. By contrast, DOD may elect to use tradeoff procedures in acquisitions where the requirement is less definitive, more development work is required, or the acquisition has a greater performance risk. In these instances, non-cost factors may play a dominant role in the source selection process. Tradeoffs between price and non-cost factors allow DOD to accept other than the lowest priced proposal. The FAR requires DOD to state in the solicitation whether all evaluation factors other than cost or price, when combined, are significantly more important than, approximately equal to, or significantly less important than cost or price. DOD's March 2016 Source Selection guide offers additional guidance regarding the use of LPTA source selection procedures. The guidance is mandatory for acquisitions conducted as part of a major system acquisition program and all competitive FAR part 15 procurements with an estimated value over $10 million. The guidance states that LPTA procedures may be used in situations where there would not be any value in a product or service exceeding the required technical or performance requirements. The guidance also states that such situations may include acquisitions for well-defined, commercial, or non-complex products or services where the risk of unsuccessful contract performance is minimal, and when it has been determined there would be no need or value to pay more for higher performance. Section 813 of the fiscal year 2017 NDAA required that DOD revise the DFARS to require that LPTA procedures be used only in situations when the following six criteria are met:
1. DOD can clearly describe the minimum requirements in terms of performance objectives, measures, and standards that will be used to determine acceptability of offers;
2. DOD would realize no, or little, value from a proposal exceeding the solicitation's minimum technical requirements;
3. The proposed technical approaches can be evaluated with little or no subjectivity as to the desirability of one versus the other;
4. There is a high degree of certainty that a review of technical proposals other than that of the lowest-price offeror would not identify factors that could provide other benefits to the government;
5. The contracting officer has included a justification for the use of LPTA procedures in the contract file; and
6. The lowest price reflects full life-cycle costs, including for operations and support.
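As a simplified illustration of the LPTA decision rule described above, the following sketch selects the lowest priced technically acceptable offer from a set of hypothetical offers; the firm names, prices, and ratings are assumptions for illustration and do not reflect any actual DOD source selection.

```python
# Hypothetical offers (prices in millions of dollars); the acceptability ratings
# stand in for the pass/fail evaluation of non-cost factors under LPTA.
offers = [
    {"firm": "Firm 1", "price": 24.7, "technically_acceptable": True},
    {"firm": "Firm 2", "price": 21.5, "technically_acceptable": False},
    {"firm": "Firm 3", "price": 27.9, "technically_acceptable": True},
]

def lpta_award(offers):
    """Return the lowest priced technically acceptable offer, or None."""
    acceptable = [o for o in offers if o["technically_acceptable"]]
    # No tradeoffs are permitted: among acceptable offers, price alone decides.
    return min(acceptable, key=lambda o: o["price"]) if acceptable else None

winner = lpta_award(offers)
print(winner["firm"] if winner else "No technically acceptable offers received")
# Firm 1 wins even though Firm 2 is cheaper, because Firm 2 was rated unacceptable.
```

Under the tradeoff procedure, by contrast, the government could select Firm 3's higher priced offer if its non-cost advantages were judged to be worth the additional cost.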
Section 813 also established that the implementing revisions to the DFARS were to be completed within 120 days of enactment of the NDAA, but the revisions had not been put in place as of October 2017. DOD officials stated that the changes to the DFARS are currently in progress.
Past GAO Reports on DOD Source Selection Procedures
In 2010 and 2014, we reported on DOD's use of best value tradeoff source selection procedures. In 2010, we found that, for 60 of the 88 contracts reviewed, DOD used a tradeoff process and weighted non-cost factors as more important than price. In these cases, DOD was willing to pay more when a firm demonstrated it understood complex technical issues more thoroughly, could provide a needed good or service to meet deadlines, or had a proven track record in successfully delivering products or services of a similar nature. In addition, we determined that when making tradeoff decisions, DOD selected a lower priced proposal nearly as often as it selected a higher technically rated, but more costly, proposal. In so doing, DOD chose not to pay more than $800 million in proposed costs by selecting a lower priced offer over a higher technically rated offer in 18 contracts. The majority of solicitations where non-cost factors were equal to or less important than cost were for less complex requirements. DOD faced several challenges when using best value tradeoff procedures, including the difficulties in developing meaningful evaluation factors, the additional time investment needed to conduct best value procurements, and the greater level of business judgment required of acquisition staff when compared to other acquisition approaches. To help DOD effectively employ the best value tradeoff process, we recommended that DOD develop training elements such as case studies that focus on reaching tradeoff decisions. DOD concurred and implemented the recommendation in August 2012. In 2014, we found that DOD had increased its use of LPTA procedures for new contracts with obligations over $25 million—using LPTA source selection procedures to award an estimated 36 percent of new fiscal year 2013 contracts compared to 26 percent in fiscal year 2009—and that officials' decisions on which source selection method would be used were generally rooted in knowledge about the requirements and contractors. For contracts with obligations over $25 million, DOD used LPTA source selection procedures primarily to acquire commercial products such as fuel, and we identified relatively few uses of LPTA to acquire higher dollar services. For contracts with obligations over $1 million and under $25 million, DOD used LPTA procedures an estimated 45 percent of the time for a mix of products and services, including fuel, aircraft parts, computer equipment, construction-related services, engineering support services, and ship maintenance and repairs. We did not make recommendations to DOD in this report.
DOD Used LPTA Procedures Infrequently for Contracts Valued at $10 Million or More for Information Technology or Support Services
The Army, Navy, and Air Force rarely used LPTA source selection procedures for IT and support services contracts valued at $10 million or more that were awarded in the first half of fiscal year 2017. Our analysis found that the three military departments awarded 781 new contracts valued at $10 million or more during this time frame. Of these 781 contracts, 133 contracts were awarded for IT and support services.
However, only 9 of the 133 contracts used LPTA source selection procedures (see fig. 2). Table 1 provides information on the 7 contracts we reviewed that were awarded in the first half of fiscal year 2017 that used LPTA source selection procedures. As previously noted, we excluded 2 of the 9 contracts from further review due to bid protests.
Factors DOD Officials Considered When Determining to Use LPTA Procedures
DOD Officials Considered Several Factors, Including the Nature of the Requirement, When Determining to Use LPTA Procedures
Contracting officials cited a number of factors that were considered when determining to use LPTA procedures in the 7 selected contracts we reviewed. For all of the contracts, officials determined that the government would not receive a benefit for paying more than the lowest price. For these contracts, contracting officials also stated that LPTA procedures were used, in part, because the requirements were well-defined, non-complex, or reoccurring. Additional details on the contracts follow.
The Army awarded an IDIQ contract, with a one-year base period and four 1-year options, for support services in Afghanistan with an estimated ceiling value of $85,000,000. This is a reoccurring requirement to hire Afghan nationals to provide on-site construction management, engineering, and technical support services for reconstruction projects throughout Afghanistan. The acquisition plan states that Afghan nationals can move about the country more freely than U.S. personnel. Further, a contracting official stated that it was determined that no additional value would be gained by paying a premium for these services and that the lowest price was the best choice. In addition, to mitigate risk of poor performance, one requirement of the contract is to maintain a qualified workforce. Officials stated that approximately 90 percent of personnel performing on the previous contract are working on the current contract.
The Air Force awarded three contracts for base operation support services—vehicle maintenance, airfield maintenance, fuel management, and traffic management—at an Air Force Reserve Base and two Air Reserve Stations. All of the contracts were awarded with a one-month orientation period, one-year base period, four 1-year options, and a final 6-month option, with total estimated values ranging from $24.7 million to $38.2 million. Acquisition plans for these requirements stated that the services were well defined. Additionally, contracting officials stated that there is at least a decade of past experience with these requirements, and, as a result, the requirements are well known.
The Air Force awarded a contract for centralized mail sorting services in Germany. The contract consists of a 2-month phase-in period, a 2-month base period, four 1-year options, and one 8-month option, with a total estimated value of approximately $21.5 million. The acquisition plan for this requirement stated that an LPTA source selection procedure was chosen because the requirement was well-defined and not technically complex. For example, the acquisition plan noted that there was more than a decade of historical data that helped define and estimate the volume of mail that would need to be sorted. Contracting officials reiterated that LPTA was used since the service was well-defined, the risk of poor performance was low, and that it was determined that additional tradeoffs would not provide an additional benefit to the taxpayer.
The Army awarded an IDIQ contract to look for vulnerabilities in software code. The contract, which was set aside for small businesses, had a 5-year ordering period and an estimated ceiling value of $17.1 million. The contractor was required to perform a software review using several government approved code analysis tools and then characterize any potential vulnerabilities identified by the tools in terms of risk levels prescribed by established government cybersecurity standards. Army requirements officials stated that they determined there was no additional value to be gained from additional innovations in doing either task. Our review, however, found some indication that the requirement might not have been clearly understood by offerors. For example, the Army received 12 offers which ranged from $800,177 to $46,680,003. The contracting officer attributed the range of offers to the inexperience of some offerors with preparing proposals or misunderstanding this type of requirement, and the two lowest offers were determined to be technically unacceptable.
The Navy awarded a contract to perform commercially available monthly telephone maintenance, which includes preventive and remedial maintenance on a specific brand of phone systems that Navy locations in California use. The contract consists of a one-year base period and two 1-year options, with an estimated total value of approximately $15.9 million. The acquisition plan stated that only certified authorized dealers could perform maintenance on these phones. A contracting official stated the requirement was well-defined and required the highest tier of maintenance options that could be offered, and, as a result, there was no tradeoff available. The highest tier requires that maintenance be available 24 hours a day, 7 days a week in multiple Navy locations, and that the contractor must respond to emergencies within 15 minutes during normal business hours. The contract also includes maintenance for all switches, inside wiring and any necessary relocation services, among other support requirements.
Factors Cited by Contracting Officials When Choosing LPTA Procedures Were Generally Consistent with Criteria Listed in Section 813
1. One contracting official determined that minimum performance requirements for the $15.9 million contract for monthly telephone maintenance services could be described using objective performance measures, and the contract documents showed the technical acceptability of offers was tied to the description of these requirements in the statement of work. In another example, documents related to the award of a $27.9 million Air Force contract for base operations services show performance objectives and standards set forth as evaluation factors.
2. LPTA procedures may be used only when DOD would realize little or no value from a proposal that exceeds the solicitation's minimum technical requirements. Our interviews with contracting officials and review of contract documents found that in each case, DOD officials assessed whether the department could receive value from a contract awarded on a tradeoff basis where the proposal exceeded the minimum technical requirements, and determined that there would be no additional value to be gained.
3. Most officials said they felt that it was possible to evaluate the proposals they received with little subjectivity, although they had not always explicitly made and documented this assessment.
Officials for two contracts stated, for example, that the threshold question of technical acceptability for their contracts was whether the offering firms possessed certain licenses or accreditation to perform services on specific equipment or in specific locations. No subjectivity was involved in this assessment; therefore, they viewed the question of technical acceptability as essentially objective. However, because they were not required to document this assessment, contract documents did not provide evidence of an assessment of subjectivity.
4. Officials for most of the contracts we reviewed stated they had determined that a review of technical proposals other than that of the lowest-price offeror would not identify factors that could provide other benefits. In one case, officials ultimately reviewed additional proposals, which is allowed under current DOD source selection guidance.
DOD's March 2016 source selection guidance does not require contracting officers to consider the fifth and sixth criteria listed in Section 813. Accordingly, we found that contracting officers did not always document justifications for choosing LPTA procedures and did not determine that the lowest price offered reflected full life-cycle costs. Specifically, we found that:
5. Although the files for all 7 contracts contained some record of the choice of LPTA source selection procedures, files for 3 of the 7 contracts simply stated that LPTA procedures would be used and did not include an explanation or justification for the choice. Only the documents for the four Air Force contracts included some explanation of the reasons for choosing LPTA source selection procedures. While not required by DOD source selection guidance when our selected contracts were approaching source selection, providing a justification for using LPTA is one of the criteria that Section 813 requires DOD to include among the revisions to the DFARS.
6. None of the officials for our selected contracts had confirmed that the lowest price offered reflected full life-cycle costs, which is one of the criteria that Section 813 requires DOD to include among the revisions to the DFARS. For the mail delivery, telephone maintenance, and base operations support contracts we reviewed, two contracting officials noted that full life-cycle costs were not applicable and a third stated that life-cycle costs cannot be determined for a service contract. As previously noted, all of the contracts in our review were for services, not for products. A Defense Procurement and Acquisition Policy official acknowledged that the application of the criterion could cause confusion and that DOD officials are considering this issue as part of efforts to revise the DFARS.
As previously noted, DOD is currently developing the revisions to the DFARS that are contemplated by Section 813. DOD officials could not provide a specific timeframe for when the DFARS would be revised, noting that the revisions would need to be reviewed by the Office of Information and Regulatory Affairs at the Office of Management and Budget, and then released for public comment before the revisions could be finalized.
Agency Comments
We are not making any recommendations in this report. We provided a draft of this report to DOD for comment. DOD had no comments on the draft report. We are sending copies of this report to appropriate congressional committees and the Secretary of Defense. The report will be available at no charge on GAO's website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-4841 or dinapolit@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I.
Appendix I: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, the following staff members made key contributions to this report: Justin Jaynes (Assistant Director), Matthew T. Crosby, Lorraine Ettaro, Stephanie Gustafson, Julia Kennon, Victoria Klepacz, W. William Russell, Roxanna Sun, Ann Marie Udale, Khristi Wilkins, and Lauren Wright.
Why GAO Did This Study

DOD obligated about $300 billion through contracts for goods and services in fiscal year 2016. When awarding a contract competitively, DOD may use the LPTA source selection process to select the lowest-priced offer that is technically acceptable. In contrast, DOD may use the trade-off source selection process to award a higher-priced contract to a firm if the firm's offer provides greater benefit and it is worth paying the additional cost. The National Defense Authorization Act for Fiscal Year 2017 calls on DOD to avoid using the LPTA process for information technology, cybersecurity, and other knowledge-based professional support services. The Act also included a provision for GAO to report on DOD's use of LPTA procedures for contracts valued at more than $10 million.

This report assesses the (1) extent to which DOD used LPTA procedures for certain services, and (2) factors that contracting officials considered when deciding to use LPTA procedures. GAO reviewed data from the Federal Procurement Data System-Next Generation to identify 781 contracts valued at $10 million or above awarded by the Army, Navy, and Air Force in the first half of fiscal year 2017, the most recent period for which data were available. GAO then selected 133 of these contracts for information technology and support services, which include services reflected in the Act. GAO identified that 9 contracts used LPTA procedures and reviewed 7 of these, including interviewing officials and reviewing contract documents. DOD had no comments on the draft report.

What GAO Found

During the first half of fiscal year 2017, the Army, Navy, and Air Force rarely used lowest price technically acceptable (LPTA) source selection procedures when awarding contracts valued at $10 million or more for the types of services identified by the National Defense Authorization Act, such as information technology services. Department of Defense (DOD) guidance states that LPTA procedures are typically for requirements that are well-defined, commercial, or non-complex products or services with a minimal risk of unsuccessful contract performance. The figure shows the military departments' limited use of LPTA procedures for contracts for selected services.

For the 7 contracts that GAO reviewed, contracting officials determined that the government would not receive a benefit for paying more than the lowest price. Contracting officials also stated that LPTA was used, in part, because the requirements were well-defined, non-complex, or reoccurring. For example, the Navy used LPTA procedures to award a contract for commercially available monthly telephone maintenance services. In addition, the Air Force used LPTA procedures to award a contract for mail sorting and delivery.

Section 813 of the fiscal year 2017 National Defense Authorization Act requires DOD to amend its regulations to require contracting officers to consider specific criteria when deciding to use LPTA procedures. DOD has not yet revised its regulations to implement Section 813. Nevertheless, for the 7 contracts GAO reviewed, contracting officials' considerations when choosing to use LPTA procedures were often consistent with most of these new criteria. DOD officials are currently developing the revisions to the Defense Federal Acquisition Regulation Supplement that are contemplated by Section 813.
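To illustrate the basic distinction between the two source selection approaches described above, the following sketch contrasts an LPTA-style selection (lowest price among technically acceptable offers) with a simplified trade-off-style selection. This is a minimal, hypothetical illustration of the selection logic only; the Offer fields, the pass/fail acceptability flag, and the weighting used in the trade-off function are assumptions for demonstration and do not reflect DOD's actual evaluation procedures under the FAR or DFARS.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Offer:
    firm: str
    price: float                  # total evaluated price (hypothetical)
    technically_acceptable: bool  # pass/fail result of the technical evaluation
    technical_score: float = 0.0  # 0-100 rating, used only in the trade-off sketch

def lpta_select(offers: List[Offer]) -> Optional[Offer]:
    """LPTA: among offers rated technically acceptable, award to the lowest price."""
    acceptable = [o for o in offers if o.technically_acceptable]
    return min(acceptable, key=lambda o: o.price) if acceptable else None

def tradeoff_select(offers: List[Offer], weight_on_technical: float = 0.6) -> Optional[Offer]:
    """Simplified trade-off: a higher-priced offer can win if its technical merit
    is judged worth the added cost (approximated here with an assumed weighted score)."""
    if not offers:
        return None
    max_price = max(o.price for o in offers)
    def value(o: Offer) -> float:
        price_score = 100.0 * (1.0 - o.price / max_price) if max_price else 0.0
        return weight_on_technical * o.technical_score + (1.0 - weight_on_technical) * price_score
    return max(offers, key=value)

offers = [
    Offer("Firm A", price=9.5e6, technically_acceptable=True, technical_score=70),
    Offer("Firm B", price=8.9e6, technically_acceptable=False, technical_score=40),
    Offer("Firm C", price=11.2e6, technically_acceptable=True, technical_score=95),
]
print(lpta_select(offers).firm)      # Firm A: lowest-priced technically acceptable offer
print(tradeoff_select(offers).firm)  # Firm C: higher price accepted for greater technical merit
```

The sketch mirrors the contracting officials' reasoning described above: under LPTA, once an offer clears the technical threshold, no additional credit is given for exceeding it, which is why officials used it only where they determined the government would not benefit from paying more than the lowest price.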
Background

DOD is the largest U.S. federal department and one of the most complex organizations in the world. In support of its military operations, the department manages many interdependent business functions, including logistics management, procurement, health care management, and financial management. DOD relies extensively on IT to support its business functions. According to its IT investment data, the department has 2,097 business system investments. The department's fiscal year 2018 IT budget request states that DOD plans to spend about $8.7 billion in fiscal year 2018 on its defense business systems.

The IT budget organizes investments by mission areas. The four mission areas are enterprise information environment, business, warfighting, and defense intelligence. Figure 1 shows the amount of DOD's requested fiscal year 2018 IT budget that the department plans to spend on each mission area. The department further organizes its IT budget by segments. For example, the business mission area includes segments such as financial management, health, and human resource management. Figure 2 shows the department's projected fiscal year 2018 spending for each segment in the business mission area.

GAO designated the department's business systems modernization efforts as high risk in 1995 and has continued to do so in the years since. DOD currently bears responsibility, in whole or in part, for half of the programs (17 of 34 programs) across the federal government that we have designated as high risk. Seven of these areas are specific to the department, and 10 other high-risk areas are shared with other federal agencies. Collectively, these high-risk areas are linked to the department's ability to perform its overall mission and affect the readiness and capabilities of U.S. military forces. DOD's business systems modernization is one of the department's specific high-risk areas and is essential for addressing many of the department's other high-risk areas. For example, modernized business systems are integral to the department's efforts to address its financial and supply chain high-risk areas.

Since 2005, we have issued 11 reports in response to mandates directing GAO to assess DOD's actions to respond to business system modernization provisions contained in Section 2222 of Title 10, United States Code. These reports contained 23 recommendations to help strengthen the department's management of its business systems. As of September 2017, the department had implemented 13 of the recommendations and 2 had been closed as not implemented. The other 8 recommendations remain open. The 11 reports are listed in appendix II.

The NDAA for Fiscal Year 2016 Included Provisions for Managing Defense Business Systems

The NDAA for Fiscal Year 2016 included provisions requiring DOD to perform certain activities aimed at ensuring that its business system investments are managed efficiently and effectively. Specifically, the act established requirements for the department related to issuing policy and guidance for managing defense business systems; developing and maintaining a defense business enterprise architecture; establishing a Defense Business Council to provide advice to the Secretary on managing defense business systems; and obtaining approvals before systems proceed into development (or if no development is required, into production or fielding) and related annual reviews.
According to the Joint Explanatory Statement accompanying the NDAA for Fiscal Year 2016, the act revised Section 2222 of Title 10, United States Code, to streamline requirements and clarify the responsibilities of senior officials related to acquiring and managing business systems. Key revisions pertain to:

Covered defense business systems. The code previously defined a covered defense business system as a system having a total cost of over $1 million over the period of the future-years defense program. As revised, the code now defines a covered defense business system as a system that is expected to have a total amount of budget authority over the period of the current future-years defense program of over $50 million.

Priority defense business systems. The act established a new category of system, called a priority defense business system. This is a system that is (1) expected to have a total amount of budget authority of over $250 million over the period of the current future-years defense program, or (2) designated by the DCMO as a priority defense business system based on specific program analyses of factors including complexity, scope, and technical risk, and after notification to Congress of such designation.

Thresholds and officials responsible for review and certification of defense business systems. The code previously stated that the DCMO had responsibility for reviewing and certifying all defense business system investments over $1 million over the future-years defense program. The revised code states that, unless otherwise assigned by the Secretary of Defense, military department Chief Management Officers (CMO) are to have approval authority for their covered defense business system investments below $250 million over the future-years defense program. The DCMO is to have approval authority for defense business systems owned by DOD components other than the military departments, systems that will support the business process of more than one military department or other component, and priority defense business systems.

Certification requirements. The code previously required that a defense business system program be reviewed and certified, at least annually, on the basis of its compliance with the business enterprise architecture and appropriate business process reengineering. In addition to these requirements, the revised code requires that the business system program be reviewed and certified on the basis of having valid, achievable requirements and a viable plan for implementing the requirements; having an acquisition strategy designed to eliminate or reduce the need to tailor commercial off-the-shelf systems; and being in compliance with the department's auditability requirements.

Key Roles and Responsibilities for Managing Defense Business Systems

DOD Instruction 5000.75: Business Systems Requirements and Acquisition assigns roles and responsibilities for managing defense business system investments. Table 1 identifies the key entities and their responsibilities for managing defense business system investments.

DOD Has Made Progress in Complying with Legislative Provisions for Managing Defense Business Systems, but More Remains to Be Done

DOD has taken steps to address provisions of the NDAA for Fiscal Year 2016 related to defense business system investments.
Specifically, as called for in the act, the department has established guidance that addresses most legislative requirements for managing its defense business systems; however, the military departments are still developing guidance to fully address certification requirements for their systems. Further, DOD has developed a business enterprise architecture and is in the process of updating the architecture to improve its content. The department also has a plan to improve the usefulness of the business enterprise architecture; however, the department has not delivered the plan's intended capabilities. In addition, the department is in the process of updating its IT enterprise architecture; however, it does not have a plan for improving the department's IT and computing infrastructure for each of the major business processes. Further, the department has not yet demonstrated that the business enterprise architecture and the IT enterprise architecture are integrated. The department fully addressed the act's requirement related to defense business system oversight. Specifically, the department's governance board, called the Defense Business Council, addressed legislative provisions to provide advice to the Secretary of Defense. Lastly, DOD and the military departments did not apply new legislative requirements when certifying business systems for fiscal year 2017. Instead, the DOD DCMO certified the systems in our sample in accordance with the previous fiscal year's (fiscal year 2016) certification requirements.

DOD Issued Guidance Addressing Most Legislative Requirements for Managing Business Systems

The NDAA for Fiscal Year 2016 required the Secretary of Defense to issue guidance by December 31, 2016 to provide for the coordination of, and decision making for, the planning, programming, and control of investments in covered defense business systems. The act required this guidance to address six elements:

Policy to ensure DOD business processes are continuously reviewed and revised to implement the most streamlined and efficient business processes practicable and eliminate or reduce the need to tailor commercial off-the-shelf systems to meet or incorporate requirements or interfaces that are unique to the department.

A process to establish requirements for covered defense business systems.

Mechanisms for planning and controlling investments in covered defense business systems, including a process for the collection and review of programming and budgeting information for covered defense business systems.

Policy requiring the periodic review of covered defense business systems that have been fully deployed, by portfolio, to ensure that investments in such portfolios are appropriate.

Policy to ensure full consideration of sustainability and technological refreshment requirements, and the appropriate use of open architectures.

Policy to ensure that best acquisition and systems engineering practices are used in the procurement and deployment of commercial systems, modified commercial systems, and defense-unique systems to meet DOD missions.

Of these six elements called for by the act, the department has issued guidance that fully addresses four elements and partially addresses two elements. Table 2 summarizes our assessment of DOD's guidance relative to the act's requirements.

DOD fully addressed the element requiring policy to ensure that the business processes of the department are continuously reviewed and revised.
For example, DOD Instruction 5000.75 requires the functional sponsor of a defense business system to engage in continuous process improvement throughout all phases of the business capability acquisition cycle. The department also fully addressed the element to provide a process for establishing requirements for covered defense business systems with DOD Instruction 5000.75, which introduces the business capability acquisition cycle for business system requirements and acquisition.

In addition, DOD fully addressed the element to provide mechanisms for planning and controlling investments in covered defense business systems. Specifically, the department's Financial Management Regulation; Directive 7045.14 on its planning, programming, budgeting, and execution process; and the April 2017 Defense Business System Investment Management Guidance provide such mechanisms. For example, the April 2017 investment management guidance includes a process, called the integrated business framework, which the department is to follow for selecting, managing, and evaluating the results of investments in defense business systems. In addition, the directive assigns the DOD CIO responsibility for participating in the department's annual resource allocation process and for advising the Secretary and Deputy Secretary of Defense on IT resource allocations and investment decisions.

Further, DOD fully addressed the requirement for a policy requiring the periodic review of covered business systems that have been fully deployed, by portfolio, to ensure that investments in such portfolios are appropriate. Specifically, the department's April 2017 Defense Business System Investment Management Guidance requires the department to annually review an organization's plan for managing its portfolio of defense business systems over the period of the current future-years defense program (e.g., Army's plan for its financial management systems) to ensure, among other things, that the portfolio is aligned with applicable functional strategies (e.g., DOD's strategy for its financial management functional area).

DOD partially addressed the element requiring policy to ensure full consideration of sustainability and technological refreshment requirements, and the appropriate use of open architectures. Specifically, the department established policy requiring consideration of open architectures, but it has not established policy requiring consideration of sustainability and technological refreshment requirements. The Office of the DCMO stated that future guidance is expected to provide a policy to ensure full consideration of sustainability and technological refreshment requirements. However, the department could not provide a time frame for when the guidance will be developed and issued. Without a policy requiring full consideration of sustainability and technological refreshment requirements for its defense business system investments, the department may not be able to ensure that it has a full understanding of the costs associated with these requirements. As a result, the department may not be able to effectively manage spending on these systems.

DOD has also partially addressed the element requiring policy to ensure that best acquisition and systems engineering practices are used in the procurement and deployment of commercial, modified-commercial, and defense-unique systems.
Specifically, the department has established policy requiring the acquisition of business systems to be aligned with commercial best practices and to minimize the need for customization of commercial products to the maximum extent possible. On the other hand, the department has not established policy to ensure the use of best systems engineering practices. With regard to this finding, officials in the Office of the DCMO asserted that DOD Instruction 5000.75 addresses the requirement. However, while the instruction requires the system acquisition strategy to include a description of how the program plans to leverage systems engineering, it does not require the use of best systems engineering practices. Without a policy requiring the use of best systems engineering practices in the procurement and deployment of commercial, modified, and defense-unique systems, the department may be limited in its ability to effectively balance meeting system cost and performance objectives.

DOD Issued Guidance that Addresses New Certification Requirements, and the Military Departments Have Made Mixed Progress in Issuing Supporting Guidance

In addition to guidance for addressing the aforementioned legislative requirements for business systems management, the NDAA for Fiscal Year 2016 requires the Secretary to direct the DCMO and the CMO of each of the military departments to issue and maintain supporting guidance, as appropriate and within their respective areas of responsibility. In this regard, one of the key areas for which the DCMO and military department CMOs are to provide supporting guidance is the review and certification of defense business systems in accordance with specific requirements. Specifically, the act requires that, for any fiscal year in which funds are expended for development or sustainment pursuant to a covered defense business system program, the appropriate approval official is to review the system to determine if the system:

has been, or is being, reengineered to be as streamlined and efficient as practicable, and whether the implementation of the system will maximize the elimination of unique software requirements and unique interfaces;

is in compliance with the business enterprise architecture or will be in compliance as a result of planned modifications;

has valid, achievable requirements, and a viable plan for implementing those requirements (including, as appropriate, market research, business process reengineering, and prototyping activities);

has an acquisition strategy designed to eliminate or reduce the need to tailor commercial off-the-shelf systems to meet unique requirements, incorporate unique requirements, or incorporate unique interfaces to the maximum extent practicable; and

is in compliance with the department's auditability requirements.

The act and DOD Instruction 5000.75 define the systems that the DOD DCMO is responsible for certifying and the systems that military department CMOs are responsible for certifying. Consistent with the act, in April 2017, the DCMO issued guidance for certifying officials that addresses the certification requirements. Table 3 provides our rating and assessment of the DCMO's guidance for implementing defense business system certification requirements.
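Because the five statutory review criteria listed above function as a checklist that the approval official applies before a certification decision, their logic can be summarized schematically as follows. This is a hypothetical sketch only: the field names and the rule for mapping shortfalls to a "certify with conditions" outcome are assumptions for illustration, not language from the act or DOD guidance (the act names the three possible outcomes but leaves the judgment to the approval official).

```python
from dataclasses import dataclass

@dataclass
class CertificationReview:
    """Pass/fail representation of the five review criteria described above (illustrative only)."""
    business_process_reengineered: bool       # streamlined and efficient; unique software and interfaces minimized
    bea_compliant: bool                       # complies with the business enterprise architecture (or will via planned modifications)
    valid_requirements_and_viable_plan: bool  # valid, achievable requirements and a viable implementation plan
    cots_tailoring_minimized: bool            # acquisition strategy reduces the need to tailor commercial off-the-shelf systems
    auditability_compliant: bool              # complies with the department's auditability requirements

def certification_decision(review: CertificationReview, conditions_acceptable: bool = False) -> str:
    """Map checklist results to the act's three outcomes (certify, certify with conditions, decline).
    The mapping rule here (all pass -> certify; shortfalls with an accepted remediation path ->
    certify with conditions; otherwise decline) is an assumed rule for illustration."""
    criteria = [
        review.business_process_reengineered,
        review.bea_compliant,
        review.valid_requirements_and_viable_plan,
        review.cots_tailoring_minimized,
        review.auditability_compliant,
    ]
    if all(criteria):
        return "certify"
    if conditions_acceptable:
        return "certify with conditions"
    return "decline to certify"

review = CertificationReview(True, True, True, False, True)
print(certification_decision(review, conditions_acceptable=True))  # certify with conditions
```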
By establishing guidance requiring that defense business systems be certified on the basis of the legislative requirements, the department is better positioned to ensure that a covered system does not proceed into development (or, if no development is required, into production or fielding) without the appropriate due diligence. Further, the department has taken steps which should help ensure that funds are limited to systems in development or sustainment that meet these requirements.

Air Force and Navy Guidance Partially Addresses Certification Requirements; Army Has Not Yet Issued Guidance

The military departments have made mixed progress in developing supporting guidance to assist in making certification decisions regarding systems within their respective areas of responsibility. More specifically, the Air Force has issued supporting guidance that addresses three of the act's five certification requirements, but does not address the remaining two requirements. Navy has issued guidance that addresses two of the certification requirements, partially addresses one requirement, and does not address two requirements. The Army has not yet issued guidance on any of the five certification requirements. Table 4 provides an overview of our assessment of the Air Force's, Navy's, and Army's guidance relative to the NDAA for Fiscal Year 2016 certification requirements. Each department's efforts are further discussed following the table.

Air Force. In April 2017, the Department of the Air Force issued guidance for certifying business systems for fiscal year 2018. The guidance addresses the requirements that a system be certified on the basis of sufficient business process reengineering, business enterprise architecture compliance, and valid requirements and a viable plan to implement them. Specifically, the guidance states that Air Force core defense business systems are required to comply with the business process reengineering guidance prescribed in the DCMO's February 2015 Defense Business Systems Investment Management Process Guidance and for systems to assert compliance with the architecture through DCMO's Integrated Business Framework—Data Alignment Portal. In addition, the guidance states that the department must follow DOD Instruction 5000.75, which requires that certifying officials determine that business requirements are valid and capability efforts have feasible implementation plans.

However, the Air Force guidance does not address the remaining two certification requirements. Officials in the Office of the Air Force DCMO acknowledged that the Air Force's business system certification guidance does not address determining whether the acquisition strategy is designed to eliminate or reduce the need to tailor commercial off-the-shelf systems to meet unique requirements, incorporate unique requirements, or incorporate unique interfaces to the maximum extent practicable, or whether the system is in compliance with DOD's auditability requirements. In May 2017, Air Force DCMO officials stated that the department was in the process of developing guidance. However, as of December 2017, the Air Force had not described specific plans to update its business system certification guidance.

Navy. The Department of the Navy issued guidance in May 2016. This guidance addresses the requirements that a system be certified on the basis of sufficient business process reengineering and business enterprise architecture compliance.
In this regard, the guidance provides guidelines for documenting business process reengineering and requires verification that business process reengineering is complete. The guidance also specifies that defense business systems are to map alignment with the business enterprise architecture in DCMO's Integrated Business Framework–Data Alignment Portal. Navy's guidance partially addresses the certification requirement for determining if a defense business system has valid requirements and a viable plan to implement them. Specifically, the guidance includes information on validating requirements; however, it does not include information on determining if a system has a viable plan to implement the requirements. In addition, Navy's guidance does not address the remaining two certification requirements, which are to determine that the covered defense business system has an acquisition strategy that eliminates or reduces the need to tailor commercial off-the-shelf systems, and that the system is in compliance with DOD's auditability requirements. In August 2017, officials in the Office of the Under Secretary of the Navy (Management) stated that the office was in the process of updating its May 2016 Defense Business System Investment Certification Manual. The officials stated that the goal is to issue interim investment certification guidance by May 2018. As of November 2017, however, Navy had not established a plan for when it expects to publish finalized certification guidance.

Army. The Department of the Army has not issued guidance that addresses any of the act's certification requirements. The Army issued a template that was to be used to develop fiscal year 2018 portfolio review submissions. However, the template does not address any of the certification requirements. Officials in the Army's Office of Business Transformation explained that the Army used DOD DCMO's 2014 guidance to certify its business systems for fiscal year 2017. In May 2017, they stated that the Army was in the process of developing guidance to implement DOD's new instruction. In November 2017, an official in the Army's Office of Business Transformation stated that the office was in the process of completing the guidance and aimed to provide it to the Deputy Under Secretary's office for signature in January 2018. However, the department has not committed to a specific time frame for when the new guidance is expected to be issued.

Without guidance for the certification authority to determine that defense business systems have addressed each of the act's certification requirements, the Air Force, Navy, and Army risk allowing systems to proceed into development or production that do not meet these requirements. In particular, the military departments risk wasting funds on developing and maintaining systems that do not have valid requirements and a viable plan to implement the requirements, introduce unnecessary complexity, or that do not adequately support the Department of Defense's efforts to meet its auditability requirements.

DOD Has Efforts Underway to Improve Its Business Enterprise Architecture, but Its IT Architecture Is Not Complete

According to the NDAA for Fiscal Year 2016, DOD is to develop and maintain a defense business enterprise architecture to guide the development of integrated business processes within the department. In addition, the act states that the business architecture must be consistent with the policies and procedures established by the Director of the Office of Management and Budget.
Among other things, OMB policy calls for agencies to develop an enterprise architecture that describes the current architecture, target architecture, and a transition plan to get to the target architecture. The act also calls for the business architecture to contain specific content, including policies, procedures, business data standards, business information requirements, and business performance measures that are to apply uniformly throughout the department. DOD has developed a business enterprise architecture that is intended to help guide the development of its defense business systems. The department issued version 10 of the business architecture, which is currently being used to support system certification decisions, in February 2013. The business architecture and related documentation include content describing aspects of the current architecture, target architecture, and a transition plan to get to the target architecture. In addition, the business architecture includes content that addresses the act’s requirements. Table 5 provides examples of required content in DOD’s business enterprise architecture. Nevertheless, some content included in version 10 of the business architecture is outdated and incomplete. For example, version 10 of the business architecture’s repository of laws, regulations, and policies was last updated in February 2013, and officials in the Office of the DOD CIO and Office of the DCMO confirmed that they are not current. Further, the department’s March 2017 business architecture compliance guidance stated that not all relevant business data standards are identified in the business architecture. In addition, based on our review, information about performance measures documented in the architecture is incomplete. For example, target values for performance measures associated with acquisition and logistics initiatives are not identified. According to officials in the Office of the DCMO, the department is working to update the business architecture. Specifically, the department has developed version 11 of the business architecture to, in part, replace outdated architecture content. According to the officials, version 11 of the architecture is currently available online, but version 10 remains the official version of the business enterprise architecture used for system certification decisions. The officials stated that the department continues to add content to version 11, and they expect that it will be used as the basis of system certification decisions for fiscal year 2019. In addition, DOD has ongoing work to address a key recommendation we made in July 2015 associated with improving the usefulness of its business architecture. In particular, we reported that the majority of military department portfolio managers that we surveyed believed that the business architecture had not been effective in meeting intended outcomes. For example, only 25 percent of the survey respondents reported that the business architecture effectively enabled DOD to routinely produce timely, accurate, and reliable business and financial information for management purposes. In addition, only 38 percent reported that the business architecture effectively guided the implementation of interoperable defense business systems. As a result, we reported that the architecture had produced limited value and recommended that the department use the results of our survey to determine additional actions that can improve the department’s management of its business enterprise architecture activities. 
In response to our recommendation, DOD identified opportunities to address our survey findings and developed a plan for improving its ability to achieve architecture-related outcomes. DOD's business enterprise architecture improvement plan was signed by the Assistant DCMO in January 2017. However, the department has not yet demonstrated that it has delivered the capabilities described by the plan; thus, we will continue to monitor DOD's progress to fully address this recommendation.

DOD Has Taken Steps to Develop Its IT Enterprise Architecture, but Does Not Have a Plan That Provides a Road Map for Improving the Department's IT and Computing Infrastructure

In addition to the business enterprise architecture, according to the act, the DOD CIO is to develop an IT enterprise architecture. This architecture is to describe a plan for improving the IT and computing infrastructure of the department, including for each of the major business processes. Officials in the Office of the DOD CIO stated that the department considers its information enterprise architecture to be its IT enterprise architecture. The DOD CIO approved version 2.0 of its information enterprise architecture in August 2012. According to DOD documentation, this architecture describes the department's current information enterprise (i.e., information resources, assets, and processes used to share information across the department and with its mission partners) and includes a vision for the target information enterprise; documents required capabilities, and the activities, rules, and services needed to provide them; and includes information for applying and complying with the architecture.

Nevertheless, while the architecture includes content describing the department's current and target information enterprise, which is consistent with OMB guidance, it does not include a transition plan that provides a road map for improving the department's IT and computing infrastructure. Related to this finding, DCMO officials did not agree with our assessment concerning the department's IT enterprise architecture transition plan. In this regard, officials in the Office of the DCMO stated that the department's DOD IT Portfolio Repository includes information for managing efforts to improve IT and computing infrastructure at the system level. According to the repository's data dictionary, this information can include system life cycle start and end dates, as well as information that supports planning for a target environment. However, documentation describing DOD's information enterprise architecture does not identify the DOD IT Portfolio Repository as being part of the architecture. Moreover, it does not include a plan for improving the department's IT and computing infrastructure for each of the major business processes.

Officials in the Office of the CIO acknowledged that the architecture does not include such plans. According to the officials, the department is currently developing version 3.0 of its information enterprise architecture (i.e., its IT enterprise architecture). The officials stated that the department does not currently intend for the architecture to include a plan for improving the department's IT and computing infrastructure that addresses each of the major business processes. They added, however, that there is an effort to ensure that functional areas, such as human resources management, are included.
DCMO officials stated that the department has not defined how the DOD IT enterprise architecture needs to be segmented for each major business process because the infrastructure requirements seem to be similar for each of the processes. Without an architecture that includes a plan for improving its IT and computing infrastructure, including for each of the major business processes, DOD risks not ensuring that stakeholders across the department have a consistent understanding of the steps needed to achieve the department's future vision, agency priorities, potential dependencies among investments, and emerging and available technological opportunities.

DOD Has Not Demonstrated That Its Business and IT Architectures Are Integrated

According to the act, the DOD business enterprise architecture is to be integrated into the DOD IT enterprise architecture. The department's business architecture compliance guide also recognizes that the business architecture is to be integrated with the IT enterprise architecture. However, the department has not demonstrated that it has integrated the business enterprise architecture into the information enterprise architecture. Specifically, the department did not provide documentation associated with either architecture that describes how the two are, or are to be, integrated. The business enterprise architecture compliance guide states that DOD Directive 8000.01 implements the requirement that the two architectures are to be integrated. However, the directive does not address how they are, or are to be, integrated.

Officials in the Offices of the CIO and the DCMO described steps they were taking to coordinate the development of the next versions of the information enterprise architecture (i.e., IT enterprise architecture) and business enterprise architecture. However, these steps were not sufficient to help ensure integration of the two architectures. Specifically, in June 2017, officials in the Office of the DOD CIO stated they were participating in the development of the next version of the business architecture and that the DOD CIO is represented on the Business Enterprise Architecture Configuration Control Board. Officials in the Office of the DCMO confirmed that DOD CIO officials participate on the board. However, officials from the Office of the DCMO said that, until it met in June 2017, the board had not met since 2014. Moreover, documentation of the June 2017 meeting, and a subsequent November 2017 meeting, did not indicate that the board members had discussed integration of the department's business and information enterprise architectures. In addition, officials in the Office of the DCMO reported that the office has not actively participated in the information enterprise architecture working group. Further, our review of meeting minutes from this working group did not identify participation by officials in the Office of the DCMO, or that integration of the architectures was discussed.

The Office of the DCMO described other mechanisms for its sharing of information about architectures with the Office of the DOD CIO. For example, the Office of the DCMO stated that it participates with DOD CIO bodies governing version 3.0 development. Nevertheless, the Office of the DCMO reiterated that technical integration of the architectures has not been designed.
Until DOD ensures that its business architecture is integrated into its IT enterprise architecture, the department may not be able to ensure that its business strategies capitalize on technologies and that its IT infrastructure will support DOD's business priorities and related business strategies.

The Defense Business Council Addressed Legislative Provisions to Provide Advice on Defense Business Systems to the Secretary of Defense

The NDAA for Fiscal Year 2016 requires the Secretary to establish a Defense Business Council, chaired by the DCMO and the DOD CIO, to provide advice to the Secretary on: developing the business enterprise architecture, reengineering the department's business processes, developing and deploying business systems, and developing requirements for business systems.

DOD established the department's Defense Business Council in October 2012, prior to the act. According to its current charter, dated December 2014, the Council is co-chaired by the DCMO and the DOD CIO. In addition, the Council is to serve as the principal governance body for vetting issues related to managing and improving defense business operations. Among other things, it serves as the investment review board for defense business system investments. The Defense Business Council charter also states that the Council was established as a principal supporting tier of governance to the Deputy's Management Action Group. The Deputy's Management Action Group was established by an October 2011 memorandum issued by the Deputy Secretary of Defense. According to information published on DCMO's website, the group was established to be the primary civilian-military management forum that supports the Secretary of Defense, and is to address top department issues that have resource, management, and broad strategic and/or policy implications. The group's primary mission is to produce advice for the Deputy Secretary of Defense in a collaborative environment and to ensure that the group's execution aligns with the Secretary of Defense's priorities. According to the Office of the DCMO, the Defense Business Council determines whether or not to elevate a topic to the Deputy's Management Action Group to address on behalf of the Secretary.

Based on our review of meeting documentation for 27 meetings that the Defense Business Council held between January 2016 and August 2017, the Council discussed the four topics on which the NDAA for Fiscal Year 2016 requires it to provide advice to the Secretary. According to the Office of the DCMO, during the discussions of these topics, the Council did not identify any issues related to the topics that needed to be elevated to the Deputy's Management Action Group. Table 6 identifies the number of meetings in which the Council discussed each topic during this time period. By ensuring that the required business system topics are discussed during Defense Business Council meetings, the department should be positioned to raise issues to the Deputy's Management Action Group, and ultimately, to advise the Secretary of Defense on matters associated with these topics.
DOD Certified Selected Business Systems for Fiscal Year 2017 on the Basis of Earlier Certification Requirements

The NDAA for Fiscal Year 2016 requires that, for any fiscal year in which funds are expended for development or sustainment pursuant to a covered defense business system program, the Secretary of Defense is to ensure that a covered business system not proceed into development (or, if no development is required, into production or fielding) unless the appropriate approval official reviews the system to determine if the system meets five key requirements, as previously discussed in this report. In addition, the act requires that the appropriate approval official certify, certify with conditions, or decline to certify that the system satisfies these five requirements. The department issued DOD Instruction 5000.75, which established business system categories and assigned certifying officials, consistent with the act. Table 7 describes the business system categories and the assigned certifying officials, as defined in DOD Instruction 5000.75.

The DOD DCMO certified the five systems in our sample (which included the military departments' systems) for fiscal year 2017. However, these certifications were issued in accordance with the previous fiscal year's (fiscal year 2016) certification requirements. Those requirements had stipulated that a defense business system program was to be reviewed and certified on the basis of the system's compliance with the business enterprise architecture and appropriate business process reengineering, rather than on the basis of having met all five requirements identified in the NDAA for Fiscal Year 2016. Specifically, DCMO certified the systems on the basis of determining that the systems were in compliance with the business enterprise architecture and had been sufficiently reengineered. However, none of the systems were certified on the basis of a determination that they had valid, achievable requirements and a viable plan for implementing them; had an acquisition strategy to reduce or eliminate the need to tailor commercial off-the-shelf systems; or were in compliance with the department's auditability requirements.

Officials in the Offices of the DOD DCMO, the Air Force DCMO, the Under Secretary of the Navy (Management), and Army Business Transformation told us that the systems were not certified relative to three of the requirements because the department did not issue guidance to reflect changes made by the NDAA for Fiscal Year 2016 in time for the fiscal year 2017 certification process. Prior to the NDAA for Fiscal Year 2016, relevant legislation and DOD guidance only called for annual determinations to be made regarding whether a system complied with the business enterprise architecture and whether appropriate business process reengineering had been conducted. In January 2016, the DCMO issued a memorandum stating that the department planned to issue new guidance and policy to implement the new legislation by the end of February 2016. However, the department did not issue additional guidance addressing the new certification requirements until April 2017. The system certifications, which were required by the act to be completed before systems could spend fiscal year 2017 funds, occurred in August and September 2016. In explaining the delay in issuing new guidance on the certification requirements, officials in the Office of the DCMO stated that the statutory deadline for issuing guidance was December 31, 2016.
They added that, given this statutory deadline, and the start of fiscal year 2017 on October 1, 2016, it was their determination that Congress did not intend for the NDAA for Fiscal Year 2016's certification requirements to be fully implemented before fiscal year 2017 started. DCMO officials stated that they intend for the department to use the certification requirements established by the NDAA for Fiscal Year 2016 for future system certifications. While it was reasonable for the department to use the earlier guidance for its fiscal year 2017 certifications, given that the new guidance had not yet been issued, it will be important going forward that the department certifies business systems on the basis of the certification requirements established in the NDAA for Fiscal Year 2016 and its related guidance addressing these requirements. Certifying systems on the basis of the act's requirements should help ensure that funds are not wasted on developing and maintaining systems that do not have valid requirements and a viable plan to implement the requirements, that introduce unnecessary complexity, or that impede the Department of Defense's efforts to meet its auditability requirements.

Conclusions

Since the NDAA for Fiscal Year 2016 was signed in November 2015, DOD has issued guidance that addresses most provisions of the NDAA for Fiscal Year 2016 related to managing defense business system investments. However, the department has not established policies requiring consideration of sustainability and technological refreshment requirements and the use of best systems engineering practices in the procurement and deployment of its systems. Having these policies would better enable the department to ensure it is efficiently and effectively procuring and deploying its business systems.

In addition, the Air Force, the Navy, and the Army have made mixed progress in issuing guidance to assist in making certification decisions regarding systems within their respective areas of responsibility. Specifically, the Air Force and Navy issued guidance on the certification of business systems that does not fully address new certification requirements, while the Army has not issued any updated guidance for its certifications. As a result, the Air Force, Navy, and Army risk wasting funds on developing and maintaining systems that do not have valid requirements and a viable plan to implement the requirements, introduce unnecessary complexity, or do not adequately support the Department of Defense's efforts to meet its auditability requirements.

Also, DOD has developed an IT architecture, but this architecture does not address the act's requirement that it include a plan for improving the department's IT and computing infrastructure, including for each business process. In addition, DOD's plans for updating its IT architecture do not address how the department intends to integrate its business and IT architectures, as called for by the act. As a result, DOD risks not having a consistent understanding of what is needed to achieve the department's future vision, agency priorities, potential dependencies among investments, and emerging and available technological opportunities.
Recommendations for Executive Action

We are making six recommendations, including three to the Secretary of Defense and one to each of the Secretaries of the Air Force, the Navy, and the Army:

The Secretary of Defense should define a specific time frame for finalizing, and ensure the issuance of (1) policy requiring full consideration of sustainability and technological refreshment requirements for its defense business system investments; and (2) policy requiring that best systems engineering practices are used in the procurement and deployment of commercial systems, modified commercial systems, and defense-unique systems to meet DOD missions. (Recommendation 1)

The Secretary of the Air Force should define a specific time frame for finalizing, and ensure the issuance of guidance for certifying the department's business systems on the basis of (1) having an acquisition strategy designed to eliminate or reduce the need to tailor commercial off-the-shelf systems to meet unique requirements, incorporate unique requirements, or incorporate unique interfaces to the maximum extent practicable; and (2) being in compliance with DOD's auditability requirements. (Recommendation 2)

The Secretary of the Navy should define a specific time frame for finalizing, and ensure the issuance of guidance for certifying the department's business systems on the basis of (1) having a viable plan to implement the system's requirements; (2) having an acquisition strategy designed to eliminate or reduce the need to tailor commercial off-the-shelf systems to meet unique requirements, incorporate unique requirements, or incorporate unique interfaces to the maximum extent practicable; and (3) being in compliance with DOD's auditability requirements. (Recommendation 3)

The Secretary of the Army should define a specific time frame for finalizing, and ensure the issuance of guidance for certifying the department's business systems on the basis of (1) being reengineered to be as streamlined and efficient as practicable, and determining that implementation of the system will maximize the elimination of unique software requirements and unique interfaces; (2) being in compliance with the business enterprise architecture; (3) having valid, achievable requirements and a viable plan to implement the requirements; (4) having an acquisition strategy designed to eliminate or reduce the need to tailor commercial off-the-shelf systems to meet unique requirements, incorporate unique requirements, or incorporate unique interfaces to the maximum extent practicable; and (5) being in compliance with DOD's auditability requirements. (Recommendation 4)

The Secretary of Defense should ensure that the DOD CIO develops an IT enterprise architecture which includes a transition plan that provides a road map for improving the department's IT and computing infrastructure, including for each of its business processes. (Recommendation 5)

The Secretary of Defense should ensure that the DOD CIO and Chief Management Officer work together to define a specific time frame for when the department plans to integrate its business and IT architectures and ensure that the architectures are integrated. (Recommendation 6)

Agency Comments and Our Evaluation

DOD provided written comments on a draft of this report, which are reprinted in appendix III. In the comments, the department stated that it concurred with three of the recommendations and partially concurred with three of the recommendations.
DOD also provided evidence that it has fully addressed one of the recommendations. In addition, DOD provided technical comments that we incorporated in the report, as appropriate. DOD stated that it concurred with our first recommendation, which called for it to define a specific time frame for finalizing, and ensure the issuance of, policies that fully address provisions in the NDAA for Fiscal Year 2016. Furthermore, the department stated that it had complied with the recommendation. Specifically, the department stated that it had published its defense business systems investment management guidance in April 2017. This guidance identifies DOD’s Financial Management Regulation, Volume 2B, Chapter 18 “Information Technology” and supporting IT budget policy and guidance as well as DOD Instruction 5000.75 and supporting acquisition policy and guidance. The department stated that the Financial Management Regulation specifically addresses the requirement for sustainability and technological refreshment requirements for its defense business system investments. While DOD reported taking this action, we do not agree that the department has complied with our recommendation. In reviewing the department’s guidance, we found that none of the cited management documents includes a policy requiring consideration of sustainability and technological refreshment requirements for DOD’s defense business systems. Further, none of these documents includes a policy requiring that best systems engineering practices be used in the procurement and deployment of commercial, modified-commercial, and defense unique systems. Without a policy requiring full consideration of sustainability and technological refreshment requirements for its defense business system investments, the department may not be able to ensure that it has a full understanding of the costs associated with these requirements. Further, without a policy requiring the use of best systems engineering practices in systems procurement and deployment, the department may be limited in its ability to effectively balance meeting system cost and performance objectives. Accordingly, we continue to believe that our recommendation is valid. The department concurred with our second recommendation, that the Secretary of the Air Force define a specific time frame for finalizing, and ensure the issuance of, guidance that fully addresses certification requirements, in accordance with the NDAA for Fiscal Year 2016. Moreover, the department stated that the Air Force has complied with the recommendation. Specifically, DOD stated that Air Force Manual 63-144 details the consideration of using existing commercial solutions without modification or tailoring. However, while the manual provides a foundation on which the Air Force can build, it is not sufficient to fully address our recommendation because it does not include guidance on certifying business systems on the basis of having an acquisition strategy that eliminates or reduces the need to tailor commercial-off-the-shelf systems. In addition, the department did not demonstrate that the Air Force has issued guidance for certifying business systems on the basis of being in compliance with DOD’s auditability requirements. Rather, the Air Force stated that it has pending guidance that addresses the acquisition strategy and auditability requirements. We plan to evaluate the guidance to determine the extent to which it addresses our recommendation after it is issued. 
The department partially agreed with our third recommendation, that the Secretary of the Navy define a specific time frame for finalizing, and ensure the issuance of, guidance that fully addresses certification requirements. Specifically, DOD stated that Navy agreed to issue guidance. Subsequently, on March 8, 2018, Navy issued its updated guidance. However, Navy disagreed with the recommendation, as written, and suggested that GAO revise the recommendation to state that “The Secretary of the Navy should ensure guidance is issued according to established timeline for certifying the department’s business systems. . .” According to Navy, this change would support alignment with the timeline for certifying the department’s business systems driven by the Chief Management Officer investment review timeline. Based on our analysis, we found the guidance that Navy issued to be consistent with our recommendation. Thus, we plan to close the recommendation as fully implemented. We have also annotated this report, where appropriate, to explain that the Navy issued guidance while the draft of this report was at the department for comment. On the other hand, we did not revise the wording of our recommendation, as we believe it appropriately reflected the importance of Navy taking action to ensure the issuance of its guidance. The department stated that it concurred with our fourth recommendation, which called for the Secretary of the Army to define a specific time frame for finalizing, and ensure the issuance of, guidance for certifying the department’s business systems on the basis of the certification requirements. Furthermore, on March 23, 2018, the Army issued its guidance. However, because of the timing of this report relative to when the Army provided its guidance to us (on March 27, 2018), we have not yet completed an assessment of the guidance. We have annotated this report, where appropriate, to reflect the Army’s action on our recommendation. The department stated that it partially concurred with our fifth recommendation. This recommendation called for the DOD CIO to develop an IT enterprise architecture which includes a transition plan that provides a road map for improving the department’s IT and computing infrastructure, including for each of its business processes. Toward this end, the department agreed that the DOD CIO should develop an architecture that enables improving the department’s IT and computing infrastructure for each of its business processes. However, the department also stated that the recommendation is not needed because the goal is already being accomplished by a set of processes, organizations, protocols, and architecture data. For example, the department described processes and relationships between the Office of the DOD CIO and the Office of the Chief Management Officer and the boards that support the department’s business and IT enterprise architectures. In particular, the department stated that information enterprise architecture data relevant to the business enterprise are accessed via the DOD Information Enterprise Architecture Data Selection Wizard and imported into the business enterprise architecture. The department further stated that, if the business capability acquisition cycle process indicates a need to improve the IT or computing infrastructure, the Office of the Chief Management Officer has a protocol to initiate a proposal to change the information enterprise architecture. 
We agree that the department’s processes, organizations, protocols, and architecture data are keys to successful IT management. However, during the course of our audit, we found that documentation describing DOD’s IT architecture did not include a plan for improving the department’s IT and computing infrastructure for each of the major business processes. Moreover, officials in the Office of the CIO acknowledged that the architecture did not include such a plan. Without a transition plan that provides a road map for improving the department’s IT and computing infrastructure, including for each of its business processes, it will be difficult for the department to rely on its personnel to timely and proactively manage and direct modernization efforts of such a magnitude as DOD’s systems modernization efforts. Further, without such a plan, DOD risks not being able to ensure that stakeholders across the department have a consistent understanding of the steps needed to achieve the department’s future vision, agency priorities, potential dependencies among investments, and emerging and available technological opportunities. Thus, we maintain that the department should fully implement our recommendation. The department stated that it partially concurred with our sixth recommendation, that the DOD CIO and DCMO work together to define a specific time frame for when the department plans to integrate its business and IT architectures. In particular, the department stated that it agrees that the DOD CIO and Chief Management Officer should work together to establish a time frame and ensure coordination and consistency of the IT and business architectures. However, the department disagreed with the use and intent of the term “integrate,” as stated in the recommendation, although it did not explain the reason for this disagreement. Instead, it proposed that we change our recommendation to read “The GAO recommends the Secretary of Defense ensure the DoD CIO and CMO work together to define a specific timeline for coordinating its business and IT architectures to achieve better enterprise alignment among the architectures.” We agree that it is important to achieve coordination and consistency between the business and IT architectures. However, the department did not provide documentation associated with either architecture that describes how the two are, or are to be, integrated, as called for by the NDAA for Fiscal Year 2016 and DOD guidance. Integrating the architectures would help ensure that business strategies better capitalize on existing and planned technologies and that IT solutions and infrastructure support business priorities and related business strategies. Thus, we continue to believe that our recommendation is valid. However, we have updated the recommendation to state that the DOD CIO and the Chief Management Officer should work together. We made this change because, effective February 1, 2018, the Secretary of Defense eliminated the DCMO position and expanded the role of the Chief Management Officer, in accordance with the National Defense Authorization Act for Fiscal Year 2018. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; and the Director of the Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff members have any questions about this report, please contact me at (202) 512-4456 or harriscc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix IV.
Appendix I: Objective, Scope, and Methodology
Our objective was to determine the actions taken by the Department of Defense (DOD) to comply with provisions included in the National Defense Authorization Act for Fiscal Year 2016 (NDAA). These provisions require DOD to perform certain activities aimed at ensuring that its business system investments are managed efficiently and effectively. Specifically, we determined to what extent DOD has
1. established guidance for effectively managing its defense business system investments;
2. developed and maintained a defense business enterprise architecture and information technology (IT) enterprise architecture, in accordance with relevant laws and Office of Management and Budget (OMB) policies and guidance;
3. used the Defense Business Council to provide advice to the Secretary on developing the business enterprise architecture, reengineering the department’s business processes, developing and deploying business systems, and developing requirements for business systems; and
4. ensured that covered business systems are reviewed and certified in accordance with the act.
To address the extent to which DOD has established guidance for effectively managing defense business system investments, we obtained and analyzed the department’s guidance, as well as the guidance established by the Departments of the Air Force, Army, and Navy, for managing defense business systems relative to the act’s requirements. Specifically, the NDAA for Fiscal Year 2016 required the Secretary of Defense to issue guidance, by December 31, 2016, to provide for the coordination of and decision making for the planning, programming, and control of investments in covered defense business systems. The act required this guidance to include the following six elements:
Policy to ensure DOD business processes are continuously reviewed and revised to implement the most streamlined and efficient business processes practicable and eliminate or reduce the need to tailor commercial off-the-shelf systems to meet or incorporate requirements or interfaces that are unique to the department.
Process to establish requirements for covered defense business systems.
Mechanisms for planning and controlling investments in covered defense business systems, including a process for the collection and review of programming and budgeting information for covered defense business systems.
Policy requiring the periodic review of covered defense business systems that have been fully deployed, by portfolio, to ensure that investments in such portfolios are appropriate.
Policy to ensure full consideration of sustainability and technological refreshment requirements, and the appropriate use of open architectures.
Policy to ensure that best acquisition and systems engineering practices are used in the procurement and deployment of commercial systems, modified commercial systems, and defense-unique systems to meet DOD missions.
We assessed the February 2017 DOD Instruction 5000.75, Business Systems Requirements and Acquisitions, and April 2017 defense business system investment management guidance, which the department issued to address the act’s requirements.
In addition, we assessed the department’s Financial Management Regulation and directive on its planning, programming, budgeting, and execution process, which the department stated also address the act’s provisions. We also assessed DOD’s guidance for managing business system investments relative to the act’s business system certification requirements. The act requires that the Secretary of Defense ensure that a covered defense business system not proceed into development (or, if no development is required, into production or fielding) unless the appropriate approval official determines that the system meets five requirements. The act further requires for any fiscal year in which funds are expended for development or sustainment pursuant to a covered defense business system program, the appropriate approval official to review the system to determine if the system: has been, or is being, reengineered to be as streamlined and efficient as practicable, and whether the implementation of the system will maximize the elimination of unique software requirements and unique interfaces; is in compliance with the business enterprise architecture or will be in compliance as a result of planned modifications; has valid, achievable requirements, and a viable plan for implementing those requirements (including, as appropriate, market research, business process reengineering, and prototyping activities); has an acquisition strategy designed to eliminate or reduce the need to tailor commercial off-the-shelf systems to meet unique requirements, incorporate unique requirements, or incorporate unique interfaces to the maximum extent practicable; and is in compliance with the department’s auditability requirements. We compared Office of the Deputy Chief Management Office (DCMO) certification guidance with the act’s certification requirements. In addition, we compared the guidance established by the Departments of the Air Force, the Army, and the Navy for certifying their business systems with the act’s certification requirements. We also interviewed cognizant officials responsible for managing defense business system investments at DOD, including the military departments. Specifically, we interviewed officials in the Office of the DCMO, the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, the Office of the Chief Information Officer (CIO), and the Offices of the CMOs in the Departments of the Air Force, Army, and Navy. To determine the extent to which DOD has developed and maintained a defense business enterprise architecture and IT enterprise architecture, in accordance with relevant laws and OMB policy and guidance, we assessed the business enterprise architecture against the relevant laws and OMB policy and guidance; the IT enterprise architecture against the relevant laws and OMB policy and guidance; and the department’s efforts to integrate its business and IT architectures against the act’s requirement. To determine the extent to which the department has developed and maintained a business enterprise architecture in accordance with relevant laws and OMB policy and guidance, we reviewed version 10 of its business enterprise architecture, which was released in February 2013, and related information relative to the act’s requirements; U.S. Code, Title 44, Section 3601, which defines an enterprise architecture; and OMB policy and guidance. We also reviewed version 11 of the architecture to determine the extent to which it differed from version 10. 
Further, we reviewed the department’s business enterprise architecture improvement plan, which it developed in response to a recommendation we made in July 2015. Specifically, we recommended that the department use the results of our portfolio manager survey to determine additional actions that could improve the department’s management of its enterprise architecture activities. In response to our recommendation, the department developed and approved a plan in January 2017. We assessed the extent to which the department had delivered the planned capabilities relative to the plan. We also reviewed the extent to which the delivery dates of the three planned capabilities and associated tasks changed over time relative to the plan. To assess the extent to which the department developed and maintained an IT enterprise architecture in accordance with relevant laws and OMB policy and guidance, we reviewed content from the department’s IT enterprise architecture and compared it with requirements from the act, U.S. Code, Title 44, Section 3601, and OMB policy and guidance. Specifically, we reviewed version 2.0 of the department’s information enterprise architecture, which was released in August 2012, relative to the act’s requirement for the DOD CIO to develop an IT enterprise architecture that is to describe a plan for improving the IT and computing infrastructure of the department, including for each of the major business processes. We reviewed volumes I and II of the information enterprise architecture and the four enterprise-wide reference architectures to determine if the architecture described a plan for improving the IT and computing infrastructure of the department, as called for by the act. We also reviewed whether the architecture included content that described the current and the target environments, and a transition plan to get from the current to the target environment, consistent with OMB policy and guidance. To determine the extent to which the department has integrated its business and IT architectures, as required by the act, we reviewed DOD Directive 8000.01, Management of the Department of Defense Information Enterprise. We also reviewed meeting documentation from the information enterprise architecture working group responsible for the development of an updated architecture. In addition, we reviewed meeting documentation from the Business Enterprise Architecture Configuration Control Board to identify any discussions among CIO and DCMO officials regarding integration of the two architectures, as well as the level of participation by both parties. Finally, we interviewed officials in the Office of the DCMO and the Office of the CIO about efforts to develop and maintain a business enterprise architecture, develop an IT enterprise architecture, and integrate the business and IT architectures. To determine the extent to which the department has used the Defense Business Council to provide advice to the Secretary of Defense on developing the business enterprise architecture, reengineering the department’s business processes, developing and deploying business systems, and developing requirements for business systems, in accordance with the act, we analyzed the department’s December 2014 Defense Business Council Charter and April 2017 defense business systems investment management guidance. 
We compared information in the charter and guidance to the requirement that the Secretary establish the Defense Business Council to advise the Secretary on the required defense business system topics. In addition, we obtained and analyzed meeting summaries and briefings for 27 Defense Business Council meetings that took place from January 2016 through August 2017. Specifically, we assessed the frequency with which the meetings held during this time period addressed the required topics. We chose this time period because 2016 was the first calendar year following the enactment of the NDAA for Fiscal Year 2016. Further, we chose August 2017 as our end date because it was the last month’s data that we could reasonably expect to obtain and review within our reporting time frame. We also interviewed officials in the Offices of the DCMO and CIO about the Defense Business Council and the Deputy’s Management Action Group, which is the governance entity to which the Council reports. To determine the extent to which DOD has ensured that covered business systems are reviewed and certified in accordance with the act, we reviewed a nongeneralizable sample of business systems from DOD’s two categories of covered defense business systems that require certification. To select the sample, we considered Category I systems, which were systems that were expected to have a total amount of budget authority of more than $250 million over the period of the current future- years defense program, and Category II systems, which were systems that were expected to have a total amount of budget authority of between $50 million and $250 million over the period of the future-years defense program. We further categorized the Category II systems into four groups—those owned by the Air Force, the Army, Navy, and the remaining DOD components. We selected one system with the highest expected cost over the course of the department’s future-years defense program from each group. This resulted in our selection of five systems: one Category I system, one Category II system from each military department, and one Category II system from the remaining DOD components. We reviewed, respectively, DOD’s Healthcare Management System Modernization Program; Air Force’s Maintenance, Repair and Overhaul initiative; Army’s Reserve Component Automation System; Navy’s Electronic Procurement System; and the Defense Logistics Agency’s Defense Agencies Initiative Increment 2. We determined that the number of systems we selected was sufficient for our evaluation. For each system, we assessed the extent to which it had been certified on the basis of the five certification requirements in the act. 
Specifically, we evaluated investment decision memos and certification assertions to determine if each system had been certified according to the act’s requirements, which include ensuring that the system had been, or was being, reengineered to be as streamlined and efficient as practicable, and the implementation of the system would maximize the elimination of unique software requirements and unique interfaces; was in compliance with the business enterprise architecture or would be in compliance as a result of planned modifications; had valid, achievable requirements, and a viable plan for implementing those requirements; had an acquisition strategy designed to eliminate or reduce the need to tailor commercial off- the-shelf systems to meet unique requirements, incorporate unique requirements, or incorporate unique interfaces to the maximum extent practicable; and was in compliance with the department’s auditability requirements. We did not determine whether the certification assertions were valid. For example, we did not evaluate business process reengineering activities to determine if they were sufficient. We also interviewed DOD DCMO and military department officials about the certification of these systems. To determine the reliability of the business system cost data used to select the systems, we reviewed system documentation for the three systems DOD uses to store data, which include the Defense Information Technology Investment Portal, the DOD Information Technology Portfolio Repository, and the Select and Native Programming-Information Technology system. In this regard, we requested and reviewed department responses to questions about the systems and about how the department ensures the quality and reliability of the data. In addition, we requested and reviewed documentation related to the systems (e.g., data dictionaries, system instructions, and user training manuals) and reviewed the data for obvious issues, including missing or questionable values. We also reviewed available reports on the quality of the inventories (e.g., inspector general reports). We found the data to be sufficiently reliable for our purpose of selecting systems for evaluation. We conducted this performance audit from January 2017 to March 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Prior GAO Reports on Department of Defense Business System Modernization Since 2005, we have issued 11 reports assessing DOD’s actions to respond to business system modernization provisions contained in U.S. Code, Title 10, Section 2222. The reports are listed below. DOD Business Systems Modernization: Additional Action Needed to Achieve Intended Outcomes, GAO-15-627 (Washington, D.C.: July 16, 2015). Defense Business Systems: Further Refinements Needed to Guide the Investment Management Process, GAO-14-486 (Washington, D.C. May 12, 2014). DOD Business Systems Modernization: Further Actions Needed to Address Challenges and Improve Accountability, GAO-13-557 (Washington, D.C.: May 17, 2013). DOD Business Systems Modernization: Governance Mechanisms for Implementing Management Controls Need to Be Improved, GAO-12-685 (Washington, D.C.: June 1, 2012). 
Department of Defense: Further Actions Needed to Institutionalize Key Business System Modernization Management Controls, GAO-11-684 (Washington, D.C.: June 29, 2011). Business Systems Modernization: Scope and Content of DOD’s Congressional Report and Executive Oversight of Investments Need to Improve, GAO-10-663 (Washington, D.C.: May 24, 2010). DOD Business Systems Modernization: Recent Slowdown in Institutionalizing Key Management Controls Needs to Be Addressed, GAO-09-586 (Washington, D.C.: May 18, 2009). DOD Business Systems Modernization: Progress in Establishing Corporate Management Controls Needs to Be Replicated Within Military Departments, GAO-08-705 (Washington, D.C.: May 15, 2008). DOD Business Systems Modernization: Progress Continues to Be Made in Establishing Corporate Management Controls, but Further Steps Are Needed, GAO-07-733 (Washington, D.C.: May 14, 2007). Business Systems Modernization: DOD Continues to Improve Institutional Approach, but Further Steps Needed, GAO-06-658 (Washington, D.C.: May 15, 2006). DOD Business Systems Modernization: Important Progress Made in Establishing Foundational Architecture Products and Investment Management Practices, but Much Work Remains, GAO-06-219 (Washington, D.C.: November 23, 2005). Appendix III: Comments from the Department of Defense Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact above, individuals making contributions to this report include Michael Holland (Assistant Director), Cheryl Dottermusch (Analyst in Charge), John Bailey, Chris Businsky, Camille Chaires, Nancy Glover, James Houtz, Anh Le, Tyler Mountjoy, Monica Perez-Nelson, Priscilla Smith, and Adam Vodraska.
Why GAO Did This Study
DOD spends billions of dollars each year on systems that support its key business areas, such as personnel and logistics. For fiscal year 2018, DOD reported that these business system investments are expected to cost about $8.7 billion. The NDAA for Fiscal Year 2016 requires DOD to perform activities aimed at ensuring that business system investments are managed efficiently and effectively, to include taking steps to limit their complexity and cost. The NDAA also includes a provision for GAO to report every 2 years on the extent to which DOD is complying with the act's provisions on business systems. For this report, GAO assessed, among other things, the department's guidance for managing defense business system investments and its business and IT enterprise architectures (i.e., descriptions of DOD's current and future business and IT environments and plans for transitioning to future environments). To do so, GAO compared the department's system certification guidance and architectures to the act's requirements. GAO also interviewed cognizant DOD officials.
What GAO Found
The Department of Defense (DOD) has made progress in complying with most legislative provisions for managing its defense business systems, but additional actions are needed. For example, the National Defense Authorization Act (NDAA) for Fiscal Year 2016 required DOD and the military departments to issue guidance to address five requirements for reviewing and certifying the department's business systems. While DOD has issued guidance addressing all of these requirements, as of February 2018, the military departments had shown mixed progress.
[Legend and source note for an accompanying table that is not reproduced here: ● Fully addressed: The department provided evidence that it fully addressed this requirement. ◐ Partially addressed: The department provided evidence that it addressed some, but not all, portions of this requirement. ◌ Not addressed: The department did not provide any evidence that it addressed this requirement. Source: GAO analysis of Department of Defense documentation. | GAO-18-130]
The military departments' officials described plans to address the gaps in their guidance; however, none had defined when planned actions are to be completed. Without guidance that addresses all five requirements, the military departments risk developing systems that, among other things, are overly complex and costly to maintain. DOD has efforts underway to improve its business enterprise architecture, but its information technology (IT) architecture is not complete. Specifically, DOD's business architecture includes content called for by the act. However, efforts to improve this architecture to enable the department to better achieve outcomes described by the act, such as routinely producing reliable business and financial information for management, continue to be in progress. In addition, DOD is updating its IT enterprise architecture, which describes, among other things, the department's computing infrastructure. However, the architecture lacks a road map for improving the department's IT and computing infrastructure for each of the major business processes. Moreover, the business and IT enterprise architectures have yet to be integrated, and DOD has not established a time frame for when it intends to do so. As a result, DOD lacks assurance that its IT infrastructure will support the department's business priorities and related business strategies.
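The ratings in the legend above reduce to a simple rule applied to each military department and each certification requirement: fully addressed when evidence covers every portion of the requirement, partially addressed when it covers some portions, and not addressed when it covers none. The short Python sketch below only illustrates that rating logic as described in the legend; the function name and example inputs are hypothetical and do not represent GAO's actual assessment data or tools.

```python
# Illustrative mapping from the portions of a certification requirement that a
# department's guidance addresses to the rating symbols in the legend above.
def rate_requirement(portions_addressed: int, portions_total: int) -> str:
    """Return the legend symbol for one department against one requirement."""
    if portions_total <= 0:
        raise ValueError("a requirement must have at least one portion")
    if portions_addressed >= portions_total:
        return "● Fully addressed"
    if portions_addressed > 0:
        return "◐ Partially addressed"
    return "◌ Not addressed"

# Hypothetical example: guidance that covers one of a requirement's two portions.
print(rate_requirement(1, 2))  # -> ◐ Partially addressed
```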
What GAO Recommends GAO is making six recommendations, including that DOD and the military departments establish time frames for, and issue, required guidance; and that DOD develop a complete IT architecture and integrate its business and IT architectures. DOD concurred with three and partially concurred with three recommendations. GAO continues to believe all of the recommendations are warranted as discussed in this report.
Background The IG Act establishes OIGs both at select major federal agencies, called establishments, and at some smaller agencies, called designated federal entities (DFE), to conduct oversight of their programs and operations. The IG Act also sets out, among other things, (1) the duties and responsibilities of each IG with respect to the entity within which its office is established; (2) how IGs are appointed, whether by the President with the advice and consent of the Senate, or by the head of the DFE; and (3) the processes for removing an IG. Duties, Responsibilities, and Authorities under the IG Act The IG Act established OIGs to be independent and objective units to (1) conduct and supervise audits and investigations relating to the programs and operations of government establishments; (2) provide leadership and coordination and recommend policies for activities designed to promote economy, efficiency, and effectiveness in the administration of and to prevent and detect fraud and abuse in such programs and operations; and (3) provide a means for keeping the head of the agency and Congress fully and currently informed about problems and deficiencies relating to the administration of such programs and operations and the necessity for and progress of corrective action. IGs covered by the IG Act have been granted broad oversight authority, including to conduct, supervise, and coordinate audits and investigations; directly access the records and information related to the applicable agency’s programs and operations; request assistance from other federal, state, and local government agencies; subpoena information and documents; administer oaths when conducting interviews; hire staff and manage their own resources; and receive and respond to complaints from agency employees, whose identities are to be protected. In addition to their duties, responsibilities, and authorities in conducting their oversight work, IGs derive independence through numerous provisions in the IG Act. These provisions include the following: the requirement that IGs be appointed without regard to political affiliation and solely on the basis of integrity and demonstrated ability; the authority to select, appoint, and employ OIG officers and employees, as noted above; the authority of IGs to report violations of law directly to the Department of Justice; the requirement for agency heads to transmit the IGs’ semiannual reports of their activities to Congress without alteration; the authority of IGs to perform any audit or investigation without interference from the agency head or others except under certain conditions specified by the act; and the requirement for the President or the agency head to communicate to Congress the reasons for removing an IG. IGs Established by the IG Act and the Appointment Process The IG Act establishes the basis on which an IG is to be appointed; which OIGs are required to have presidentially appointed, Senate confirmed (PAS) IGs; and which are DFE OIGs, with IGs appointed by the heads of the agencies. For the purposes of the IG Act, subject to some specifically enumerated exceptions, the head of the DFE is the DFE’s board or commission, or if an entity does not have a board or commission, any person or persons designated by statute as the head of the DFE. Of the 64 active IG offices established under the IG Act, 32 have PAS IGs and 32 have DFE IGs. 
Both PAS and DFE IGs are required to be appointed without regard to political affiliation and solely on the basis of integrity and demonstrated ability in accounting, auditing, financial analysis, law, management analysis, public administration, or investigations. See table 1 for a list of PAS and DFE agencies as designated by the IG Act. The process for appointing PAS IGs generally has three main steps: (1) President’s selection and nomination, (2) Senate’s evaluation and confirmation, and (3) President’s official appointment. CIGIE assists the White House Office of Presidential Personnel (OPP) in the vetting of candidates for the IG nomination process. According to CIGIE officials, CIGIE’s Candidate Recommendations Panel receives résumés for potential candidates in various ways, including submissions from interested candidates through a link on the CIGIE website. The CIGIE panel also proactively reaches out to potential candidates who members of this panel believe would be good choices for IG positions. According to a CIGIE official, during the prior administration, the panel reviewed résumés from potential IG candidates and sent the résumés of those most qualified to the White House OPP for its process. Under the current administration, the CIGIE panel conducts interviews of potential IG candidates in addition to reviewing résumés, and then refers those candidates that the panel deems the most qualified to the White House OPP. CIGIE’s panel assesses potential candidates’ leadership philosophy and skills, as well as their understanding of the independent, non-partisan role of an IG. PAS IGs may be removed from office by the President, who must communicate the reasons for removal in writing to both Houses of Congress not later than 30 days before the removal. A DFE IG is appointed by the head of the entity in accordance with the applicable laws and regulations governing appointments within that entity. DFE IGs do not require presidential appointment or Senate confirmation. DFE IGs may be removed from office by the agency heads, or for an entity led by a board or a commission, removal requires written concurrence of a two-thirds majority of the board or commission. Similar to the President removing a PAS IG, the head of the entity must communicate the reasons for removal in writing to both Houses of Congress not later than 30 days before the removal. After a PAS IG retires or otherwise leaves office, the Federal Vacancies Reform Act of 1998 (Vacancies Act) instructs the official previously serving as first assistant to the vacant position to perform the duties of that position in an acting capacity, absent other action by the President. For DFE OIGs, acting IGs may be appointed according to laws, regulations, and policies governing appointments for each agency. Neither the IG Act nor the Vacancies Act places limits on the authority of acting IGs (relative to that of officially appointed IGs) to carry out the statutory responsibilities of the IG. However, the IG Act’s requirement for congressional notification prior to removal of a permanent IG does not apply to an acting IG. IG Vacancies as of Fiscal Year 2017 and the Number and Duration of IG Vacancies for Fiscal Years 2007 through 2016 As of September 30, 2017, there were 12 IG vacancies in the 64 IG Act offices. 
Over the 10-year period covering fiscal years 2007 through 2016, the total number of IG vacancies varied, ranging from a low of 6 vacancies as of the end of fiscal year 2007 to a high of 11 vacancies as of the end of fiscal years 2009, 2014, and 2016. In addition, some OIGs experienced prolonged continuous vacancies ranging from over 1 year to approximately 6 years.
Twelve IG Positions Were Vacant as of September 30, 2017
As of September 30, 2017, there were 12 IG vacancies consisting of 10 vacancies in PAS IG positions and 2 in DFE IG positions, as shown in table 2. Two of these vacancies had presidential nominations that were awaiting Senate evaluation as of September 30, 2017. During fiscal year 2017, four OIGs had an IG position that became vacant: Small Business Administration, Federal Election Commission, Department of Housing and Urban Development, and Tennessee Valley Authority.
Number of IG Vacancies Varied from Fiscal Years 2007 through 2016
For the 10-year period from October 1, 2007, through September 30, 2016, the total number of IG vacancies at the ends of the fiscal years ranged from 6 to 11 vacancies, as shown in figure 1. For the PAS IGs, the number of IG vacancies increased from 3 at the end of fiscal year 2007 to 9 at the end of fiscal year 2016. For DFE IGs, the number of IG vacancies ranged from 0 to 4 vacancies at the ends of the fiscal years during the 10-year period.
The Cumulative Duration of IG Vacancies Ranged from Less Than 1 Month to Almost 6 Years for Fiscal Years 2007 through 2016
From October 1, 2006, through September 30, 2016, 53 of the 64 IG Act offices experienced vacancies, as shown in figure 2. Of the 32 PAS IGs, 26 experienced at least one vacancy during the 10-year period with the cumulative duration ranging from 25 days to 5 years and 258 days. Of the 32 DFE IGs, 27 experienced at least one vacancy during the 10-year period with the cumulative duration ranging from 13 days to 3 years and 67 days. Of the 26 PAS IGs that had vacancies during the 10-year period from fiscal years 2007 through 2016, 20 experienced at least one vacancy with a cumulative duration of more than 1 year, and for 11 of these IGs the cumulative vacancy period was over 3 years, as shown in figure 3. In addition, 5 of the 20 agencies with a cumulative IG vacancy of 1 year or more were the result of the agency experiencing two or more periods of IG vacancy over the 10-year period. The Department of State experienced the longest period of continuous PAS IG vacancy during the 10-year period, with 5 years and 258 days without a permanent IG. The Department of State IG vacancy began on January 16, 2008, and no nomination was made by the President until June 27, 2013. The nominee was confirmed by the Senate on September 17, 2013, and the vacancy ended on September 30, 2013. The Department of the Interior experienced the second longest PAS IG vacancy during the 10-year period, with 4 years and 273 days without a permanent IG as of the end of fiscal year 2016, and the vacancy remained as of the end of fiscal year 2017. The Department of the Interior IG vacancy began on January 1, 2012. The acting IG was nominated by the President on June 8, 2015. The nomination was received in the Senate and referred to the Committee on Energy and Natural Resources, which held a hearing on October 20, 2015.
The nomination was returned to the President on January 3, 2017, under the provisions of a Senate rule that require nominations that are not confirmed or rejected during the congressional session be returned to the President. Once returned, the Senate will not consider the nomination until the President provides the Senate a new nominee. Other PAS IGs experienced several vacancies throughout the 10-year period. For example, the Department of Defense OIG had four periods of vacancy from fiscal years 2007 through 2016, two of them 1 year or longer, and one that began in January 2016 and remained vacant as of September 30, 2016. Of the 27 DFE IG offices that experienced IG vacancies during the 10- year period from fiscal years 2007 through 2016, 12 experienced at least one vacancy with a cumulative duration of more than 1 year as shown in figure 4. In addition, 5 of the 12 agencies with a cumulative IG vacancy of 1 year or more were the result of the agency experiencing two or more periods of IG vacancy over the 10-year period. The U.S. International Trade Commission (USITC) experienced the longest continuous DFE IG vacancy during the 10-year period, with 3 years and 67 days without a permanent IG. The position was filled and the vacancy ended on December 6, 2009. In fiscal year 2011, we reported that the USITC OIG lacked an appointed IG and adequate budget and staff resources for fiscal years 2005 through 2009, which contributed significantly to the OIG’s limited oversight of USITC. We recommended that the Chairman of USITC revise formal orientation information provided to the commissioners to include sections on, among other things, the responsibilities of the Chairman to maintain an appointed IG. USITC implemented these recommendations. The National Archives and Records Administration experienced the second longest DFE IG vacancy during the 10-year period, with 2 years and 190 days without a permanent IG. The vacancy started when the IG was placed on administrative leave, which lasted from September 14, 2012, until August 9, 2014. The National Archives and Records Administration was not able to replace the IG during this time. The position was eventually filled on March 23, 2015. Acting IG, OIG Employee, and Permanent IG Views on the Impact of IG Vacancies, and Permanent IG Suggestions for Improving the Appointment Process We surveyed the acting IGs and OIG employees who worked under an acting IG among the 64 active OIGs established under the IG Act and asked for their views on the impact that having an acting IG has on an OIG’s ability to carry out its duties and responsibilities. While overall the survey responses indicated that having an acting IG had no impact on the OIGs’ ability to perform their statutory functions, responses varied in areas related to (1) planning and conducting work, (2) interacting with agency management, and (3) managing the OIG and personnel. In addition, a number of survey responses also pointed to challenges or positive outcomes in their experiences of working under an acting IG, and certain permanent IGs provided suggestions for improvements in the IG appointment process. For details on our survey methodology, see appendix I. 
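The cumulative durations reported above are date arithmetic: the length of each vacancy spell from the day the position became vacant through the day it was filled, summed across an office's spells and expressed as years and days. The minimal Python sketch below illustrates that calculation for the Department of State example; the function name is hypothetical, and counting both endpoint days as vacant is an assumption made here because it reproduces the reported figure of 5 years and 258 days, not a convention stated in the report.

```python
from datetime import date

def vacancy_duration(start, end):
    """Length of a single vacancy spell, expressed as (years, days).
    Counts both the first and last vacant day (an assumed convention)."""
    years = end.year - start.year
    anniversary = date(start.year + years, start.month, start.day)
    if anniversary > end:
        years -= 1
        anniversary = date(start.year + years, start.month, start.day)
    days = (end - anniversary).days + 1  # +1 counts the final day as vacant
    return years, days

# Department of State: vacancy began January 16, 2008, and ended September 30, 2013.
years, days = vacancy_duration(date(2008, 1, 16), date(2013, 9, 30))
print(f"{years} years and {days} days")  # -> 5 years and 258 days
```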
Acting IG and OIG Employee Views on the Impact of IG Vacancies Views on the Impact of IG Vacancies on the OIG’s Ability to Plan and Conduct Work Acting IGs: When asked whether, during their tenure as acting IGs, the vacancy had a positive impact, negative impact, or no impact on several areas related to the OIG’s ability to plan and conduct work, overall, at least eight of the nine acting IGs indicated that having an acting IG had no impact on the OIG’s ability to plan and conduct work. Table 3 summarizes the responses from the acting IGs related to the OIG’s ability to plan and conduct audit work. One of the nine acting IGs reported that the vacancy had a positive impact on developing comprehensive work plans for audits, investigations, and other OIG work, as well as addressing high-risk and high-priority issues. OIG employees: As shown in figure 5, the estimated percentage of OIG employees who worked under an acting IG who believe this has no impact ranged by question from 49 percent to 69 percent for the areas related to the OIG’s ability to plan and conduct audit work. In contrast, based on our survey results, almost a quarter of the OIG employees believed that working under an acting IG had a negative effect on their OIG’s ability to complete reports and other OIG work products in a timely fashion, issue high-visibility or high-risk reports, and address high-risk and high-priority issues. According to the survey results, from 6 percent to 13 percent of the employees found a positive impact in these areas. We also asked OIG employees to identify any additional challenges, in written comments, that they experienced in relation to their work under an acting IG. Four OIG employees provided responses related to the ability to plan and conduct work, specifically, on the timely completion of reports and other OIG work products, as noted in the following examples of individual comments: “However, seemed to struggle to ‘see the forest through the trees’ and the timeliness (and associated impact) of our work suffered significantly.” “Sometimes it would take longer to get a report out because were a review from the IG.” We also asked OIG employees to identify any positive outcomes or improvements based on their experiences with working under an acting IG. The following are some OIG employee written responses that were received regarding positive outcomes or improvements, which were related to the acting IG’s ability to plan and conduct work. The acting IG came from within the OIG. Thirteen OIG employees provided comments related to the acting IG coming from within the OIG ranks and having expertise in the agency issues, as noted in the following examples of individual comments: “Our acting IG was already a part of our OIG when appointed. 
Thus, they were already invested in the mission, our offices, and staff.” “The acting Inspector General had significant experience with agency management, and with our office processes and procedures, so products were issued timely.” “A positive is that the acting Inspector General usually comes with a wealth of knowledge about the OIG’s current practices and can hit the ground running to keep things moving along effectively.” “Because of the acting IG’s investigative background as well as his lack of interest in further political appointment I think we actually got more done than under the former and current IG.” Views on the Impact of IG Vacancies on the OIG’s Ability to Interact with Agency Management Acting IGs: When asked whether, during their tenure as acting IGs, the vacancy had a positive impact, negative impact, or no impact on the OIG’s ability to interact with agency management, seven of the nine acting IGs indicated that there was no impact on the OIG’s ability to interact with the agency. Other acting IGs indicated a positive impact in regard to responsiveness from agency management, meeting with senior agency leadership, responsiveness of agency to recommendations, and timely access to agency documentation. One of the nine acting IGs indicated a negative impact regarding responsiveness of the agency to recommendations, and another saw a negative impact in timely access to agency documentation, as summarized in table 4. While the majority of the acting IGs responded that there was no impact in interactions with agency management, in commenting about challenges faced during their acting IG tenure that affected their ability to carry out their responsibilities, one acting IG commented that agency managers failed several times to disclose relevant information that affected both the results and timeliness of the OIG’s audit work. In addition, one acting IG found that agency officials were more open to recommendations and more supportive of the OIG during the acting IG’s tenure than under the previous permanent IG tenure. OIG employees: As shown in figure 6, we estimate that 63 percent of the OIG’s employees working under an acting IG believed that there was no impact on the responsiveness from agency management and an estimated 65 percent believed that there is no impact on timely access to agency documentation. Based on our survey results, the estimates for positive impact ranged from 7 percent to 9 percent, and approximately 17 percent of the OIG employees believed that working under an acting IG has a negative impact on these two areas. Acting IGs: Responses of the acting IGs regarding their ability to manage the OIG and employees varied by question, as summarized in table 5. For example, regarding employee morale, four of the nine acting IGs indicated that an acting IG leading the office had a negative impact, three indicated that the vacancy had a positive impact, and one indicated that the vacancy had no impact. In written comments included in the survey, three acting IGs provided additional information regarding restructuring the office and developing or changing office policy. Specifically, two acting IGs indicated a reluctance to make changes that could not be easily reversed by an incoming appointed IG or to “shake up the organization” only to experience further changes once an IG was in place. 
The third acting IG identified constraints as typical for acting officials in making personnel, policy, or organizational changes, especially when the length of the tenure as the acting official is unknown. We also asked the acting IGs if they had faced any challenges during their tenure that affected their ability to carry out their statutory duties and responsibilities. Of the three acting IG respondents who answered “yes,” two provided written responses citing challenges in the area of OIG management and personnel, such as difficulty in promotions and hiring decisions and OIG employee resistance to changes. For example, one acting IG indicated that the acting IG needed to get a special delegation from the agency to approve certain office promotions and hiring decisions. Another acting IG indicated the agency’s Office of General Counsel had to resolve a matter involving an employee who refused to relinquish his or her duties after the acting IG’s decision to reassign the employee. OIG employees: As shown in figure 7, just over 50 percent of the OIG employees working under an acting IG believe that an acting IG had no impact or a positive impact on these two areas. We also estimate that about 36 percent of the OIG employees believed that working for an acting IG negatively affected employee morale and about 23 percent believed that it negatively affected the ability to attract and retain qualified employees. We asked OIG employees to identify any additional challenges they have experienced in relation to their work under an acting IG. Eighty-three employees provided written responses, and 65 of those responses were related to areas that affect the ability to manage the OIG and its personnel, which are summarized below. Strategic planning. Nineteen OIG employees provided comments related to difficulty in strategic planning, as noted in the following examples of individual comments: “An acting IG is a caretaker, someone internal who is expected to maintain the status quo. Therefore, having an acting IG in place for an extended period may have delayed the implementation of reforms or bold changes that would normally be expected from new leadership.” “Internal processes, which may need to be changed, may not change in anticipation of the new leadership.” “Certain decisions such as ‘strategic vision’ or filling high-level positions within the organization may be delayed pending appointment of a permanent IG.” “[Acting IGs] are not as willing to make changes at the agency because it may not be what the new IG wants. [Acting IGs] are more stewards of the organization until the new IG arrives.” Uncertainty. Fifteen OIG employees provided comments related to the uncertainty within the OIG, as noted in the following examples of individual comments: “The ability to make long-term decisions is affected due to uncertainty [about whether the] incoming Inspector General will support the decisions made by the acting Inspector General.” “Waiting for a permanent selection and the uncertainty as to the future impact of the person selected is disconcerting. It also negatively affects employee morale and motivation.” “Working under an acting Inspector General creates a climate of uncertainty within the organization . . . . They hesitate to make a decision that would be contrary to the views and/or opinions of the new IG and put them in what they perceive to be a bad light.” “I think the biggest challenges we had were related to employee morale and the direction of the organization as a whole. 
Employees did not know who was going to permanently lead the organization, or when the decision would be made on this.” Staffing. Twelve OIG employees provided comments related to addressing staffing needs or issues with staffing, as noted in the following examples of individual comments: “There were several difficulties related to meeting human resource needs without the proper authority to make decisions such as removals, promotions and/or bonuses.” “Issues with staffing could not be finalized pending the appointment of a new IG.” “Everyone except a select few in the OIG senior staff was leaving.” Morale. Eight OIG employees provided comments related to morale issues, as noted in the following examples of individual comments: “Promotions were unnecessarily delayed under the acting IG. Not good for morale.” “Certain issues relating to personnel management were left unaddressed or dismissed (i.e., problem managers)[, causing] morale to dip among staff members.” “The acting IG appeared to have the need to prove to the agency what power they had. This, in effect, caused a great discord amongst not only agency management and OIG, but also between the OIG and the rest of the agency that we are still working to overcome.” Lack of leadership and office structure. Eight OIG employees provided comments related to the lack of leadership and office structure, as noted in the following examples of individual comments: “[The OIG] management organization was seemingly dysfunctional. In part, because alliances [were] likely to change once [a] permanent IG [was in place].” “There isn’t a sense of real structure without [an] IG.” “Lack of guidance on ongoing audits at that time. The acting IG wore too many hats: Acting IG, Assistant IG for Audits, and Assistant IG for Investigations.” Acting IGs are risk-averse pending permanent IG nomination. Two OIG employees provided the following comments related to the pending IG nomination: “I think it’s fair to say, although granted, it is a generalization, that an acting IG is more likely to be tentative and risk-averse than a fully confirmed IG. Also, within the OIG itself, senior staff may likewise be tentative and risk-averse knowing that new leadership is in the wings.” “The acting IGs are always hesitant to make waves . . . . One of them was in the process of being nominated, so didn’t want to do anything that could be seen as controversial or unpopular with staff. It [results in] the status quo being continued until a new official is confirmed.” Negatively affects budget discussions. One employee provided the following comment related to budget discussions: “In budget discussions with Congress and the administration, there is no trust that the acting IG understands the will of Congress . . . or has administration support.” We also asked OIG employees to identify any additional positive outcomes or improvements, in written comments, based on their experience from having an acting IG. Sixty-five employees provided written responses, and 12 of those responses related to the acting IG’s ability to manage the OIG and personnel, which are summarized below. Higher morale. Twelve OIG employees provided comments related to higher morale with an acting IG, as noted in the following examples of individual comments: “[Employee morale] scores [were] remarkably higher under [the acting IG].” “The acting IG, a career civil servant, established trusting relationships meant for the long haul with the leadership team and staff, and also members of the overseen agency, and with the Congress. 
Morale was high and productivity was exceptionally high.” “I believe that the morale and overall quality of work that I witnessed at OIG offices during the tenures of the two acting IGs that I worked for was superior to that of offices that I worked in under one or more Senate-confirmed IGs.” Acting IG, Permanent IG, and OIG Employee Views on the Impact of IG Vacancies on the Ability to Maintain Independence and Permanent IG Suggestions regarding Independence The following summarizes (1) responses from acting IGs, permanent IGs, and OIG employees regarding the impact, if any, of a prolonged vacancy on the OIG’s ability to maintain independence and (2) permanent IGs’ suggestions on how to improve independence. Acting IG Views on the Impact of IG Vacancies on the Ability to Maintain Independence We asked acting IGs if they felt that serving as an acting IG instead of a permanent IG created threats (such as self-interest threat or bias threat) to their independence of mind or independence in appearance, and eight responded “no” and one responded “yes.” The eight acting IGs who responded “no” to independence threats provided additional written comments to explain their answers, as noted in the following examples of individual explanations: “Because I’d been in the office since inception . . . I understood the importance of independence in all aspects.” “I was appointed to carry out the duties and functions of the IG and that is what I did to the best of my abilities. As an OIG employee, independence is always a factor, regardless of position and taking on additional duties and responsibilities did not impact that.” “I stated clearly and repeatedly to agency management and to Capitol Hill stakeholders that I was not interested in seeking the IG nomination on a permanent basis, in order to mitigate any concerns about independence or bias that could arise from seeking an appointment from officials I was charged with auditing/investigating.” “I declined the position of permanent Inspector General, in part to preserve my independence in the face of the potential conflict that could be perceived were I seeking the appointment. Serving in an acting capacity per se creates no threat to independence in fact or in appearance insofar as I am concerned based on my experience.” “Serving as acting IG had no threats to independence.” The acting IG that responded “yes” commented that there may be an appearance of independence problem if the acting IG is lobbying for the permanent position. We also asked the acting IGs if their independence was ever questioned by agency officials or others because of their role. Eight of the nine acting IGs answered “no,” while one acting IG answered “yes” and indicated that an external entity had questioned the independence of the acting IG. The acting IG further commented that certain Members of Congress had questioned the independence of acting IGs. Permanent IG Views on the Impact of IG Vacancies on the Ability to Maintain Independence We asked 52 permanent IGs whether they felt that an acting IG is inherently less independent than a permanent IG and whether an acting IG is less independent in appearance. While the majority of permanent IGs who responded did not think that acting IGs are inherently less independent, they did indicate by a similar majority that an acting IG is less independent in appearance than a permanent IG, especially in situations when the acting IGs are applying for the IG positions. 
Of the 49 IGs who responded to the question of whether an acting IG is inherently less independent, 13 said “yes,” 30 said “no,” and 6 responded that they had no basis for judgment, as shown in figure 8. Of the 13 permanent IGs that answered “yes” to the acting IG being inherently less independent, 12 provided written comments as noted in the following examples of individual explanations. An acting IG who is a candidate for the position. Six permanent IGs provided comments related to an acting IG who is seeking the permanent position, as noted in the following examples of individual comments: “If the selecting officials (or recommending officials) are also subject to audit or investigation by the acting [IG], and the acting [IG] is interested in the permanent position they may actually be influenced to not report aggressively.” “They could be perceived as less independent if they are a candidate for the job and they often are.” “Generally speaking, the position of Inspector General would be a desirable promotion for an acting IG (sometimes the Deputy IG). An acting/Deputy IG, interested in the IG position and striving to impress the agency leadership/White House for consideration of the IG job, could be less aggressive (independent) in an effort to please the ‘hiring official’ (agency head/White House). Agency leaders/White House understand this dynamic, so in order to avoid/minimize any negative reports by the OIG, the agency heads can delay filling IG positions in order to have more ‘control’ over their acting IG.” Lack of Senate confirmation. Three permanent IGs provided comments in this category related to an acting IG having less authority to deal with agency officials and Congress than a permanent IG as the acting IG lacked Senate confirmation, as noted in the following individual comments: “Not having the full backing of the President, nor confirmation of the Senate, does not provide an even playing field when the IG negotiates with PAS agency heads and other PAS or senior level officials.” “First, because the agency knows that the acting IG is only temporarily in that position, the willingness of agency officials (particularly middle management and component leadership) to inappropriately respond to and challenge OIG oversight efforts increases. Second, an acting PAS IG (unlike a confirmed PAS IG) has not been approved for that position by the Senate and therefore doesn’t have that stamp of approval if there is a need to respond to inappropriate efforts by the agency to interfere with the OIG.” “In my experience, discussions between the Dept’s political leaders and the ‘permanent,’ politically-appointed IG (as well as between Congress and that IG) are different—more frank—in substance and tone.” Of the 30 permanent IGs that answered “no” to the acting IG being inherently less independent, 28 provided written comments as noted in the following examples of individual explanations. An acting IG has the same statutory authority as a permanent IG. 
Eight permanent IGs provided responses related to the acting IG having the same statutory authority as a permanent IG and the OIG structure having independence safeguards, as noted in the following examples of individual comments: “Because of the inherent structure of an OIG, with the independence safeguards that are derived from the IG Act, the Office of Inspector General should continue to be independent even if headed by an acting IG.” “An acting IG has the same independence protections as a ‘permanent IG’.” “[Acting IGs] have the same statutory powers as an appointed IG to fulfill their role.” Having a permanent title should not be a factor in independence. Ten permanent IGs provided responses related to a permanent title not being a factor in independence as the acting IGs are held to the same standards and independence is driven by the acting IG’s character and background, as noted in the following examples of individual comments: “Independence is a matter of personal mindset and perceptions drawn by others based on individual/Office actions. Having the permanent title is not a key element required in order for the above to effectively exist.” “An acting IG can carry out his/her responsibilities as independently as a permanent IG; there are no inherent restrictions on their ability/capacity due solely to status. It boils down to the individual involved and their willingness/ability to do so in the context in which they operate.” “The independence resides in the position regardless of whether being occupied by an acting or permanent IG.” “The independence of an IG is largely driven by his or her character, background, and experience.” “Independence is obtained by the characteristics of the individual in the position of Inspector General. Just because the person occupying the position is ‘acting’ does not mean they are not independent.” An acting IG is usually a career OIG employee. Five permanent IGs provided comments related to the acting IG being a career OIG employee and knowing the importance of independence, as noted in the following examples of individual comments: “Career OIG employees place a high value on the independence of the office.” “Generally acting IGs come from within the OIG and have long service in the community and an understanding of and commitment to the role of the IG.” We also asked permanent IGs whether they felt that an acting IG is less independent in appearance than a permanent IG. Thirty of the 49 IGs who responded to this question answered “yes” and 13 answered “no,” as shown in figure 9. Of the 30 permanent IGs who answered “yes” to this question, 27 provided written comments, some of which are summarized below. An acting IG will be less independent in appearance if he or she is seeking the permanent position. Sixteen permanent IGs provided comments related to an acting IG being less independent in appearance if he or she is seeking the permanent position or perceived to be seeking the permanent IG position, as noted in the following examples of individual comments: “There will always be an appearance issue regarding the judgment of an acting IG if that individual is seeking the permanent position.” “There may be an appearance that an acting IG is less independent from the agency, particularly where he or she is seeking to become the permanent IG and needs the endorsement of the agency to move forward.
This scenario could create an appearance of, or an actual, conflict of interest.” “If the incumbent aspires to the permanent appointment, I feel the designation as acting Inspector General carries the inherent risk that the incumbent may be vulnerable to political pressures, since the incumbent’s chances of being appointed as the permanent Inspector General may be adversely influenced by sensitive or controversial decisions made during the period that he/she served as acting Inspector General.” “An ‘acting’ may be reluctant to assert independence if the acting believes that he or she may be in the running for the vacant IG job. This may create a conflict under certain facts.” “Unfortunately, if an acting IG is interested in becoming the IG, people who are looking for reasons to find fault with their work can make an argument that they are pulling punches to better their chances of being selected. I don’t think this is true in most cases, but the argument is made.” An acting IG is also perceived as less independent. Six permanent IGs provided comments related to an acting IG being perceived as less independent by Congress, the public, and other organizations, as noted in the following examples of individual comments: “I am aware of at least one instance where the press and certain Members of Congress speculated or implied that an acting IG who wanted to be considered for appointment as the IG was lenient toward the agency.” “Congress and the public . . . have both expressed this concern.” “There is an inherent suspicion that the acting IG will pull his or her punches on audits and inspections in order to get nominated by the agency he is auditing.” “Some judge an acting IG for the actions they take or don’t take through the prism of partisan politics and often unfairly ascribe decisions to the acting IG’s interest in becoming an IG.” Of the 13 permanent IGs who answered “no,” 11 provided written comments, some of which are summarized below. Acting IGs have the same authority as permanent IGs. Three permanent IGs provided comments related to an acting IG having the same authority as a permanent IG, as noted in the following examples of individual comments: “The law doesn’t change and tenets such as independence are the same regardless of whether you are acting or not.” “An acting IG still heads an independent Office of Inspector General and as long as that office continues to act independently, there should be no appearance issue.” “The acting Inspector General has the same authority as a permanent IG.” Acting IGs should be able to perform their work independently. One permanent IG provided the following comment related to an acting IG performing his or her work independently: “I don’t necessarily think an acting IG has an appearance of lack of independence per se. Again, I think it depends on the acting IG, the agency, and the relationship between the OIG and the agency.” We also asked permanent IGs for suggestions on how the independence of the acting IG role could be improved. Although the majority of permanent IGs did not provide specific suggestions, the following summarizes the 12 written responses received: Expedite the appointment process (7 respondents). Make acting IGs ineligible for the permanent position (1 respondent). Establish a legislative solution for filling positions quickly (1 respondent). 
Specifically, there should be requirements that (1) acting IGs be named within 30 days of vacancy and the IG position filled within a certain amount of time; (2) DFE IG positions be filled within 180 days of a vacancy, and if not, the agency head should be required to report every 30 days to the agency’s oversight committees on the reason for delay; and (3) for PAS IG positions, a candidate should be nominated within 180 days. For visibility, make clear whether the acting IG is under consideration for the permanent position (1 respondent). The administration should do this for a PAS IG, and the agency should for a DFE IG. Extend statutory protection to acting IGs (1 respondent). “The independence of the acting Inspector General role could be improved by extending the same protections mandated for the Inspector General position to the acting Inspector General (as appropriately tailored for the temporary nature of the ‘acting’ role).” Rotate the individuals who will be in the acting IG position (1 respondent). In addition to views on the acting IG’s independence, we asked permanent IGs to provide additional comments and identify any challenges related to the acting IG role and prolonged IG vacancies. Thirty-one written responses were provided for this question, some of which are summarized below. Importance of permanent IGs. Six permanent IGs provided written comments related to the importance of the permanent IG and impediments in the role of acting IGs, as noted in the following examples of individual comments: “Prolonged IG vacancies are never good, and negatively impact the entire IG community and CIGIE because we need fully engaged IGs who can participate in IG and CIGIE business knowing that they will be in the position for the long-term and without wondering when and whether they will be replaced.” “IG vacancies have been allowed to be vacant for years. While the role of an acting IG may be filled successfully, it is important to each agency/department to have a permanent IG who is appointed by the appropriate process.” “Extended vacancies undermine the system of checks and balances.” “I generally believe that it is detrimental for an OIG to have a prolonged IG vacancy with an acting IG. I believe that acting IGs may be disinclined to take necessary agency actions because of their temporary status. In addition, the acting IG is vulnerable to attacks on his or her independence, particularly where he or she is seeking a permanent position and requires the agency’s endorsement.” Effect on strategic planning. Eight respondents pointed out challenges acting IGs face in long-term planning, as noted in the following examples of individual comments: “One of the biggest challenges to an acting IG may be the ability to make long-term plans for the organization.” “A prolonged vacancy creates a leadership gap for the OIG and the entity.” “Acting IGs do not feel empowered to take on new initiatives or projects on behalf of the office, and may feel inhibited in terms of management issues, including hiring.” Authority. 
Four respondents commented on the need for authority provided by permanent leadership, as noted in the following examples of individual comments: “Regardless of whether the discussion is focused on acting IG positions or any acting leadership position (within Mission or otherwise), there is some level of authority in terms of institutional impact and ability to effect change that comes from knowing those advancing mission have some level of anticipated continuity in service and ability to see things through.” “The acting did a remarkable job at getting the office through a very difficult time, but largely saw [himself or herself] as a caretaker. [The acting IG] did not feel comfortable doing the things that I immediately recognized needed to be done. The Office’s work got little traction while the acting was in charge, in part because the Office was without a permanent leader and the agency did not feel compelled to pay attention to OIG recommendations.” “I believe the greatest challenge to anyone in an acting role has more to do with authority than it has to do with independence . . . . I believe it is often difficult for anyone in an acting position to think long-term and make decisions that have long-term implications because they (1) have no idea how long they will be acting and (2) may be overruled or have decisions reversed by a permanent appointee. So I think acting individuals tend to ‘keep the home fires burning’ as well as they can but don’t necessarily think in terms of leading the organization in the direction it needs to go in the future, especially since they don’t know what the future will bring.” OIG morale. Four respondents reported morale problems in OIGs without a permanent IG, as noted in the following examples of individual comments: “Prolonged vacancies in senior leadership positions, whether in an OIG or other government offices, can lead career employees to lose their focus and their dedication to fulfill the mission of the office. When new leadership is finally put into place, it often encounters stiff resistance to any changes because the employees have enjoyed being ‘home alone’.” “The prolonged vacancy at the agency diminished the stature of the office and did not make it an inviting place for experienced oversight staff to want to work.” IG vacancies seen as lack of support. Five respondents reported that prolonged vacancies are seen as a lack of congressional or agency support for the OIG, as noted in the following examples of individual comments: “Prolonged vacancies in the IG position . . . can be viewed by some as a lack of support for the IG oversight mission on the part of the Administration and Congress.” “Any individual serving in any position with the word ‘acting’ in front of it inherently carries less authority than the same individual in the same position serving in a permanent capacity. The longer an IG position is left vacant the greater the appearance that the agency does not want to have an IG providing oversight.” OIG Employees’ Views on the Impact of IG Vacancies on the Ability to Maintain Independence OIG employees’ views on the inherent independence of an acting IG as compared to the independence of a permanent IG are summarized in figure 10. Based on our survey, we estimate that 16 percent of the OIG employees believe that an acting IG is inherently less independent than a permanent IG. Of the employees who responded “yes,” 25 provided written explanations along with their answers, some of which are summarized below.
The acting IG may be seeking a permanent position. Eleven OIG employees provided comments related to the acting IG seeking a permanent position, as noted in the following examples of individual comments: “If interested in permanent appointment, there is a risk that acting IG becomes more interested in being liked by and pleasing the agency, thus independence could be impaired.” “An acting Inspector General may be seeking an IG appointment. He/she wants the agency to like him, to support his nomination, and may kowtow to them. This dynamic may result in a ‘don’t rock the boat’ mentality.” “If the acting IG is going to be a candidate for the IG position, and is appointed by the head of the agency, they may stay away from reviewing sensitive issue areas.” The acting IG came from within the OIG. Three OIG employees provided comments related to the acting IG selected from within the OIG having preconceived notions, as noted in the following examples of individual comments: “Our acting Inspector General was previously the IG for Audits and Evaluation. As such, [he or she] entered the position with substantial preconceived notions about the other directorates. In contrast, our permanent IG came to the position with limited preconceived notions. In the future, it would be better if the Acting IG came from another IG (as opposed to temporarily promoting from within).” “I believe that an acting IG is inherently less independent because he or she has no official term, may either receive an appointment as IG, or be replaced at the discretion of the President.” “Bring in an acting IG from another agency for independence reasons or ensure other acting positions are filled and the acting IG is not performing multiple roles.” Based on our survey, we estimate that 52 percent of the OIG employees believe that an acting IG is not inherently less independent than a permanent IG. Of the 71 employees who responded “no” to this question, 56 provided written explanations, some of which are summarized below. There is no difference between the permanent IG and an acting IG. Eighteen OIG employees provided comments related to the acting IG and permanent IG having no difference, as noted in the following examples of individual comments: “We saw absolutely no difference in the independence of the acting IG [versus] the appointed IG.” “The acting title (as compared to a permanent IG title) is irrelevant. It ALL comes down to the specific individual occupying the position.” “The Inspector General is independent by law. The authority of the position is the same, whether it is filled by an acting IG or a permanent IG. . . . I have not encountered circumstances in which I felt the acting IG was inherently less independent.” “The acting IG at [our agency] was the Deputy IG who is a strong ethical and principled leader. There was no change to our mission, focus, or independence, nor in our ability to conduct our work. To suggest that, merely because there was an acting IG, independence was inherently compromised is unfounded, bespeaks a lack of understanding of OIG standards and ethics, and is just wrong.” “The acting IG served as any IG would be expected to in the area of independence. No difference there.” An acting IG is independent.
Nineteen OIG employees provided comments related to the acting IG’s independence, as noted in the following examples of individual comments: “Based on my experience, both acting IGs were career OIG employees [who] understood and embraced independence.” “I felt the acting IG was very independent and did a fantastic job.” “All persons within the OIG are to be objective and independent, no matter their position.” “[Our] acting IG [demonstrated] the same level of independence that is expected of all IG employees.” “[Our] acting IG is as independent as our previous [IG] and is not hesitant to report problems and weaknesses to Congress.” An acting IG and permanent IG follow the same independence standards. Six OIG employees provided comments related to the acting IG and permanent IG having the same independence standards, as noted in the following examples of individual comments: “The acting is subject to the same standards.” “The acting IG is just as important and they adhered to all the laws and regulations as the IG.” “Acting or permanent, they are held to the same standards of independence.” An acting IG position is not less independent. Six OIG employees provided comments related to the acting IG position not being less independent and depending on the individual in the role, as noted in the following examples of individual comments: “Whether an acting IG is able to maintain independence is dependent upon the person holding the position and his or her confidence, strength of character, leadership capabilities and subject matter expertise. The same is true for IGs.” “It depends on the individual. If a particular acting IG is a strong person, who puts aside any desire to pander to the agency head in the hope of being made permanent, there would be no effect on his/her independence.” We also asked OIG employees to identify any additional challenges they experienced in relation to working under an acting IG. Overall, 83 employees provided written responses, and 4 of those responses identified additional challenges related to OIG independence, as noted in the following examples of individual comments: “Having worked in OIGs and observed functioning in other OIGs, the acting IG issue seems serious. There are subtle pressures to go along with management. Few acting IGs deliberately decide to compromise their principles, but many seem to wind up doing so.” “Because the acting IG wanted to gain the support of others, [he or she] was not independent.” “The one challenge I am concerned with an acting IG is if that person has applied for the IG position and will not commit to certain decisions that will negatively impact their opportunity to obtain the permanent position as IG.” We also asked OIG employees to provide suggestions on how the independence of the acting IG role could be improved. The majority of the 25 respondents who provided written comments to this question did not provide suggestions for improving the independence. The comments that provided suggestions are summarized below: Appoint an IG in a timely manner (4 respondents). Consult with other CIGIE IGs to help monitor and assess the acting IG based on clear criteria and expectations (1 respondent). Limit the amount of time an acting IG can serve (1 respondent). Bring in an acting IG from another agency for independence reasons or ensure that other acting positions are filled and the acting IG is not performing multiple roles (1 respondent).
Suggestions from Permanent IGs for Improving the Appointment Process Prolonged IG vacancies have been the subject of congressional hearings because of the importance of these key oversight positions. Delays in the presidential nomination and Senate confirmation process for all positions filled by this process, including PAS OIGs, have also been the subject of recent academic studies. For example, a recent study that explored the failure of nominations and the delay in confirmation of successful nominations across recent administrations from 1981 to 2014 found that nominations for the IG position had about a 24 percent failure rate. Given that, in recent years, certain OIGs have experienced prolonged IG vacancies, especially IGs that require presidential nomination and Senate confirmation, we asked the 52 surveyed permanent IGs to provide comments on their experience with the appointment process and any suggestions for improving the process and minimizing the duration of IG vacancies. Comments were provided by 45 permanent IGs in these areas, including eight suggestions to minimize the duration of IG vacancies, as noted in the following individual comments: “One thing that could be improved [is] an agreement between the , Congress and on a format for information. I was required to provide essentially the same information (with small variations) three times. But the precise formatting and framing of the questions [asked of the nominees] was different in each case, taking time and creating the possibility of inconsistencies.” “A possible suggestion would be to improve the timeliness of the selection, vetting, and confirmation process of IGs, particularly given the current number of vacancies. IGs play a vital role in ensuring that government programs and operations are functioning efficiently and effectively, and greater emphasis on the part of the White House and Congress to nominate and confirm IGs in a timely manner would provide great benefit.” “I believe the process could be improved by streamlining the number of committees involved so that each nominee need only obtain approval from one committee.” “While I worked through the paperwork requirements efficiently, it was a tremendous lift and I wonder if all that is required is necessary and in the form it took. I found a good degree of duplication in what was asked of [me] from the . . . and Senate. I think there are opportunities to streamline with better coordination.” “[Having] a timeline from start to finish would be helpful. I also recommend that Congress prioritize IG confirmations above most other confirmations.” “Faster consideration and vote would be useful.” “The Senate [should] be required to act on IG candidates within 90 days of their nomination by the President.” “Although I think it is very important for any IG to have a strong working relationship with the agency head, it seems inappropriate for the agency head to have a strong voice in selecting the nominee for a presidentially appointed, Senate-confirmed IG who is supposed to provide independent oversight of the agency. I suggest changing the process to omit the pre-selection interview with the agency head and substitute instead a pre-nomination courtesy meeting.” Agency Comments and Our Evaluation We provided a draft of this report to CIGIE for comment and CIGIE shared the draft with the 64 OIGs active under the IG Act. CIGIE and the OIGs at the National Credit Union Administration and U.S.
Election Assistance Commission provided written comments, which are discussed below and reprinted in appendixes II, III, and IV, respectively. CIGIE expressed appreciation for the review and analysis efforts that we conducted for the purposes of this report. CIGIE also noted some information regarding the Central Intelligence Agency IG and the Intelligence Community IG, which were outside the scope of our work. CIGIE stated that both IGs are PAS and that the Central Intelligence Agency IG position has been vacant for over 3 years. The National Credit Union Administration OIG stated that while it did not have a vacancy during the 10-year period we reviewed, it agreed that looking at this area to reduce IG vacancies is an important endeavor. The U.S. Election Assistance Commission OIG expressed concurrence with the facts as they pertain to its office and stated that the report will contribute to improving the appointment process for IGs. In addition, CIGIE and the OIGs at the Appalachian Regional Commission, Denali Commission, Department of Commerce, Department of Education, Department of Housing and Urban Development, Federal Deposit Insurance Corporation, General Services Administration, National Reconnaissance Office, and U.S. Election Assistance Commission provided technical comments, which we incorporated as appropriate. The remaining OIGs did not provide comments. We are sending copies of this report to the Executive Director of CIGIE and to the 64 IG Act offices listed in this report as well as interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2623 or davisbh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology The objectives of this report were to determine (1) the status of inspector general (IG) vacancies as of the end of fiscal year 2017, and the number and duration of the IG vacancies for fiscal years 2007 through 2016, and (2) the views of the IG community on the impacts, if any, of IG vacancies on the Offices of Inspector General’s (OIG) ability to effectively carry out their duties, including views on independence and permanent IG suggestions for improvements in the appointment process. To address these objectives we included in our scope the 64 active OIGs that were established under the IG Act of 1978, as amended (IG Act). To determine the status of IG vacancies as of the end of fiscal year 2017, we obtained the vacancy data from the 64 OIGs active under the IG Act, and documented any changes for fiscal year 2017. To identify IG vacancies and changes for fiscal years 2007 through 2016, we first obtained vacancy data from the Council of the Inspectors General on Integrity and Efficiency (CIGIE). We interviewed CIGIE personnel to obtain an understanding of issues related to IG vacancies and to discuss the reliability of the vacancy data. Data obtained from CIGIE included the resignation dates of the permanent IGs, vacancy start and end dates, names of the acting IGs, names of newly appointed IGs, and whether each IG was presidentially appointed, Senate confirmed (PAS) or appointed by the head of a designated federal entity (DFE). 
We also obtained nominations from Congress.gov, which included information on nominated IGs and the status of those nominations. As part of our data reliability procedures, we confirmed the vacancy data with the 64 OIGs established under the IG Act. We reviewed and summarized the IG vacancy data and documented any changes in IG vacancies for fiscal years 2007 through 2016. In 2014, the IG appointment structure for the IGs of the National Security Agency and National Reconnaissance Office was changed from DFE to PAS. For the 10-year period under review, these two OIGs experienced vacancies during both their DFE and new PAS status. However, to avoid duplicating the agencies, we only counted the number and length of vacancies for each agency under the PAS IGs. To obtain the views of the IG community—specifically, permanent IGs, acting IGs, and employees working under an acting IG—on the impact that a prolonged IG vacancy can have on the OIG’s ability to carry out its duties effectively, including any impact on independence, we conducted web-based surveys of 54 IG Act OIGs. These surveys included both multiple choice and open-ended questions for written responses to obtain the views of the IG community on the impacts of vacancies, if any, and views on independence, challenges, and positive outcomes. The surveyed groups were as follows: Fifty-two permanent IGs serving as of August 22, 2017. We used both multiple choice questions and open-ended questions to obtain their views on the impact that an IG vacancy could have on the OIG’s ability to conduct its oversight, including any independence issues presented by an acting IG. We also asked the permanent IGs to provide any suggestions for improvements in the appointment process. The survey was administered on the web from August 22, 2017, through September 29, 2017. The survey response rate of permanent IGs was 96 percent: 50 of the 52 permanent IGs completed the survey. Two permanent IGs did not respond to the survey. Nine acting IGs who had served for over 365 days from fiscal years 2014 through 2016. We used both multiple choice questions and open-ended questions to obtain their views on the impact that a prolonged vacancy could have on the acting IG’s ability to carry out his or her duties, including any impact on independence. The survey was administered on the web from August 22, 2017, through September 29, 2017. The survey response rate of acting IGs was 100 percent. While 14 acting IGs met our selection criteria, 4 have either retired or have since left the government and were not surveyed. The National Reconnaissance Office’s acting IG was excluded because of concerns regarding sensitive personally identifiable information. Of the 9 remaining acting IGs, 2 are now permanent IGs but provided responses for their acting IG tenure, which were included with those of the 7 acting IGs. In this report, we refer to all nine as acting IGs. A stratified random sample of 185 OIG employees consisting of 39 Senior Executive Service (SES) employees and 146 non-SES OIG employees, from OIGs with an acting IG in place for over 365 days from fiscal years 2014 through 2016. We used both multiple choice questions and open-ended questions to obtain the employee views about challenges related to working under an acting IG as compared to a permanent IG. The web-based survey was administered from September 11, 2017, through September 29, 2017. We had a weighted survey response rate of 71 percent; 133 of the sample of 185 employees completed the survey.
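To illustrate how estimates from such a stratified sample are weighted, the sketch below combines stratum-level results into a population-weighted estimate and an approximate 95 percent confidence interval of the kind described in the next paragraph. It is a minimal sketch only: the stratum sizes, sample sizes, and response counts are hypothetical placeholders rather than GAO's survey data, and GAO's actual estimation procedures may differ in their details.

```python
# Minimal sketch: stratified (weighted) estimate of a proportion with an
# approximate 95 percent confidence interval. All numbers are hypothetical.
import math

# Per stratum: population size (N), completed responses (n), and "yes" answers.
strata = {
    "SES": {"N": 45, "n": 30, "yes": 5},
    "non_SES": {"N": 300, "n": 100, "yes": 18},
}

total_N = sum(s["N"] for s in strata.values())

# Weighted estimate: each stratum's sample proportion weighted by its share
# of the population.
p_hat = sum((s["N"] / total_N) * (s["yes"] / s["n"]) for s in strata.values())

# Approximate variance of the stratified estimator, with a finite population
# correction applied within each stratum.
var = 0.0
for s in strata.values():
    p = s["yes"] / s["n"]
    fpc = (s["N"] - s["n"]) / (s["N"] - 1)
    var += (s["N"] / total_N) ** 2 * fpc * p * (1 - p) / s["n"]

margin = 1.96 * math.sqrt(var)  # normal-approximation 95 percent interval
print(f"Weighted estimate: {p_hat:.1%} "
      f"(95% CI: {p_hat - margin:.1%} to {p_hat + margin:.1%})")
```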
Because we followed a probability procedure based on random selections, our OIG employee sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 10 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Confidence intervals are provided along with each sample estimate in the report. Estimates from the employee survey are generalizable to the population of employees from OIGs that had an acting IG in place for over 365 days from fiscal years 2014 through 2016. To minimize nonsampling errors, and to enhance data quality, we employed recognized survey design practices in the development of the questionnaire and in the collection, processing, and analysis of the survey data. To minimize errors arising from differences in how questions might be interpreted and to reduce variability in responses that should be qualitatively the same, we conducted pretests with permanent IGs, acting IGs, and employees. To ensure that we obtained a variety of perspectives on our survey questions, we randomly selected three permanent IGs, two acting IGs, and two employees for the pretests. Based on their feedback, we revised each survey in order to improve the clarity of the questions. An independent survey specialist within GAO also reviewed a draft of each survey prior to its administration. To reduce nonresponse, another source of nonsampling error, we followed up by e-mail or phone with the IGs, acting IGs, and employees who had not responded to encourage them to complete the survey. We did not survey a total of 10 IG Act OIGs. Nine OIGs were not surveyed because there was no permanent IG in position or the acting IG at the time of our survey did not meet our criteria of serving for more than 365 days from fiscal year 2014 through 2016. Those OIGs were at the U.S. Postal Service, Social Security Administration, Small Business Administration, Office of Personnel Management, National Security Agency, Federal Election Commission, Department of Housing and Urban Development, Department of Energy, and Department of Defense. In addition, one OIG, the National Reconnaissance Office, was not surveyed because of concerns regarding sensitive personally identifiable information. We also performed a two-step content analysis on the open-ended survey responses to summarize key ideas. In the first step, analysts read the respondents’ comments and jointly developed categories for them. In the second step, each open-ended response was coded by one analyst, and then those codes were verified by another analyst. Any coding discrepancies were resolved by the analysts discussing the comments and then agreeing on the code. In some cases, we edited responses for clarity or grammar. Views expressed in the open-ended questions may not be representative of all acting IGs, permanent IGs, or employees on given topics. We did not assess the merits of the individual comments or suggestions provided in response to the open-ended survey questions. We conducted this performance audit from February 2017 to March 2018 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Council of the Inspectors General on Integrity and Efficiency Appendix III: Comments from the National Credit Union Administration Office of Inspector General Appendix IV: Comments from the U.S. Election Assistance Commission Office of Inspector General Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Elizabeth Martinez (Assistant Director), Carl Barden, Jason Kirwan, Christopher Klemmer, Jill Lacey, Won Lee, Yvonne Moss, and Lisa Rowland made key contributions to this report.
Why GAO Did This Study The IG Act established OIGs to conduct and supervise audits and investigations; recommend policies to promote economy, efficiency, and effectiveness; and prevent and detect fraud and abuse. The Inspector General Empowerment Act of 2016 included a provision for GAO to review prolonged IG vacancies during which a temporary appointee has served as the head of the office. This report addresses (1) the status of IG vacancies as of the end of fiscal year 2017, and the number and duration of IG vacancies for fiscal years 2007 through 2016, and (2) the IG community's views about how IG vacancies impact the OIGs' ability to carry out their duties effectively, including views on the impact on independence. GAO analyzed data related to IG vacancies; interviewed officials from the Council of the Inspectors General on Integrity and Efficiency (CIGIE); and conducted a web-based survey to obtain the views of (1) the 52 permanent IGs serving as of August 22, 2017; (2) 9 acting IGs who had served in OIGs that had vacancies of over 365 days during fiscal years 2014 through 2016; and (3) a stratified random sample of employees in OIGs with IG vacancies of over 365 days during fiscal years 2014 through 2016. Survey response rates ranged from 71 percent to 100 percent. CIGIE and nine OIGs provided technical comments, which were incorporated as appropriate. What GAO Found For the 10-year period covering fiscal years 2007 through 2016, 53 of the 64 IG Act OIGs experienced one or more periods of IG vacancy with the cumulative durations ranging from about 2 weeks to 6 years. Plan and conduct work. Overall, at least eight of the nine acting IGs responded “no impact” for the questions in this area. The estimated percentage of OIG employees who believed that working under an acting IG has “no impact” ranged by question from 49 percent to 69 percent, “negative impact” ranged from about 8 percent to 24 percent, and “positive impact” ranged from 6 percent to 13 percent. Interact with agency management. The responses of seven of the nine acting IGs and 63 percent to 65 percent of OIG employees indicated that an acting IG position had no impact in this area. Approximately 16 percent of the OIG employees believed that there was a negative impact on timely access to documentation, while 7 percent believed that there was a positive impact. Managing OIG and personnel. Four of the nine acting IGs and about 36 percent of OIG employees responded that an acting IG position had a negative impact on employee morale. An estimated 44 percent of employees believed that working under an acting IG had no impact on employee morale while about 10 percent believed it had a positive impact. Four acting IGs also responded that it had a negative impact on office restructuring. With regard to independence, GAO's survey of permanent IGs found that while the majority who responded did not think that acting IGs are inherently less independent, they did indicate by a similar majority that an acting IG is less independent in appearance than a permanent IG, especially when the acting IG is applying for the IG position.
Background BOP’s Roles and Responsibilities in Providing Mental Health Care to Incarcerated Inmates To identify inmates with mental illness, BOP screens inmates prior to designation to a facility by reviewing an inmate’s pre-sentence report and assigning preliminary medical and mental health screening levels. Once an inmate is designated to a BOP institution, the institution staff assesses inmates to provide an accurate mental health diagnosis and determination of the severity of any mental illness, as well as to determine their suicide risk. BOP also identifies the mental health needs of each inmate and matches the inmate to an institution with the appropriate resources. Institution mental health care levels range from 1 to 4, with 1 being institutions that care for the healthiest inmates and 4 being institutions that care for inmates with the most acute needs. Inmate mental health care levels are also rated in this manner from level 1 to level 4. After an inmate arrives at a BOP institution, during the admission and orientation process, every inmate receives information on mental health services available at that site. Table 1 identifies inmate mental health care levels and the percentage of all inmates by designated level. Throughout an inmate’s incarceration, BOP psychologists, psychiatrists, and qualified mid-level practitioners (i.e., a physician assistant or nurse practitioner who is licensed in the field of medicine and possesses specialized training in mental health care) can determine a new mental health care level following a review of records and a face-to-face clinical interview. BOP’s Psychology Services Branch, which the Reentry Services Division oversees, provides most mental health services to inmates in BOP-operated institutions, including providing individualized psychological care and residential and non-residential treatment programs (Figure 1 shows BOP’s organization for providing mental health services). BOP’s Health Services Division manages psychiatry and pharmacy services. Most mental health treatment is provided in what BOP calls its mainline, or regular, institutions. Acutely ill inmates in need of psychiatric hospitalization, such as inmates suffering from schizophrenia or bipolar disorder, may receive these services at one of BOP’s five medical referral centers, which provide inpatient psychiatric services as part of their mission. At BOP institutions, psychologists are available for formal counseling and treatment on an individual or group basis. In addition, staff in an inmate’s housing unit is available for informal counseling. Psychiatric services available at the institution are enhanced by contract services from the community. BOP Criteria Used to Identify the Population of Inmates with Serious Mental Illness Prior to the passage of the 21st Century Cures Act, and at the beginning of our work, BOP defined serious mental illness in accordance with the agency’s program statement—which states that classification of an inmate as seriously mentally ill requires consideration of diagnoses; the severity and duration of symptoms; the degree of functional impairment associated with the illness; and treatment history and current treatment needs. In accordance with BOP’s program statement, BOP used this guidance along with other variables to develop six criteria to identify the population of inmates with serious mental illness who were incarcerated in fiscal years 2016 and 2017—the most recent fiscal years for which data on these criteria are available.
The additional criteria to identify the population of inmates with serious mental illness are as follows:
1. Inmate was evaluated by BOP and assigned a mental health care level 3: An inmate requires enhanced outpatient mental health care such as weekly psychosocial intervention or residential mental health care.
2. Inmate was evaluated by BOP and assigned a mental health care level 4: An inmate requires acute care in a psychiatric hospital; the inmate is gravely disabled and cannot function in a general population environment.
3. Inmate was assigned a mental health study level 4: This indicated that the inmate was subject to a court ordered forensic study that required an inpatient setting.
4. Inmate was diagnosed to have one or more of 74 Diagnostic and Statistical Manual of Mental Disorders (DSM) diagnoses, both active and in remission, that BOP considers a serious mental illness.
5. Inmate was evaluated by BOP and identified as having a chronic suicide risk, due to the inmate having a history of two or more suicide attempts.
6. Inmate was evaluated by BOP and assigned a psychology alert status. This designation was applied to inmates who were evaluated as having substantial mental health concerns and requiring extra care when changing housing or transferring institutions.
On August 15, 2017, in a memorandum for the Comptroller General of the United States from the Acting Director of BOP, BOP defined “serious mental illness” for purposes of section 14016 of the 21st Century Cures Act as follows: Individuals with a serious mental illness are persons:
Who currently or at any time during the past year,
Have had a diagnosable mental, behavioral, or emotional disorder of sufficient duration to meet diagnostic criteria specified within the most current edition of the Diagnostic and Statistical Manual of Mental Disorders,
That has resulted in functional impairment which substantially interferes with or limits one or more major life activities.
The memorandum also stated that BOP may further operationalize this definition by identifying specific mental disorders which are to be classified as serious mental illness and providing examples of functional impairment specific to BOP’s settings and/or populations. BOP officials indicated that BOP’s program statement and the six criteria to identify the population of inmates with serious mental illness who were incarcerated in fiscal years 2016 and 2017 would coincide with the definition for “serious mental illness” provided in the memorandum for the Comptroller General of the United States for purposes of the 21st Century Cures Act and identify an identical set of BOP inmates with “serious mental illness” for fiscal years 2016 and 2017. Incarceration and Reentry Are Key Periods to Affect Recidivism The periods during incarceration in federal and state prisons and reentry into the community are considered to be key periods to implement interventions to reduce recidivism among individuals with serious mental illness, according to public health and correctional stakeholders. The Bureau of Justice Statistics has found that for all offenders, regardless of their mental health status, the highest rate of recidivism occurs during the first year after release from prison. Further, researchers have found that offenders with serious mental illness return to prison sooner than those without serious mental illness. Multiple factors may contribute to the cycle of repeated incarceration among individuals with serious mental illness.
SAMHSA reports that individuals with mental illness face additional challenges upon reentering the community, including those associated with finding treatment providers, stable housing, and employment. Federal agencies have established interagency groups and other mechanisms to share information on how to address the challenges related to recidivism among offenders with serious mental illness. Examples of these information sharing mechanisms are described in appendix III. While the periods of incarceration and reentry are the focus of this review, there are other points in the criminal justice system where there are opportunities to intervene to prevent individuals with serious mental illness from becoming further involved with the system, such as during the initial law enforcement response or during court proceedings. Further, SAMHSA has identified connecting those in need of treatment to community mental health services before a behavioral health crisis begins as a way to prevent individuals with mental illness from becoming involved in the criminal justice system. The Type of Crimes Committed by Inmates with Serious Mental Illness Incarcerated by BOP and Selected States’ Departments of Corrections Vary BOP Inmates with Serious Mental Illness Were Incarcerated for Similar Crimes as BOP Inmates Without Serious Mental Illness, But Some Differences Exist About two-thirds of BOP inmates with a serious mental illness were incarcerated for four types of offenses—drug offenses (23 percent), sex offenses (18 percent), weapons and explosives offenses (17 percent), and robbery (8 percent)—as of May 27, 2017. As shown in figure 2, some differences in offenses exist between inmates with and without serious mental illness in BOP custody. Specifically, our analysis found that BOP inmates with serious mental illness were incarcerated for sex offenses, robbery, and homicide or aggravated assault at about twice the percentage of inmates without serious mental illness, and were incarcerated for drug and immigration offenses at about half or less the rate of inmates without serious mental illness. Additionally, we found some differences between BOP inmates with and without serious mental illness in the length and severity of sentences. Although a similar percentage of inmates with and without serious mental illness have life sentences (2.8 percent and 2.5 percent, respectively), a lower percentage of inmates with serious mental illness had sentences of 10 years or less (43.5 percent and 49.2 percent, respectively). About .06 percent (5 inmates) of inmates with serious mental illness and about .03 percent (52 inmates) of inmates without serious mental illness received a death sentence. See appendix I for additional information on the characteristics of BOP inmates with and without serious mental illness. The Most Common Types of Crimes Committed by Inmates with Serious Mental Illness Varied Among Selected States’ Departments of Corrections Based on our analysis of available data provided by selected states’ departments of corrections, the most common crimes committed by inmates with serious mental illness varied from state to state. The difference in types of crimes reported by states and BOP may be due to different priorities, laws, and enforcement priorities across the state and federal criminal justice systems, among other things. The federal and state governments also define serious mental illness differently, and they track different categories of crime in their respective data systems. 
The percentages and types of crimes committed by incarcerated inmates are shown in figures 3 through 5 below for three selected states’ departments of corrections. New York The New York State Department of Corrections and Community Supervision (DOCCS) cared for 2,513 inmates with serious mental illness out of a total of 51,436 inmates as of December 31, 2016. Figure 3 shows the categories of offenses committed by inmates defined by DOCCS as having serious mental illness. Three out of four inmates with serious mental illness under the care of DOCCS were incarcerated for violent crimes. According to DOCCS program descriptions, diagnostic criteria for serious mental illness are: (1) an inmate is determined by the New York State Office of Mental Health to have specified mental health diagnoses; (2) an inmate is actively suicidal or has made a recent, serious suicide attempt; or (3) an inmate is diagnosed with serious mental illness, organic brain syndrome, or a severe personality disorder that is manifested in significant functional impairment such as acts of self-harm or other behaviors that have a serious adverse effect on life or on mental or physical health. The Virginia Department of Corrections cared for 527 inmates with serious mental illness out of a total of 30,052 inmates as of September 29, 2017. Figure 4 shows the crimes committed by inmates that Virginia defined as having serious mental illness. About one quarter of the inmates with serious mental illness in Virginia committed rape, sexual assault, and other assault crimes. Virginia policy defines an inmate with serious mental illness as an offender diagnosed with a psychotic disorder, bipolar disorder, major depressive disorder, PTSD or anxiety disorder, or any diagnosed mental disorder (excluding substance use disorders) currently associated with serious impairment in psychological, cognitive, or behavioral functioning that substantially interferes with the person’s ability to meet the ordinary demands of living and requires an individualized treatment plan by a qualified mental health professional(s). The Washington Department of Corrections cared for 1,881 inmates with serious mental illness out of a total of 17,234 inmates as of June 30, 2017. Figure 5 shows the crimes committed by Washington inmates that Washington defined as having serious mental illness. About half of the inmates with serious mental illness in Washington committed assault or sex crimes. The Washington Department of Corrections defines serious mental illness as a substantial disorder of thought or mood which significantly impairs judgment, behavior, or capacity to recognize reality or cope with the ordinary demands of life within the prison environment and is manifested by substantial pain or disability. The Washington Department of Corrections’ definition does not include inmates who are substance abusers or substance dependent—including alcoholics and narcotics addicts—or persons convicted of any sex offense, who are not otherwise diagnosed as seriously mentally ill. BOP Does Not Track Costs Related to Inmates with Serious Mental Illness but BOP and Selected States Generally Track Costs Related to Treating Inmates with Mental Illness BOP Does Not Track Costs Related to Inmates with Serious Mental Illness According to BOP officials, the agency does not track costs specifically associated with inmates with serious mental illness due to resource restrictions and the administrative burden such tracking would require. 
BOP officials stated that BOP, unlike a hospital, is not structured to bill individual interactions; and noted that, generally, the correctional industry does not account for costs by tracking individual costs. BOP officials said that requiring BOP staff to gather individual cost data manually would be an extremely time consuming and burdensome process. In addition, BOP does not maintain the mental health care cost data necessary to calculate the individual inmate costs related to specific program areas (i.e., psychology and psychiatric services). BOP Tracks Some Costs Related to Treating Inmates with Mental Illness BOP tracks the costs associated with incarcerating its overall inmate population and with providing mental health care services to inmates system-wide and separately by institution. For fiscal year 2016, BOP’s institution-level data show that total incarceration costs vary by BOP institution (ranging from $15 million to over $247 million), for a number of reasons, including varying amounts of medical and mental health care available at each institution. Table 2 identifies BOP’s costs for mental health care services provided to all inmates (including inmates with serious mental illness) for fiscal year 2016, the last year for which BOP had complete data during our audit work. The costs below are the most readily available BOP-wide costs directly related to mental health care. BOP’s Psychology Services staff provides most inmate mental health services in BOP-operated institutions, including the provision of individualized psychological care. Psychotropic medication may be used to treat mental illness, although in some instances, BOP uses psychotropic medication to treat individuals with other kinds of health conditions. Residential Reentry Centers, also known as halfway houses, provide assistance to inmates nearing release, including some inmates with serious mental illness. BOP includes psychiatric treatment and services under medical care costs, but BOP does not track psychiatric costs separately. In July 2013, we reported that BOP also does not track its contractors’ costs of providing mental health services to the 13 percent of BOP inmates housed in privately managed facilities. The performance-based, fixed- price contracts that govern the operation of BOP’s privately managed facilities give flexibility to the contractors to decide how to provide mental health services. BOP tracks and maintains information on the number and types of inmate interactions with Psychology Services personnel. These interactions include clinical and non-clinical interactions between Psychology Services staff and inmates that may be crisis-oriented or routine, such as individual and group therapy. Based on our analysis of these data, in fiscal year 2016, BOP inmates with serious mental illness were more likely than other inmates to use 18 of the 20 services or programs tracked by Psychology Services. On average, we found that an inmate with serious mental illness had 9.6 clinical interventions compared to 0.24 clinical interventions for inmates without serious mental illness during fiscal year 2016. As a result, an average BOP inmate with serious mental illness was 40 times more likely to receive a clinical intervention than an average inmate without serious mental illness. BOP data do not capture the time and resources associated with any of the Psychology Services interactions; thus we cannot assign a cost value to differences between populations in receipt of these services. 
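The "40 times" comparison above follows directly from the two average intervention counts. A minimal sketch of that arithmetic, using the averages reported in the text, is shown below.

```python
# Average clinical interventions per inmate in fiscal year 2016, as reported above.
avg_with_smi = 9.6        # inmates with serious mental illness
avg_without_smi = 0.24    # inmates without serious mental illness

rate_ratio = avg_with_smi / avg_without_smi
print(f"An average inmate with serious mental illness had {rate_ratio:.0f} times "
      f"as many clinical interventions as an average inmate without serious mental illness.")
```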
Appendix IV shows the extent to which BOP’s inmate population received specific types of psychology services in fiscal year 2016. Selected States’ Departments of Corrections Provided Estimated Costs for Inmate Mental Health Care The selected state departments of corrections provided us with estimates for different types of mental health care costs, but did not identify mental health care costs specifically for inmates with serious mental illness. Additionally, the states did not provide us with the total cost to incarcerate inmates with serious mental illness. For example, officials from one state said staff did not calculate costs separately for inmates with mental illness compared to inmates without mental illness as they did not believe an accurate comparison could be made. Officials from another state said that they did not track costs of incarceration or mental health services per inmate based on whether or not an inmate has mental illness, while officials from another state said they were not able to track costs for mental health services for inmates at the individual level. The selected state departments of corrections also used different methods to determine the costs of the mental health services they provided to their inmate population. For example: Two state departments of corrections provided us with the average per-inmate costs of incarceration for a mental health treatment unit or treatment center where some inmates with serious mental illness are treated, but these per-inmate costs also included incarceration costs for inmates without serious mental illness who were housed in these facilities. Another state department of corrections provided total psychotropic medication costs for all inmates and mental health care costs per offender. Mental health care costs per offender were averaged across all offenders, not exclusively those with serious mental illness. Two other states provided total costs for one budget item related to mental illness: total mental health program spending in one state, and psychiatric care expenditures in the other state. These costs were for all inmates, not exclusively for inmates with serious mental illness. Another state department of corrections provided an estimate for average mental health care costs per inmate with mental illness, but this estimate included all inmates diagnosed as having a mental illness, not exclusively those inmates diagnosed with serious mental illness. Targeting Treatments Based on Risk and Coordinating Transition Plans of Individuals with Serious Mental Illness Are among Strategies Identified by Federal and Selected State Agencies and Studies In 2012, the Council of State Governments Justice Center developed the Criminogenic Risk and Behavioral Health Needs Framework in collaboration with DOJ’s National Institute of Corrections and Bureau of Justice Assistance, SAMHSA, and experts from correctional, mental health, and substance abuse associations. The framework is an approach to reduce recidivism and promote recovery among adults under correctional supervision with mental illness, substance use disorders, or both. It calls for correctional agencies to assess individuals’ criminogenic risk (the risk of committing future crimes), substance abuse and mental health needs. The agencies are to use the results of the assessment to target supervision and treatment resources based on these risks and needs. 
Additionally, the framework states that individuals with the highest criminogenic risks should be prioritized for treatment to achieve the greatest effect on public safety outcomes. Mental health and substance abuse treatment: There are a number of different approaches that can be tailored and combined to address an individual's mental health and substance abuse treatment needs. Examples include psychopharmacology, the use of psychotropic medication to treat symptoms of mental illness; psychotherapy, which aims to address dysfunctional thoughts, moods, or behavior through time-limited counseling; modified therapeutic community, a residential treatment program for individuals with both substance use and mental disorders that uses a peer community to address substance abuse, psychiatric symptoms, cognitive impairments, and other common impairments; peer support, in which individuals who are in recovery and have previously been involved in the criminal justice system provide support to others who are also involved in the criminal justice system; forensic intensive case management, in which a case manager coordinates services in the community to help clients sustain recovery and prevent further involvement with the criminal justice system; and Forensic Assertive Community Treatment (FACT), in which treatment is coordinated by a multidisciplinary team that may include psychiatrists, nurses, peer specialists, and probation officers. FACT teams have high staff-to-client ratios and are available around-the-clock to address clients' case management and treatment needs. To help implement the principles set forth in the framework, SAMHSA developed additional guidance in collaboration with the Council of State Governments Justice Center, the Bureau of Justice Assistance, and experts from correctional, mental health, and substance abuse associations. This guidance is for mental health, correctional, and community stakeholders, and uses the Assess, Plan, Identify, Coordinate model to provide procedural guidelines to reduce recidivism and promote recovery at different points during incarceration and reentry. Table 3 below describes selected guidelines and examples of strategies that were identified by BOP and the six selected states that correspond to each element of the model. Under the Assess element, for example, the guidance calls for screening as early in the booking/intake process as feasible and throughout the criminal justice continuum to detect substance use disorders, mental disorders, co-occurring substance use and mental disorders, and criminogenic risk, and for following up with a comprehensive assessment to guide program placement and service delivery. The assessment should include clinical needs, social support needs (e.g., housing, education, employment, and transportation), and risk factors. As examples of Bureau of Prisons (BOP) and selected state strategies, all six selected states and BOP have developed mental health assessments during the intake process. BOP officials stated that the agency is in the process of enhancing the predictive validity of its criminogenic risk assessment and expects to complete this project in 2018. One of the six selected states uses a multidisciplinary treatment team, composed of a clinician, psychiatrist, and correctional counselor, to assess the treatment and programming needs of inmates with serious mental illness. In addition to mental health treatment, the multidisciplinary team assesses whether the inmate is ready for and would benefit from institutional services such as academic and vocational education programs, work, or substance abuse counseling. These assessments occur at least annually, but may occur whenever an inmate's treatment needs have changed.
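To illustrate the prioritization principle described above, here is a minimal sketch (in Python) of how an agency might rank individuals for limited treatment slots by assessed criminogenic risk, breaking ties by clinical need. The scoring fields, scale, and tie-breaking rule are hypothetical and are not drawn from the framework or from BOP or state practice.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    inmate_id: str
    criminogenic_risk: float   # hypothetical 0-1 score from a validated risk instrument
    clinical_need: float       # hypothetical 0-1 score from mental health/substance use screening

def prioritize_for_treatment(assessments, slots):
    """Rank individuals for limited treatment slots, highest criminogenic risk first.

    Ties are broken by clinical need. This mirrors, in simplified form, the
    framework's principle that the highest-risk individuals should be
    prioritized to achieve the greatest effect on public safety outcomes.
    """
    ranked = sorted(
        assessments,
        key=lambda a: (a.criminogenic_risk, a.clinical_need),
        reverse=True,
    )
    return ranked[:slots]

# Example with made-up scores:
cohort = [
    Assessment("A", criminogenic_risk=0.82, clinical_need=0.40),
    Assessment("B", criminogenic_risk=0.35, clinical_need=0.90),
    Assessment("C", criminogenic_risk=0.82, clinical_need=0.75),
]
for a in prioritize_for_treatment(cohort, slots=2):
    print(a.inmate_id)  # prints C, then A
```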
Studies Indicate Some Promising Strategies to Reduce Recidivism Among Offenders with Mental Illness To identify strategies to reduce recidivism among offenders with mental illness during incarceration and reentry, we searched for studies that analyzed the relationship between programs and recidivism among offenders with mental illness. Our search identified about 200 publications. We used a systematic process to conduct the review, which appendix II describes in more detail. We ultimately identified 14 studies that (1) assessed correctional institution or reentry programs for offenders with mental illness implemented in the United States, (2) contained quantitative analyses of the effect of a program on recidivism, and (3) used sufficiently sound methodologies for conducting such analyses. The studies examined different kinds of recidivism outcomes (e.g., re-arrest, re-incarceration, reconviction), and a single study often examined more than one recidivism outcome. We categorize the findings for each study as follows: Statistically significant reduction in recidivism: the study reported that one or more outcome measures indicated a statistically significant reduction in recidivism among program participants; the study may also have one or more recidivism outcome measures that were not statistically significant. Statistically significant increase in recidivism: the study reported that one or more outcome measures indicated a statistically significant increase in recidivism among program participants; the study may also have one or more recidivism outcome measures that were not statistically significant. No statistically significant effect on recidivism: the study reported only outcomes indicating no statistically significant effect on recidivism among program participants. The statistical significance finding categories are based on the effect of the program as a whole and do not indicate if or how all individual elements of the programs impacted recidivism. For additional information on recidivism findings, see appendices V and VI. See appendix VII for a bibliography of the studies. The results of the literature review provide insights into factors that can affect recidivism among individuals with mental illness; however, the following considerations should be taken into account: (1) the type of mental illness of program participants varied within and across programs, making it difficult to generalize results to individuals with all types of mental illness; (2) the studies may not provide a full description of the programs; (3) not all participants may have used available program services; (4) studies assessed the programs as a whole and did not determine to what extent different elements of the programs impacted recidivism; and (5) some studies used designs that cannot control for all unobserved factors that could affect the recidivism results. Nine of the 14 studies we reviewed found statistically significant reductions in recidivism. The studies that found statistically significant reductions generally involved programs that offered multiple support services, as shown in figure 6. Providing mental health and substance abuse treatment (8 of 9 studies), case management (5 of 9 studies), release planning (5 of 9 studies), housing (6 of 9 studies), and employment assistance (4 of 9 studies) were the most common services across the programs where studies we reviewed found statistically significant reductions in recidivism.
In addition, more than half of the programs that resulted in statistically significant reductions in recidivism were coordinated with multidisciplinary stakeholders, such as mental health providers, correctional officials, substance use specialists, social workers, and peer support specialists (7 of 9 studies), and community corrections agencies, such as probation or parole offices (6 of 9 studies). However, other studies found that programs that offered multiple support services did not reduce recidivism, suggesting that other factors may also affect recidivism. Such factors may include the extent to which participants used services, as well as other unique programmatic factors, such as addressing criminogenic risk or criminal thinking. We further discuss examples of programs that did and did not reduce recidivism below. For example, study 9 examined Washington's Dangerous Mentally Ill Offender Program, in which a multidisciplinary committee determines which offenders meet the program criteria of having a mental illness and being at high risk of being dangerous to themselves or others six months prior to their release from prison. Members of the committee include representatives from the Department of Social and Health Services, Department of Corrections, law enforcement, and community mental health and substance abuse treatment agencies. Offenders designated for participation are immediately assigned a community mental health treatment provider and receive special transition planning prior to their release from prison. After release, and for up to five years, a variety of services are available to participants based on assessed needs. Services may include mental health and substance abuse treatment, housing and medical assistance, training, and other support services. Researchers found that program participants were about 42 percent less likely to be reconvicted of a new felony than similar offenders in the comparison group four years after release (recidivism rates were 28 percent and 48 percent, respectively). Two other studies (numbers 3 and 6) evaluated Colorado's Modified Therapeutic Community, a residential program that was provided both as a 12-month prison program and a 6-month reentry program after release from prison for offenders with co-occurring mental illness and substance use disorders. Participants may have participated in only the prison program, only the reentry program, or both. Both programs use a cognitive-behavioral curriculum designed to help participants recognize and respond to the interrelationship of substance abuse, mental illness, and criminality and to use strategies for symptom management. The reentry program was coordinated with the community corrections agency, which provided the residential facility and monitored medication and compliance with parole terms for both participants and the comparison group. The reentry program also assisted with housing placement and employment. Researchers found that both the prison program and the reentry program resulted in statistically significant reductions in recidivism among participants. Specifically, the studies found that at 12 months post-release, prison program participants had a 9 percent reincarceration rate versus a 33 percent rate for the comparison group that did not participate in either program; and reentry program participants had a 19 percent reincarceration rate versus 38 percent for the comparison group. (The sketch below shows how such relative reductions follow from the reported rates.)
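This is a minimal sketch (in Python) of the arithmetic behind the relative-reduction figures; the function name and rounding are ours, and the Colorado figure is simply the relative reduction implied by the reported reincarceration rates rather than a statistic reported by the study itself.

```python
def relative_reduction(program_rate, comparison_rate):
    """Percent reduction in recidivism relative to the comparison group."""
    return (comparison_rate - program_rate) / comparison_rate * 100

# Washington's Dangerous Mentally Ill Offender Program (study 9):
# 28 percent reconviction among participants vs. 48 percent in the comparison group.
print(round(relative_reduction(0.28, 0.48)))  # 42 -> "about 42 percent less likely"

# Colorado's Modified Therapeutic Community prison program (studies 3 and 6):
# 9 percent vs. 33 percent reincarceration at 12 months post-release.
print(round(relative_reduction(0.09, 0.33)))  # 73 -> relative reduction implied by the reported rates
```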
Further, researchers found that those who participated in both the prison and reentry programs experienced the greatest reductions in recidivism, with a reincarceration rate of 5 percent versus a rate of 33 percent for the comparison group that did not participate in either program, 12 months after release from prison. Studies that did not find a reduction in recidivism also provide insights into factors that may affect recidivism. For example, study 10 examined a Washington program to help enroll inmates with severe mental illness in Medicaid prior to their release from prison and found that jail and prison stays were higher among program participants than non-participants. The researchers hypothesized that receiving mental health treatment may have led to more interaction with authorities, putting participants at a greater risk of being caught violating the terms of their parole than non-participants. There was some evidence to support this: they found that most of the difference in prison days between participants and non-participants was the result of noncompliance with conditions of parole (technical violations) rather than the commission of new crimes. Further, the researchers conclude that Medicaid benefits alone are not enough to reduce arrests or keep people with severe mental illness out of jail or prison. In addition, study 11 examined Minnesota's release planning services for inmates with serious and persistent mental illness, which provided some of the same types of services as the programs that did reduce recidivism. For example, while incarcerated, inmates were provided pre-release planning to address vocational, housing, chemical dependency, psychiatric, disability, medical, medication, and transportation needs. However, this program did not result in any significant reduction in recidivism. The researchers conclude that including programming to target criminogenic risks and providing a continuum of care from the institution to the community, instead of only providing services in the institution, may make the program more effective at reducing recidivism. Agency Comments We provided a draft of this report to DOJ and HHS for review and comment. DOJ and HHS did not provide official written comments or technical comments. We are sending copies of this report to the Assistant Attorney General for Administration, Department of Justice, the Secretary of Health and Human Services, selected congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. Appendix I: Characteristics of the Federal BOP's Inmate Population with and without Serious Mental Illness, as of May 27, 2017 The population of Federal Bureau of Prisons (BOP) inmates with and without serious mental illness varies in several characteristics; see table 4. Appendix II: Objectives, Scope, and Methodology To address all three objectives, we reviewed documents, interviewed officials, and analyzed data obtained from the Federal Bureau of Prisons (BOP) and selected states' departments of corrections.
For objective 3, we also reviewed documents and interviewed officials from the Department of Justice's (DOJ) Office of Justice Programs and the Department of Health and Human Services' (HHS) Substance Abuse and Mental Health Services Administration (SAMHSA) and the National Institute of Mental Health. We selected six state departments of corrections (California, New York, Ohio, Texas, Virginia, and Washington) based upon (1) variation in the rate of incarcerated adults per capita, to obtain a mix of states with high, medium, and low rates; (2) specialist recommendations on data quality and the quality of programs for inmates with serious mental illness; and (3) variation in geography. We contacted officials from SAMHSA and the National Institute of Mental Health and representatives from correctional accreditation organizations, as well as subject matter specialists from Pew Charitable Trusts and the Treatment Advocacy Center whom we identified through previous work, and asked for their recommendations of states that, in their view, had reliable data sources on the number of incarcerated individuals with serious mental illness and the costs of providing mental health services, as well as noteworthy programming for inmates with serious mental illness. The results from these six states are not generalizable, but provide insights. For purposes of this review, we based our work on the definition(s) of serious mental illness that are provided by each of the selected federal agencies and selected states' departments of corrections. We analyzed policies and guidance at BOP and the departments of corrections in selected states to determine how, if at all, the agencies define serious mental illness and the processes used to identify incarcerated inmates with serious mental illness. To determine the population of inmates with serious mental illness for the purposes of our work, BOP operationalized its definition of serious mental illness using six criteria, covering the required degree of mental health care, mental illness diagnoses, and suicide risk. BOP defined "serious mental illness" in accordance with the agency's program statement, BOP Program Statement 5310.16, Treatment and Care of Inmates with Mental Illness, May 1, 2014. On August 15, 2017, in a memorandum for the Comptroller General of the United States from the Acting Director of BOP, BOP defined "serious mental illness" for purposes of section 14016 of the 21st Century Cures Act. BOP officials indicated that BOP's program statement and the six criteria to identify the population of inmates with serious mental illness who were incarcerated in fiscal years 2016 and 2017 would coincide with the definition for "serious mental illness" provided in the memorandum for the Comptroller General of the United States for purposes of the 21st Century Cures Act and identify an identical set of BOP inmates with "serious mental illness" for fiscal years 2016 and 2017. BOP applied these criteria to inmate information in its SENTRY, Bureau Electronic Medical Record (BEMR), and Psychology Data System (PDS) data systems to identify inmates with serious mental illness. To assess the reliability of these data, we performed electronic data testing for obvious errors in accuracy and completeness, and interviewed agency officials knowledgeable about these systems to determine the processes in place to ensure the integrity of the data.
We determined that the data were sufficiently reliable for identifying the population of BOP inmates with serious mental illness, for the purposes of this report. To determine what types of crimes were committed by inmates with serious mental illness who were incarcerated by the federal and selected state governments we analyzed available data from BOP and the departments of corrections in selected states on the most serious types of crimes for which inmates with serious mental illness were incarcerated during fiscal year 2017. BOP officials track and maintain information on the types of crimes for which inmates have been incarcerated via SENTRY. We interviewed officials from BOP’s Office of Research and Evaluation, Reentry Services Division, and Correctional Programs Division to discuss the number and types of crimes committed by BOP inmates with serious mental illness. To assess the reliability of BOP’s criminal offense data, tracked in BOP’s SENTRY system, we performed electronic data testing for obvious errors in accuracy and completeness, and interviewed agency officials from BOP’s Office of Research and Evaluation knowledgeable about BOP’s inmate tracking system to determine the processes in place to ensure the integrity of the data. We determined that the data were sufficiently reliable for the purposes of this report. We also interviewed and received written responses from officials from the selected state departments of corrections to determine the challenges they faced in recording, tracking, and maintaining data on inmates with serious mental illness, but we did not independently assess the internal controls associated with the selected states’ data systems. We provided state level data as illustrative examples of the crimes committed by inmates with serious mental illness in selected states. To identify what is known about the costs to the federal and selected state governments to incarcerate and provide mental health services to incarcerated individuals with serious mental illness, we interviewed and received written responses from officials from BOP’s Reentry Services Division, Correctional Programs Division, Administration Division, Program Review Division, and Health Services Division, and the departments of corrections in selected states to discuss and obtain documentation on the processes and systems used to track the costs to incarcerate and provide mental health services to inmates with serious mental illness, and obtain their perspectives on the challenges faced, if any, in tracking such costs. We analyzed BOP obligation data from fiscal year 2016 for the following budget categories: Psychology Services, psychotropic medications, and Residential Reentry Center mental health care costs. We included these obligation categories as indicators of BOP mental health care costs because our prior work identified that these services were used by inmates with mental illness. To assess the reliability of BOP’s obligations data, we performed electronic testing for obvious errors in accuracy and completeness, and interviewed agency officials knowledgeable about BOP’s budget to determine the processes in place to ensure the integrity of the data. We determined that the data were sufficiently reliable for the purposes of this report. In response to our inquiries, the selected states provided various data on costs to incarcerate and provide mental health care to inmates under their supervision. 
We did not independently assess the internal controls associated with the selected states’ data systems. We provided state level data as illustrative examples of the manner in which state correctional agencies tracked costs of incarceration and mental health care services for inmates under their supervision. Additionally, we obtained and analyzed BOP data from PDS on the extent to which inmates interacted with Psychology Services personnel and programs during fiscal year 2016, to calculate the average psychology services interactions (by category) per inmate during fiscal year 2016. To assess the reliability of BOP’s psychology services utilization services data, we performed electronic testing for obvious errors in accuracy and completeness, and interviewed agency officials knowledgeable about BOP’s psychology services to determine the processes in place to ensure the integrity of the data. We determined that the data were sufficiently reliable for the purposes of this report. To determine what strategies for reducing recidivism among individuals with serious mental illness have been identified by the federal and selected state governments and in literature, we obtained and analyzed documents and interviewed officials from BOP and the selected states’ corrections departments, as well as from DOJ and HHS organizations that support research, training, and programs related to mental health and recidivism. These DOJ organizations included the National Institute of Corrections, within BOP, and the Bureau of Justice Assistance and National Institute of Justice, within the Office of Justice Programs. The Department of Health and Human Services (HHS) organizations included SAMHSA and the National Institute of Mental Health. We also interviewed subject matter experts from the Council of State Governments Justice Center, Pew Charitable Trusts, and the Treatment Advocacy Center, which we selected to obtain perspectives from researchers and mental health and criminal justice organizations. Literature Review Further, we conducted a literature review of studies that have sound methodologies and use primary data collection or secondary analysis to assess the impact of programs or interventions during incarceration or reentry on recidivism among adult offenders with mental illness. To identify relevant studies, we took the following steps: 1. A GAO research librarian conducted searches of various research databases and platforms including ProQuest, MEDLINE, PsycINFO, Social SciSearch, and Scopus, among others, to identify scholarly and peer reviewed publications; government reports; and publications by trade associations, nonprofits and think tanks from 2008 through 2017, a period chosen to identify a comprehensive set of relevant and timely research. 2. We identified and reviewed selected additional studies that were cited within literature reviews, meta analyses and studies referenced on information-sharing websites, including the Council of State Governments’ “What Works in Reentry” website, National Institute of Justice’s “Crime Solutions” website, and SAMHSA’s Registry of Evidence Based Practices and Programs, and other secondary sources published from 2000 through 2017. We chose this time period to ensure we identified key older, reliable studies we may have missed by virtue of our database search timeframe. 
We identified these secondary resources during the course of our audit through the previously discussed database search, interviews with agency officials and representatives from research, criminal justice, and mental health organizations, and by reviewing websites of relevant agencies. The literature search produced about 200 publications. To select studies that were relevant to our research objective two reviewers independently assessed the abstracts for each publication using the following criteria: 1. Program studied was implemented in the U.S. 2. Study described in the publication includes original data analysis to assess the impact of a program for adults with mental illness on recidivism. For those that met the above two criteria we obtained and reviewed the full text of the publication, using the same criteria. We also further categorized the studies that met the two criteria above into the following categories: 1) studies that evaluated programs implemented during the period of incarceration or reentry, 2) studies that evaluated programs meant to divert individuals with serious mental illness from jail or prison (e.g., mental health courts) and 3) other, for those interventions that did not fall into either of these categories. As our review focused on strategies to reduce recidivism during incarceration and reentry, we excluded the studies on diversion programs (the second category). We evaluated the 31 studies that fell into the incarceration and reentry and the other categories using a data collection instrument. The data collection instrument captured information on the elements of the program, the recidivism effects, and the study’s methodology. The data collection instrument was initially filled out by one individual and then verified for accuracy by another individual; any differences in the individuals’ assessments were discussed and reconciled. To determine if the findings of the 31 studies should be included in our review of the literature, the study reviewers conferred regarding each study and assessed if: 1) the study was sufficiently relevant to the objective; and 2) the study’s methodology was sufficiently rigorous. With regard to the study’s relevance, we included studies that evaluated: a program for individuals with mental illness incarcerated in prison or jail or provided directly upon release from prison or jail; or a program for individuals with mental illness that is not provided in a prison, jail, or directly upon release from prison or jail (e.g., in a psychiatric hospital or in the community after a psychiatric hospitalization), but is hypothesized to impact criminal justice involvement and could potentially be applied in a correctional setting. With regard to methodological rigor, two GAO methodologists used generally accepted social science standards to assess the design and analytic strategy of each study to ensure analyses were sufficiently sound to support the results and conclusions. Specifically, the methodologists examined such factors as how the effects of the programs were isolated (i.e., use of comparison groups and statistical controls); the appropriateness of treatment and comparison group selection, if used; and the statistical analyses used. As a result of this process, we found 18 studies within the scope of our review that used sufficiently sound methodologies. 
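The screening steps described above can be summarized in a short sketch (in Python); the field names, record structure, and the reduction of the two-reviewer reconciliation to simple flags are ours and are meant only to illustrate the sequence of decisions, not to reproduce the data collection instrument.

```python
def meets_screening_criteria(study):
    """First-pass abstract screen: U.S. program and original recidivism analysis."""
    return study["implemented_in_us"] and study["original_recidivism_analysis"]

def categorize(study):
    """Group screened-in studies by the setting of the intervention."""
    if study["setting"] in ("incarceration", "reentry"):
        return "incarceration_or_reentry"
    if study["setting"] == "diversion":
        return "diversion"   # excluded from this review's scope
    return "other"

def in_scope(study):
    """Full-text review: keep incarceration/reentry and 'other' categories, then
    require relevance and methodological rigor as judged by the reviewers."""
    if not meets_screening_criteria(study):
        return False
    if categorize(study) == "diversion":
        return False
    return study["judged_relevant"] and study["methodologically_sound"]

# Hypothetical record illustrating the fields the screen relies on.
example = {
    "implemented_in_us": True,
    "original_recidivism_analysis": True,
    "setting": "reentry",
    "judged_relevant": True,
    "methodologically_sound": True,
}
print(in_scope(example))  # True
```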
Some studies used a randomized controlled trial methodology or quasi-experimental research designs, and some studies used non-experimental designs to compare recidivism outcomes for a single population before and after the intervention. These studies used various recidivism measures, and some used more than one measure. For each of the 18 studies, we reviewed the study’s findings related to recidivism, and categorized the findings based on statistical significance as follows: Statistically significant reduction in recidivism: the study reported that one or more outcome measures indicated a statistically significant reduction in recidivism among program participants; the study may also have one or more recidivism outcome measures that were not statistically significant. Statistically significant increase in recidivism: the study reported that one or more outcome measures indicated a statistically significant increase in recidivism among program participants; the study may also have one or more recidivism outcome measures that were not statistically significant. No statistically significant effect on recidivism: the study reported only outcomes indicating no statistically significant effect on recidivism among program participants. For a list of the 18 studies, see appendix VII. We conducted this performance audit from February 2017 through February 2018, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix III: Federal Information Sharing Mechanisms to Address Recidivism among Individuals with Serious Mental Illness Federal agencies have established interagency groups and other mechanisms, such as web-based resources, to share information related to correctional mental health and reducing recidivism among individuals with serious mental illness, among other things. Examples of these information sharing mechanisms are described in table 5 below. Appendix IV: Federal Bureau of Prisons (BOP) Psychology Services Utilization Data for Incarcerated Inmates, Fiscal Year 2016 Appendix V: Findings of Studies Examining the Recidivism Effects of Non-Correctional Programs for Individuals with Mental Illness Our literature review also identified four studies that met the criteria of (1) containing quantitative analyses of the effect of a program for individuals with mental illness on recidivism, and (2) using sufficiently sound methodologies for conducting such analyses; but were in non-correctional settings, such as in a psychiatric hospital or in the community after a psychiatric hospitalization. While the findings from these studies may not be generalizable to a correctional setting, they may offer insights on effective strategies for reducing recidivism, as many of the program participants had a history of involvement with the criminal justice system. As shown in figure 7, half (2 of 4) of the studies found statistically significant reductions in recidivism. 
The non-correctional programs that were found to reduce recidivism included some of the same elements as the correctional programs that reduced recidivism, including mental health treatment (2 of 2 studies), substance abuse treatment (1 of 2 studies), case management (2 of 2 studies), release planning (1 of 2 studies), employment assistance (2 of 2 studies), housing assistance (1 of 2 studies), and multidisciplinary coordination among mental health providers, substance use specialists, social workers, and/or peer support specialists, for example (1 of 2 studies). However, similar to the literature on correctional programs, there were also studies that found that programs that offered multiple support services did not reduce recidivism, suggesting other factors may affect recidivism; such factors may include the extent to which participants used services, as previously noted, as well as other unique programmatic factors. We further discuss examples of programs that did and did not reduce recidivism below. For example, study 15 evaluated New York’s Assisted Outpatient Treatment, a court-ordered treatment program for individuals with mental illness and a history of multiple hospitalizations or violence toward self or others. Individuals entering the program are assigned a case manager and prioritized for enhanced services that include housing and vocational services. Researchers found that the comparison group who never received Assisted Outpatient Treatment had nearly double the odds (odds ratio of 1.91) of being arrested than program participants during and shortly after the period of assignment to the program. The programs that were found not to reduce recidivism also provide some insights into factors that affect recidivism. For example, study 18 evaluated a Pennsylvania-based modified outpatient therapeutic community treatment program for individuals with co-occurring substance use disorder and emotional distress or mental illness and found that it had no significant effect on recidivism. Researchers attributed this finding to the program’s emphasis on substance use rather than on addressing criminogenic risks. Appendix VI: Literature Review Findings for Selected Recidivism Measures The 14 studies we identified through our literature review that (1) assessed correctional institution or reentry programs for offenders with mental illness implemented in the United States (2) contained quantitative analyses of the effect of a program on recidivism, and (3) used sufficiently sound methodologies for conducting such analyses, used a number of different recidivism outcome measures, and some assessed more than one recidivism outcome measure. Tables 7, 8, and 9 below show the recidivism results for studies that measured reincarceration rates, reconviction rates, and number of days in jail or prison, which were reported by multiple studies. These do not represent all recidivism findings; some studies used other recidivism measures such as the number of arrests or convictions, odds ratio or hazard ratio of reincarceration, and self-reported criminal activity. Appendix VII: Bibliography This bibliography contains citations for the 18 studies we reviewed regarding programs for individuals with mental illness that may affect recidivism. (See appendix II for more information about how we identified these studies.) Following the citation we include the study numbers that we used to reference the study earlier in this report. Burke, C. and S. Keaton. 
San Diego County’s Connections Program Board of Corrections Final Report. San Diego, CA: SANDAG, June 2004. (Study 1) Chandler, D.W. and G. Spicer. “Integrated Treatment for Jail Recidivists with Co-occuring Psychiatric and Substance Use Disorders.” Community Mental Health Journal, vol. 42, no. 4 (2006):405-425. (Study 2) Compton, M.T., M.E. Kelley, A. Pope, K. Smith, B. Broussard, T.A. Reed, J.A. DiPolito, B.G. Druss, C. Li, and N.L. Haynes. “Opening Doors to Recovery: Recidivism and Recovery Among Persons With Serious Mental Illnesses and Repeated Hospitalizations.” Psychiatric Services, vol. 62, no. 2 (2016): 169-175. (Study 17) Cusack, K.J., J.P. Morrissey, G.S. Cuddleback, A. Prins, and D.M. Williams. “Criminal Justice Involvement, Behavioral Health Service Use, and Costs of Forensic Assertive Community Treatment: A Randomized Trial.” Community Mental Health Journal, vol. 46 (2010): 356-363. (Study 4) Duwe, G. “Does Release Planning for Serious and Persistent Mental Illness Offenders Reduce Recidivism? Results From an Outcome Evaluation.” Journal of Offender Rehabilitation, vol. 54, no. 1 (2015): 19- 36. (Study 11) Link, B.G., M.W. Epperson, B.E. Perron, D.M. Castille, and L.H. Yang. “Arrest Outcomes Associated with Outpatient Commitment in New York State.” Psychiatric Services, vol. 62, no. 5 (2011): 504-508. (Study 15) Mayfield, J. The Dangerous Mentally Ill Offender Program: Four-Year Felony Recidivism and Cost Effectiveness. Olympia, WA: Washington State Institute for Public Policy, February 2009. (Study 9) Morrissey, J.P., G.S. Cuddeback, A.E. Cuellar, and H.J. Steadman. “The Role of Medicaid Enrollment and Outpatient Service Use in Jail Recidivism Among Persons with Severe Mental Illness.” Psychiatric Services, vol. 58, no. 6 (2007):794-801. (Study 5) Morrissey, J.P., M.E. Domino, and G.S. Cuddeback. “Expedited Medicaid Enrollment, Mental Health Service Use, and Criminal Recidivism Among Released Prisoners With Severe Mental Illness.” Psychiatric Services, vol. 67, no. 8 (2016): 842-849. (Study 10) Sacks, J.Y., K. McKendrick, and Z. Hamilton. “A Randomized Clinical Trial of a Therapeutic Community Treatment for Female Inmates: Outcomes at 6 and 12 Months After Prison Release.” Journal of Addictive Diseases, vol. 31, no. 3 (2012): 258-269. (Study 7) Sacks, S., M. Chaple, J.Y. Sacks, K. McKendrick, C.M. Cleland. “Randomized Trial of a Reentry Modified Therapeutic Community for Offenders with Co-Occuring Disorders: Crime Outcomes.” Journal of Substance Abuse Treatment, vol. 42 (2012): 247-259. (Study 3) Sacks, S, K. McKendrick, J.Y. Sacks, S. Banks, M. Harle. “Enhanced Outpatient Treatment for Co-Occurring Disorders: Main Outcomes.” Journal of Substance Abuse Treatment, vol. 34 (2008): 48-60. (Study 18) Sacks, S., J.Y. Sacks, K. McKendrick, S. Banks, and J. Stommel. “Modified TC for MICA Offenders: Crime Outcomes.” Behavioral Sciences and the Law, vol. 22 (2004): 477-501. (Study 6) Taylor, N. An Analysis of the Effectiveness of Santa Clara County’s Mentally Ill Offender Crime Reduction Program. Anne Arbor, MI: ProQuest Information and Learning Company, May 2005. (Study 14) Theurer, G. and D. Lovell. “Recidivism of Offenders with Mental Illness Released from Prison to an Intensive Community Treatment Program.” Journal of Offender Rehabilitation, vol. 47, no. 4 (2008): 385-406. (Study 8) Van Stelle, K.R., and D.P. Moberg. “Outcome Data for MICA Clients After Participation in an Institutional Therapeutic Community.” Journal of Offender Rehabilitation, vol. 39 no.1 (2004): 37-62. 
(Study 12) Yates, K.F., M. Kunz, A. Khan, J. Volavka, and S. Rabinowitz. “Psychiatric Patients with Histories of Aggression and Crime Five Years after Discharge from a Cognitive-Behavioral Program.” The Journal of Forensic Psychiatry and Psychology, vol. 21, no. 2 (2010):167-188. (Study 16) Zlotnick, C., J. Johnson, and L.M. Najavits. “Randomized Controlled Pilot Study of Cognitive-Behavioral Therapy in a Sample of Incarcerated Women with Substance Use Disorder and PTSD.” Behavior Therapy, vol. 40 (2009): 325-336. (Study 13) Appendix VIII: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact above, Tom Jessor (Assistant Director); Frederick Lyles, Jr. (Analyst-in-Charge); Pedro Almoguera; David Blanding, Jr.; Billy Commons, III; Thomas C. Corless; Dominick Dale; Michele Fejfar; Eric Hauswirth; Valerie Kasindi; Heather May; Leia J. Dickerson; Sam Portnow; and Cynthia Saunders all made key contributions to this report.
Why GAO Did This Study In 2016, SAMHSA estimated that about 10.4 million adults in the United States suffered from a serious mental illness, which generally includes conditions such as schizophrenia and bipolar disorder. As of May 27, 2017, BOP was responsible for overseeing 187,910 inmates, and 7,831 of these inmates were considered to have a serious mental illness. Research has shown that inmates with serious mental illness are more likely to recidivate than those without. The 21st Century Cures Act directed GAO to report on the prevalence of crimes committed by persons with serious mental illness and the costs to treat these offenders—including identifying strategies for reducing recidivism among these individuals. This report discusses (1) what is known about crimes committed by inmates with serious mental illness incarcerated by the federal and selected state governments; (2) what is known about the costs to the federal and selected state governments to incarcerate and provide mental health care services to those individuals; and (3) what strategies the federal and selected state governments and studies have identified for reducing recidivism among individuals with serious mental illness. GAO selected six states that varied in their adult incarceration rates and provided geographic diversity. At BOP and the six states' departments of corrections, GAO analyzed criminal offense and incarceration and mental health care cost data and interviewed officials about strategies for reducing recidivism for inmates with serious mental illness. The results from these six states are not generalizable, but provide insights. GAO also reviewed studies that analyzed the relationship between various programs and recidivism among offenders with mental illness. What GAO Found About two-thirds of inmates with a serious mental illness in the Department of Justice's (DOJ) Federal Bureau of Prisons (BOP) were incarcerated for four types of offenses—drug (23 percent), sex offenses (18 percent), weapons and explosives (17 percent), and robbery (8 percent)—as of May 27, 2017. GAO's analysis found that BOP inmates with serious mental illness were incarcerated for sex offenses, robbery, and homicide/aggravated assault at about twice the rate of inmates without serious mental illness, and were incarcerated for drug and immigration offenses at about half the rate or less of inmates without serious mental illness. GAO also analyzed available data on three selected states' inmate populations; the most common crimes committed by inmates with serious mental illness varied from state to state due to differing law enforcement priorities, definitions of serious mental illness, and methods of tracking categories of crime in their respective data systems. BOP does not track costs related to incarcerating or providing mental health care services to inmates with serious mental illness, but BOP and selected states generally track these costs for all inmates. BOP does not track costs for inmates with serious mental illness in part because it does not track costs for individual inmates due to resource restrictions and the administrative burden such tracking would require. BOP does track costs associated with mental health care services system-wide and by institution. System-wide, for fiscal year 2016, BOP spent about $72 million on psychology services, $5.6 million on psychotropic drugs, and $4.1 million on mental health care in residential reentry centers.
The six state departments of corrections each used different methods and provided GAO with estimates for different types of mental health care costs. For example, two states provided average per-inmate costs of incarceration for mental health treatment units where some inmates with serious mental illness are treated; however, these included costs for inmates without serious mental illness housed in those units. DOJ, the Department of Health and Human Services' Substance Abuse and Mental Health Services Administration (SAMHSA), and criminal justice and mental health experts have developed a framework to reduce recidivism among adults with mental illness. The framework calls for correctional agencies to assess individuals' recidivism risk and their substance abuse and mental health needs, and to target treatment to those with the highest risk of reoffending. To help implement this framework, SAMHSA, in collaboration with DOJ and other experts, developed guidance for mental health, correctional, and community stakeholders on (1) assessing risk and clinical needs, (2) planning treatment in custody and upon reentry based on risks and needs, (3) identifying post-release services, and (4) coordinating with community-based providers to avoid gaps in care. BOP and the six states also identified strategies for reducing recidivism consistent with this guidance, such as memoranda of understanding between correctional and mental health agencies to coordinate care. Further, GAO's literature review found that programs that reduced recidivism among offenders with mental illness generally offered multiple support services, such as mental health and substance abuse treatment, case management, and housing assistance.
Background JWST is envisioned to be a large deployable space telescope, optimized for infrared observations, and the scientific successor to the aging Hubble Space Telescope. JWST is being designed for a 5-year mission to find the first stars, study planets in other solar systems to search for the building blocks of life elsewhere in the universe, and trace the evolution of galaxies from their beginning to their current formation. JWST is intended to operate in an orbit approximately 1.5 million kilometers—or 1 million miles—from the Earth. With a 6.5-meter primary mirror, JWST is expected to operate at about 100 times the sensitivity of the Hubble Space Telescope. JWST’s science instruments are designed to observe very faint infrared sources and therefore are required to operate at extremely cold temperatures. To help keep these instruments cold, a multi-layered tennis court-sized sunshield is being developed to protect the mirrors and instruments from the sun’s heat. The JWST project is divided into three major segments: the observatory segment, the ground segment, and the launch segment. When complete, the observatory segment of JWST is to include several elements (Optical Telescope Element (OTE), Integrated Science Instrument Module (ISIM), and spacecraft) and major subsystems (sunshield and cryocooler). The hardware configuration referred to as OTIS was created when the Optical Telescope Element and the Integrated Science Instrument Module were integrated. Additionally, JWST is dependent on software to deploy and control various components of the telescope, and to collect and transmit data back to Earth. The elements, major subsystems, and software are being developed through a mixture of NASA, contractor, and international partner efforts. See figure 1 for the elements and major subsystems of JWST and appendix 1 for more details, including a description of the elements, major subsystems, and JWST’s instruments. For the majority of work remaining, the JWST project is relying on two contractors: Northrop Grumman and the Association of Universities for Research in Astronomy’s Space Telescope Science Institute. Northrop Grumman plays the largest role, developing the sunshield, the Optical Telescope Element, the spacecraft, and the Mid-Infrared Instrument’s cryocooler, in addition to integrating and testing the observatory. Space Telescope Science Institute’s role includes soliciting and evaluating research proposals from the scientific community, and receiving and storing the scientific data collected, both of which are services that it currently provides for the Hubble Space Telescope. Additionally, the Institute is developing the ground system that manages and controls the telescope’s observations and will operate the observatory on behalf of NASA. JWST will be launched on an Ariane 5 rocket, provided by the European Space Agency. JWST depends on 22 deployment events—more than a typical science mission—to prepare the observatory for normal operations on orbit. For example, the sunshield and primary mirror are designed to fold and stow for launch and deploy once in space. Due to its large size, it is nearly impossible to perform deployment tests of the fully assembled observatory, so the verification of deployment elements is accomplished by a combination of lower level component tests in flight-simulated environments; ambient deployment tests for assembly, element, and observatory levels; and detailed analysis and simulations at various levels of assembly. 
Schedule and Cost Reserves for NASA Projects We have previously found that complex development efforts like JWST face numerous risks and unforeseen technical challenges, which can often become apparent during integration and testing. To accommodate unanticipated challenges and manage risk, projects reserve extra time in their schedules, which is referred to as schedule reserve, and extra funds in their budgets, which is referred to as cost reserve. Schedule reserve is allocated to specific activities, elements, and major subsystems in the event of delays or to address unforeseen risks. Each JWST element and major subsystem has been allocated schedule reserve. When an element or major subsystem exhausts schedule reserve, it may begin to affect schedule reserve on other elements or major subsystems whose progress is dependent on prior work being finished for its activities to proceed. Cost reserves are additional funds within the project manager’s budget that can be used to address unanticipated issues for any element or major subsystem, and are used to mitigate issues during the development of a project. For example, cost reserves can be used to buy additional materials to replace a component or, if a project needs to preserve schedule reserve, reserves can be used to accelerate work by adding shifts to expedite manufacturing. NASA’s Goddard Space Flight Center— the NASA center with responsibility for managing JWST—has issued procedures that establish the requirements for cost and schedule reserves. In addition to cost reserves held by the project manager, management reserves are funds held by the contractors that allow them to manage program risks and to address unanticipated cost increases throughout development. We have previously found that management reserves should contain 10 percent or more of the cost to complete a project and are generally used to address various issues tied to the contract’s scope. JWST’s Use of Award Fees NASA’s cost-plus-award-fee contract with Northrop Grumman has spanned almost two decades, during which there have been significant variances in contractor performance. Cost-reimbursement contracts are suitable when uncertainties in the scope of work or cost of services prevent the use of contract types in which prices are fixed, known as fixed-price contracts. Award fee contracts provide contractors the opportunity to obtain monetary incentives for performance in designated areas identified in the award fee plan. Award fees may be used when key elements of performance cannot be defined objectively, and, as such, require the project officials’ judgment to assess contractor performance. For JWST’s contract with Northrop Grumman, these areas include cost, schedule, technical, and business management and are established in the contracts’ performance evaluation plans. In December 2013, the JWST program and the contractor agreed to replace a $56 million on-orbit incentive—incentives based on successful performance in space—with award fees. The award fees are to incentivize cost and schedule performance during development. This shift increased the available award fee for the entire contract to almost a quarter of a billion dollars. According to officials, restructuring the incentives gave NASA more flexibility to incentivize the contractor to prioritize the cost and schedule performance over exceeding technical requirements. 
In December 2014, we found that NASA award fee letters for award fee periods from February 2013 to March 2014 indicated that the contractor had been responsive to interim award fee period criteria provided by NASA and that contractor officials confirmed that they pay close attention to this guidance in prioritizing their work. For example, Northrop Grumman officials reported that they had made specific changes to improve communications in direct response to this guidance, which was validated by award fee letters from NASA. History of Cost Growth and Schedule Delays The JWST program has a history of significant schedule delays and increases to project costs, which resulted in replans in 2011 and 2018. Before 2011, early technical and management challenges, contractor performance issues, low levels of cost reserves, and poorly phased funding caused the JWST program to delay work. As a result, the program experienced schedule overruns, including launch delays, and cost growth. The JWST program underwent a replan in September 2011 and a rebaseline in November of that same year, and Congress placed an $8 billion cap on the formulation and development costs for the project. On the basis of the replan, NASA rebaselined JWST with a life-cycle cost estimate of $8.835 billion, which included additional money for operations and a planned launch in October 2018. Congress also required that NASA treat any cost increase above the cap according to procedures established for projects that exceed their development cost estimates by at least 30 percent. This process is known as a rebaseline. Congress must authorize continuation of the JWST program if formulation and development costs increase over the $8 billion cost cap. In June 2018, after a series of launch delay announcements due to technical and workmanship issues identified during spacecraft element integration, NASA notified Congress that it had again revised the JWST program's cost and schedule estimates. NASA estimated that it now required $828 million in additional resources and 29 more months beyond the estimates agreed to in the 2011 rebaseline. As of November 2018, NASA had funding to continue to execute the program and was waiting to see if Congress would authorize the program's continuation and appropriate funds for the program in fiscal year 2019. Figure 2 shows the project's history of changes to its cost or schedule and key findings from two external independent review teams and our prior work. As discussed above, various technical and workmanship errors drove some of the more recent delays. Examples of some of the workmanship issues we found in the past include: In October 2015, the project reported that a piece of flight hardware for the sunshield's mid-boom assembly was irreparably damaged during vacuum sealing in preparation for shipping. The damaged piece had to be remanufactured, which consumed 3 weeks of schedule reserve. In April 2017, a contractor technician applied too much voltage and irreparably damaged the spacecraft's pressure transducers, components of the propulsion system that help monitor spacecraft fuel levels. The transducers had to be replaced and reattached in a complicated welding process. At the same time, Northrop Grumman also addressed several challenges with integrating sunshield hardware. Together, these issues consumed another 1.25 months of schedule reserve. In May 2017, some of the valves in the spacecraft propulsion system's thruster modules were leaking beyond permissible levels.
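To make the award fee mechanics more concrete, the following is a purely illustrative sketch (in Python) of how a single period's fee might be computed from weighted evaluation scores. The evaluation areas echo the ones named in the contract's performance evaluation plan (cost, schedule, technical, and business management), but the weights, scores, fee pool, and the linear score-to-fee mapping are invented for illustration and do not represent NASA's actual plan for this contract.

```python
# Hypothetical award fee calculation for a single evaluation period.
weights = {"cost": 0.30, "schedule": 0.30, "technical": 0.25, "business_management": 0.15}
scores = {"cost": 70, "schedule": 65, "technical": 90, "business_management": 85}  # 0-100, hypothetical

period_fee_pool = 10_000_000  # hypothetical dollars available in this period

weighted_score = sum(weights[a] * scores[a] for a in weights)  # composite score on a 0-100 scale
fee_earned = period_fee_pool * weighted_score / 100            # linear mapping, for illustration only

print(f"Composite score: {weighted_score:.1f}")  # 75.8 with the scores above
print(f"Fee earned: ${fee_earned:,.0f}")         # $7,575,000 with the scores above
```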
Northrop Grumman determined that the most likely cause was the use of an improper cleaning solution, and the thruster modules were returned to the vendor for investigation and refurbishment. Reattaching the refurbished modules was expected to be complete by February 2018, but was delayed by one month when a technician applied too much voltage to one of the components in a recently refurbished thruster module. NASA and Northrop Grumman reported that resolving the thruster module issue resulted in a 2-month delay to the project's overall schedule. In October 2017, when conducting folding and deployment exercises on the sunshield, Northrop Grumman discovered several tears in the sunshield membrane layers. According to program officials, a workmanship error contributed to the tears. The tears resulted in another 2-month delay to the project's overall schedule. In addition, some first-time efforts took longer than planned. For example, in fall 2017, the project determined that it would need to use up to 3 months of schedule reserve based upon lessons learned from the contractor's initial sunshield folding operation. This first deployment, or unfolding, took 30 days longer than planned. The sunshield has since undergone another deployment, and will be deployed twice more before launch. The Independent Review Board (IRB) took into account these technical and workmanship errors, as well as other considerations, when it analyzed the project's organizational and technical issues. The board's final report, issued in May 2018, included 31 recommendations that addressed a range of factors. For example, the IRB recommended that the project: Conduct an audit to identify potential embedded design flaws—problems that have not been detected through analysis, inspection, or test activities and pose a significant risk to JWST schedule, cost, and mission success; Establish corrective actions to detect and correct human mistakes during integration and test; Establish a coherent, agreed-upon, and factual narrative on project status and communicate that status regularly to all relevant stakeholders; and Augment integration and test staff to ensure adequate long-term staffing and improve employee morale. In its response to the IRB's report, NASA stated that it accepted the report's recommendations and had already begun implementing actions in response to many of them. Further, project officials told us that some of the actions were underway before the IRB completed its review. NASA Revised Schedule and Cost Commitments to Reflect Prior and Ongoing Technical Challenges To develop a new schedule for JWST's 2018 replan, NASA took into account the remaining integration and test work and added time to the schedule to address threats that were not yet mitigated. This included 5.5 months to address an anomaly that occurred on the sunshield's cover in 2018. The project also replenished its schedule reserves—which we found in February 2018 had been consumed—so that they now exceed the recommended levels. Both the project and the IRB conducted schedule risk assessments that produced similar launch dates. The project relied on the replan schedule to determine its remaining costs because the workforce necessary to complete the observatory represents most of the remaining cost. Following is additional information on the schedule and cost considerations.
Schedule: JWST’s revised launch readiness date of March 2021 reflects a consideration of the hardware integration and test challenges the project has experienced, including adding time to: Add snag guards for the membrane tensioning system—which helps deploy the sunshield and maintain its correct shape—to prevent excess cable from snagging, Repair tears of the sunshield membrane, Deploy, fold, and stow the sunshield, and Mitigate contractor schedule threats. In addition, the project added extra time to the schedule to complete repairs to the membrane cover assembly, which did not perform as expected during acoustics testing in April 2018. The membrane cover assembly shown in figure 3 is used to cover the sunshield membrane when in the stowed position to provide thermal protection during launch. After the anomaly occurred, the project halted spacecraft element testing, investigated the anomaly, and found that the fasteners had come loose due to a design change made to prevent the fasteners from damaging the sunshield membrane. The design change caused the nuts to not lock properly. According to project officials, due to the design of the membrane cover assembly, the project was not able to conduct flight-like, stand- alone testing on the cover prior to spacecraft element testing. As a result, the project did not discover the design issue until the hardware came loose while installed on the spacecraft element. The project determined that the repairs would take approximately 5.5 months. The project’s replan also reflected schedule reserves above the level required by Goddard Space Flight Center policy, which would have been approximately 5 months at that time. The new schedule includes a total of 293 days or 9.6 months of schedule reserves leading up to its committed launch readiness date of March 2021. NASA approved a JWST launch date of March 2021, but the project and the contractor are working toward a launch date in November 2020. Figure 4 shows the project’s new schedule following the 2018 replan, including how the project distributed its schedule reserves through different integration and test activities. As part of its May 2018 study, the IRB reviewed the project’s schedule and recommended a launch date of March 2021, which was subsequently reflected in NASA’s new schedule for the program. In reviewing the project’s schedule, the IRB found that the project had robust scheduling practices for ensuring that the schedule represented a complete and dynamic network of tasks that could respond automatically to changes. This schedule also passed a standard health check with minimal errors indicating that it was well constructed. However, the IRB noted that this schedule does not account for certain types of unknown risks to the program such as integration and test errors which can take many months to resolve, or the potential need to remove a science instrument from the observatory, which can have about a 1 year impact. As a result, the program could experience additional delays if a risk of this magnitude is realized. Cost: The project’s new $9.7 billion life-cycle cost estimate is principally driven by the schedule extension, which requires keeping the contractor’s workforce to complete integration and test longer than expected. Specifically, the project determined that almost all of the hardware had been delivered and the remaining cost was predominantly the cost for the workforce necessary to complete and test the observatory. 
For the past 3 years, we have reported that Northrop Grumman's ability to decrease its workforce was central to JWST's capacity to meet its long-term cost commitments. However, Northrop Grumman's actual workforce continued to exceed its projections. This was because it needed to maintain higher workforce levels due to technical challenges, including problems with spacecraft and sunshield integration and test. It also needed to keep specialized engineers available when needed during final assembly and test activities. In developing the cost estimate supporting the 2018 replan, the project used a Northrop Grumman workforce profile that is higher than previous projections because Northrop Grumman now plans to maintain personnel longer during integration and test. According to project officials, the planned reduction of Northrop Grumman's workforce is now more gradual and conservative than the prior plan. For example, the Northrop Grumman workforce will not start to significantly decline until the observatory ships to the launch site, which is expected to occur in August 2020. As shown in figure 5, the JWST workforce assembling the observatory declines, while the government and contractor workforce necessary to manage and operate the observatory remains after the internal launch readiness date of November 2020. As seen in the above figure, the workforce of the Space Telescope Science Institute, the contractor responsible for operating JWST, will remain generally flat from fiscal years 2021 through 2026, when it operates the observatory. The NASA civil service and support contractor workforce will remain relatively flat through the November 2020 launch date and then decline. In addition, the new cost estimate took into account $61 million for implementing the IRB recommendations and mission success enhancements, funding for project cost reserves, and operations costs. In June 2018, the NASA associate administrator—who is the project's decision authority—approved the project to proceed with its replan, with a March 2021 launch date and $9.7 billion in life-cycle costs, based on the Agency Program Management Council review and replan documents. The associate administrator did not require the project to conduct an updated Joint Cost and Schedule Confidence Level (JCL) analysis for this replan. A JCL is an integrated analysis of a project's cost, schedule, risk, and uncertainty whose result indicates the probability of a project meeting its cost and schedule targets. NASA policy states that a JCL should be recalculated and approved as a part of the rebaselining approval process, but it is not required. In its replan decision memo, NASA's associate administrator explained that he did not require the project to update the JCL because project costs are almost entirely related to the workforce and most of the remaining planned activities will be performed generally in sequence. Therefore, according to NASA's associate administrator, the total cost would be driven almost entirely by the schedule because the workforce levels will remain the same through delivery of the observatory. Both the project and independent estimators used multiple schedule estimating methods to analyze the schedule for the remaining work, and NASA's associate administrator said these analyses returned consistent, high-confidence launch dates. 
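To illustrate the kind of calculation a JCL involves, the following is a minimal sketch, not NASA's actual model, of a Monte Carlo analysis in which the remaining integration and test phases are drawn from assumed duration ranges and the remaining cost is treated as workforce-driven, consistent with the associate administrator's rationale described above. All task names, duration ranges, burn rates, and targets are illustrative assumptions, not JWST data.

```python
# Minimal, illustrative sketch of a joint cost and schedule confidence
# level (JCL) style calculation. All task names, duration ranges, rates,
# and targets below are hypothetical assumptions for illustration; they
# are not NASA's JCL model or actual JWST data.
import random

random.seed(0)

# Remaining integration and test phases, in months: (best, most likely, worst)
tasks = [
    ("OTIS integration and test",        (4, 6, 10)),
    ("Spacecraft element integration",   (6, 9, 15)),
    ("Observatory integration and test", (8, 11, 18)),
]

MONTHLY_BURN = 55.0    # assumed workforce cost in $ millions per month
SCHEDULE_TARGET = 29   # assumed months remaining to the committed launch date
COST_TARGET = 1600.0   # assumed remaining cost commitment in $ millions

def one_trial():
    # Phases run largely in sequence, so the trial duration is the sum of draws.
    months = sum(random.triangular(low, high, mode)
                 for _, (low, mode, high) in tasks)
    # With a roughly flat workforce, remaining cost scales with duration.
    return months, months * MONTHLY_BURN

trials = [one_trial() for _ in range(20_000)]
schedule_conf = sum(m <= SCHEDULE_TARGET for m, _ in trials) / len(trials)
joint_conf = sum(m <= SCHEDULE_TARGET and c <= COST_TARGET
                 for m, c in trials) / len(trials)

print(f"Schedule-only confidence: {schedule_conf:.0%}")
print(f"Joint cost and schedule confidence (JCL): {joint_conf:.0%}")
```

The sketch also shows why a schedule-only risk analysis answers a narrower question than a JCL: it reports the probability of meeting the schedule commitment but not the joint probability of meeting both the cost and schedule commitments.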
Project Has Used Some Schedule Reserve from Its 2018 Replanned Schedule with Challenging Integration and Test Work Remaining The project’s ability to execute to its new schedule will be tested as it progresses through the remainder of challenging integration and test work. The project has yet to complete three of five integration and test phases. The remaining phases include integration and test of OTIS, the spacecraft element, and the observatory. Our prior work has shown that integration and testing is the phase in which problems are most likely to be found and schedules tend to slip. For a uniquely complex project such as JWST, this risk is magnified as events start to become more sequential in nature. As a result, it will continue to become more difficult for the project to avoid schedule delays by mitigating issues in parallel. As of November 2018, the project is about a week behind its replanned schedule because repairs on the membrane cover assembly took longer than planned. Completing the membrane cover assembly repairs and returning the spacecraft to vibration testing was a key event for the project to demonstrate that it could execute to its new schedule. When the project developed its 2018 replanned schedule, it had planned to complete the membrane cover assembly repairs and reinstall the assembly onto the sunshield and restart spacecraft element integration and test activities by November 6, 2018. The project allocated 4 weeks of schedule reserves specifically for these repairs. However, the membrane cover repairs proved more difficult than anticipated. For example, the program had to address unanticipated technical challenges on the membrane cover assemblies, including repairing tears and pin holes in the covers discovered after the covers were removed. The project also had to allot time to install bumpers, which are kapton tubes, to the assembly to protect the composite material on a sunshield structure during launch. The project identified the need to add the bumpers during subassembly vibration testing. As a result, as of November 2018, the project had used about 4.5 weeks of schedule reserves to cover delays associated with these activities. The use of reserves beyond what the project had planned for the repairs pushed the restart of spacecraft element integration and test activities out about a week to November 14, 2018. Figure 6 compares the project’s initial membrane cover assembly schedule in June 2018 to the actual schedule in November 2018. While the project repaired the membrane cover assembly, it also used this time to conduct risk mitigation activities on OTIS. For example, the project worked to mitigate a design issue on the frill connections. The frill is composed of a single layer of blankets placed around the outside of the primary mirror used to block stray light (see figure 7). A combination of modeling and inspections revealed that most of the frill sections did not have as much slack as expected at the near-absolute zero cryogenic temperatures of space. This caused shrinkage that put stress on the edges of the outer ring of mirrors, which could affect the stability of the optical mirror and image quality. The project loosened these outer connections by adding a ring to the connecting points. As of November 2018, project officials said they were in the process of verifying the fix through inspections. 
Examples of technical issues and risks that the project continues to face during the remaining phases of integration and test include: The project is working to mitigate a design issue on the sunshield membrane tensioning system—which helps deploy the sunshield and maintain its correct shape. In our February 2018 report, we found that Northrop Grumman was planning to modify the design of the membrane tensioning system after one of the sunshield's six membrane tensioning systems experienced a snag during folding and deployment exercises on the sunshield in October 2017. The project and Northrop Grumman determined that a design modification was necessary to fully mitigate the issue, which includes modifying clips used to progressively release the cable tension and adding guards to control the excess cable. The project identified a concern that, when the fairing separates to release the JWST observatory, the depressurization of air trapped in the folded sunshield membrane may overly stress the membrane material. The project is working with Arianespace—the company responsible for operating JWST's launch vehicle—and experts at the Kennedy Space Center to resolve this concern. Officials estimated that a design solution would be in place in mid-2019. However, if the project determines that it needs to reinforce the membrane covers to survive excessive residual pressure as it works on this design solution, a multi-month schedule delay could occur. As of November 2018, the project has mitigated 21 of its 47 hardware and software risks to acceptable levels, and reviews these risks monthly for any changes that might affect the continued acceptability of the risk. Five of these 21 risks are related to the project's more than 300 potential single point failures—several of which are related to the deployment of the sunshield. The project is actively working to mitigate the remaining 26 risks to acceptable levels or closure prior to launch. The project also has several first-time and challenging integration and test activities remaining. For example, the project must integrate OTIS and the completed spacecraft element and test the full observatory in the final integration phase, which includes another set of challenging environmental tests. See figure 8 for an image of OTIS and the spacecraft element prior to being integrated. As previously discussed, the project also has two remaining deployments of the sunshield, and prior deployments have taken longer than planned. To help mitigate the risks associated with the deployments, the project added time for deployments in the 2018 replanned schedule based on lessons learned from prior deployments. The two remaining deployments are to occur after spacecraft element integration and test and again after observatory integration and test. The JWST project office is required to evaluate whether the project can complete development within its revised cost and schedule commitments at its next major review—the system integration review—planned for August 2019. This review is to occur after the project has completed two major tasks—OTIS and spacecraft element integration and test. The review is to evaluate whether the project (1) is ready to enter observatory integration and test, and (2) can complete remaining project development with acceptable risk and within its cost and schedule constraints. NASA guidance does not require projects to conduct a JCL at this review. 
However, project officials said that they plan to conduct another schedule risk analysis in the future. They do not intend to complete a new JCL for the same reasons they did not complete one for the 2018 replan—because costs are almost entirely related to the workforce and can be derived from a schedule that takes into account known risk. While not required, conducting a JCL prior to the system integration review would inform NASA about the probability of meeting both its cost and schedule commitments. If the project proceeds with its plan to conduct only a schedule risk analysis, NASA would be provided only with an updated probability of meeting its schedule commitments. Our cost estimating best practices recommend that cost estimates be updated to reflect changes to a program and kept current as it moves through milestones and as new risks emerge. In addition, government and industry cost and schedule experts we spoke with noted that integration and testing is a critical time for a project when problems can develop. These experts told us that completing a JCL is a best practice for analyzing major risks at the most uncertain part of project execution. Conducting a JCL at system integration review—a review that occurs during the riskiest phase of development, the integration and test phase—would allow the project to update its assumptions of risk and uncertainty based on its experiences in OTIS and spacecraft element integration and test. The project could then determine how those updated assumptions affect overall cost and schedule for the JWST project. As noted above, the project has many risks to mitigate, technical challenges to overcome, and challenging test events to complete, which could affect the project's schedule and risk posture. Further, the project has an established history of significant cost growth and schedule delays. In its June 2018 letter notifying an appropriate congressional committee of its updated cost and schedule commitments, NASA acknowledged that recent cost growth for the project will likely impact other science missions. Conducting a JCL at system integration review would provide NASA and Congress with critical information for making informed resource decisions on the JWST project and its affordability within NASA's portfolio of projects more broadly. NASA Is Augmenting Oversight of Contractor and Project Performance, and Identified the JWST Project Manager as Responsible for Sustaining Changes NASA has taken steps to augment oversight of the contractor and project following the discovery of the embedded design flaws and workmanship errors that contributed to the project's most recent schedule delays and cost increases. See table 1 for examples of changes NASA has made to contractor and project oversight—some of which NASA self-identified and others of which were in response to IRB recommendations. The IRB made 31 recommendations that ranged from improving employee morale to improving security during the transport of JWST to its launch site. NASA has also used award fees to try to incentivize Northrop Grumman to improve its performance. In a July 2018 hearing on the JWST program before the House Science, Space, and Technology Committee, Administrator Bridenstine stated that NASA had reduced the available award fee through commissioning by $28 million out of a total of about $60 million. Northrop Grumman also did not earn its full award fee in the two most recent periods of performance that NASA assessed. 
For the performance period of April 1, 2017 to September 30, 2017, Northrop Grumman earned approximately 56 percent of the available award fee. Reasons that NASA cited for its evaluation of award fees in this period included workmanship errors on the propulsion system, schedule delays, as well as issues with schedule execution, management, and quality control. For the period of October 1, 2017 to March 31, 2018, Northrop Grumman earned none of the available award fee. Northrop Grumman’s overall score was driven by an “unacceptable” rating in schedule and cost due to delays and in anticipation of exceeding the project’s $8 billion cost cap. Northrop Grumman received an “excellent” rating under the technical category, but the evaluation noted ongoing issues with quality controls, which resulted in delays. For example, the process steps for applying voltage to the spacecraft’s pressure transducers were not clear enough, which resulted in technician error and irreparable damage to the hardware. According to Northrop Grumman officials, the contractor has started to take action to try to improve its quality assurance processes. Officials described actions that ranged from rewriting hardware integration and test procedures to starting efforts to change aspects of the company’s culture that contributed to quality control issues. For example, in July 2018, Northrop Grumman initiated a JWST mission assurance culture change campaign to increase focus on product quality and process compliance. This effort includes having inspectors affirm by signature that they have personally inspected, verified, and confirmed that all aspects of an activity meet quality standards. According to the form instructions, if the inspector is uncertain on compliance or if instructions are unclear, workers are to halt work, investigate and assess the situation, and request help to resolve the situation. Project and Northrop Grumman officials provided an example of these changes working. During a manual deployment of a radiator panel, a Northrop Grumman employee discovered that a flap used as thermal protection for a radiator was installed incorrectly and reported the error. Northrop Grumman technicians found that this flap had been swapped with another flap in the process of moving them to be installed and corrected the problem before work proceeded. Further, NASA and Northrop Grumman are conducting audits to try to minimize the risk of failures during the remaining phases of integration and test. These audits are conducted on items that have not been fully tested, are in workmanship-sensitive areas, or have had a late design change. The first phase of the audit was completed in September 2018 and found no major design issues or hardware rework required. The project plans to audit other areas through at least spring 2019, but will add audits if needed. The JWST oversight structure includes a number of positions that could be responsible for ensuring that the recent augmentations to contractor and project oversight are sustained through launch (see table 2). In response to our review, NASA officials clarified that the project manager has sole responsibility for ensuring that these improvements are sustained through launch. Further, these officials stated that the project office is responsible for monitoring these changes at the project level and at Northrop Grumman. 
The project manager’s continued focus on these efforts will be important because: The project is implementing a wide span of improvement efforts, ranging from more on-site coverage at the contractor facility to cultural improvements, which will now need to be sustained for an additional 29 months. The project has had recurring issues with effective internal and external communication as well as defining key management and oversight responsibilities, both of which are important to sustaining oversight. For example, the Independent Comprehensive Review Panel identified communication problems—between the JWST project and Science Mission Directorate management as well as between NASA and Northrop Grumman—and that the project’s governance structure lacked clear lines of authority and accountability. In December 2012, we found the JWST project had taken several steps to improve communication—such as instituting meetings that include various levels of NASA, contractor, and subcontractor management— but the IRB’s findings in 2018 indicate that communication and governance issues have resurfaced in some areas. For example, the IRB found that communication with key stakeholders including the science community, Congress, and NASA leadership, has been variable and at times inconsistent. The project may encounter new schedule pressures as it proceeds through integration and test. A senior NASA official with expertise in workmanship issues told us that schedule pressure is a key reason for increased quality problems on projects. For example, this official said that companies tend to give experts leniency to operate without the burden of quality assurance paperwork when schedule pressures arise, which can lead to workmanship errors. While JWST project officials told us they do not view this as applicable to their project, the perspective regarding potential schedule pressures and workmanship is important to keep focus on given the magnitude of technical challenges and delays the project has faced. We will continue to monitor the project’s efforts at maintaining these oversight augmentations in future reviews, given that less than a year has passed since the project began implementing many of them. Moreover, the project may find that some actions will be required of officials outside the project, particularly since the communication problems identified by the IRB may well extend to headquarters’ interaction with stakeholders from the science community, industry, and the Congress. Conclusions JWST is one of NASA’s most expensive and complex science projects, and NASA has invested considerable time and resources on it. The project first established its cost and schedule baseline in 2009. Since then, the project made progress by completing two of five phases of integration and test, but has also experienced significant cost growth and schedule delays. However, the project did not complete a JCL analysis as part of its second replan. Between now and its system integration review planned for August 2019, the JWST program will have to continue to address technical challenges and mitigate risks. Conducting a JCL would better inform decision makers on the status of the project as they determine whether the project can complete remaining project development with acceptable risk and within its cost and schedule constraints. Given the project is now on its third iteration of cost and schedule commitments, conducting a JCL is a small step that NASA can take to demonstrate it is on track to meet these new commitments. 
Recommendation for Executive Action We are making the following recommendation to NASA: The NASA Administrator should direct the JWST project office to conduct a JCL prior to its system integration review. (Recommendation 1) Agency Comments and our Evaluation We provided a draft of this report to NASA for comment. In written comments, NASA agreed with our recommendation. NASA expects to complete the JCL by September 2019, prior to the system integration review. The comments are reprinted in appendix II. NASA also provided technical comments, which have been addressed in the report, as appropriate. We are sending copies of this report to the appropriate congressional committees, the NASA Administrator, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Elements and Major Subsystems of the James Webb Space Telescope (JWST) Observatory Appendix II: Comments from the National Aeronautics and Space Administration Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Molly Traci (Assistant Director), Karen Richey (Assistant Director), Jay Tallon (Assistant Director), Brian Bothwell, Daniel Emirkhanian, Laura Greifner, Erin Kennedy, Jose Ramos, Sylvia Schatz, Roxanna Sun, and Alyssa Weir made key contributions to this report.
Why GAO Did This Study JWST, a large, deployable telescope, is one of NASA's most complex projects and top priorities. The project has delayed its planned launch three times since September 2017 due to problems discovered in testing. In June 2018, NASA approved new cost and schedule estimates for JWST. Since the project established its cost and schedule baselines in 2009, the project's costs have increased by 95 percent and the launch date has been moved back by 81 months. Conference Report No. 112-284, accompanying the Consolidated and Further Continuing Appropriations Act, 2012, included a provision for GAO to assess the project annually and report on its progress. This is the seventh report. This report assesses (1) the considerations NASA took into account when updating the project's cost and schedule commitments and (2) the extent to which NASA has taken steps to improve oversight and performance of JWST, among other issues. GAO reviewed relevant NASA policies, analyzed NASA and contractor data, and interviewed NASA and contractor officials. What GAO Found In June 2018, the National Aeronautics and Space Administration (NASA) revised the cost and schedule commitments for the James Webb Space Telescope (JWST) to reflect known technical challenges, as well as provide additional time to address unanticipated challenges. For example, the revised launch readiness date of March 2021 included 5.5 months to address a design issue for the cover of the sunshield (see image). The purpose of the sunshield is to protect the telescope's mirrors and instruments from the sun's heat. NASA found that hardware on the cover came loose during testing in April 2018. The new cost estimate of $9.7 billion is driven by the schedule extension, which requires keeping the contractor's workforce on board longer than expected. Before the project enters its final phase of integration and test, it must conduct a review to determine if it can launch within its cost and schedule commitments. As part of this review, the project is not required to update its joint cost and schedule confidence level analysis—an analysis that provides the probability the project can meet its cost and schedule commitments—but government and industry cost and schedule experts have found it is a best practice to do so. Such analysis would provide NASA officials with better information to support decisions on allocating resources, especially in light of the project's recent cost and schedule growth. NASA has taken steps to improve oversight and performance of JWST, and identified the JWST project manager as responsible for monitoring the continued implementation of these changes. Examples of recent changes include increasing on-site presence at the contractor facility and conducting comprehensive audits of design processes. Sustaining focus on these changes through launch will be important if schedule pressures arise later and because of past challenges with communications. GAO will follow up on the project's monitoring of these improvements in future reviews. What GAO Recommends GAO recommends NASA update the project's joint cost and schedule confidence level analysis. NASA concurred with the recommendation made in this report.
Background Following terrorist attacks against the U.S. embassy in Beirut, Lebanon, in 1983, State began an embassy construction program—known as the Inman program—to protect U.S. personnel. However, as we’ve previously reported, State completed only 24 of the 57 planned construction projects, in part due to poor planning, systemic weaknesses in program management, difficulties acquiring sites, schedule delays, cost increases, and subsequent funding limitations. Following the demise of the Inman program in the early 1990s, State initiated very few new embassy projects until after the two 1998 embassy bombings in Kenya and Tanzania. Following the bombings in Africa, the Secure Embassy Construction and Counterterrorism Act of 1999 required State to develop and report a list of diplomatic facilities scheduled for replacement based on their vulnerability to terrorist attack. One of the congressional findings in the Secure Embassy Construction and Counterterrorism Act of 1999 was that unless embassy vulnerabilities are addressed in a sustained and financially realistic manner, the lives and safety of U.S. employees in diplomatic facilities will continue to be at risk from further terrorist attacks. State subsequently initiated the CSCP to construct new embassies. The CSCP is administered by OBO, which in April 2018 had about 1,135 direct-hire civil service personnel, U.S. Foreign Service officers, and personal services contractors stationed in Washington, D.C., and overseas. The Secure Embassy Construction and Counterterrorism Act of 1999 calls for new diplomatic facilities to be sufficiently sized to ensure that all U.S. government personnel at a post are located on a single secure site and that those facilities are set back not less than 100 feet from the site’s perimeter boundary. Before constructing a new embassy, State must certify to Congress that, among other things, the facility’s design incorporates adequate measures for protecting classified information and activities as well as personnel working in the facilities. OBO contracts with architectural and engineering firms (design firms) to develop bridging or full designs meeting security and other project requirements. These design firms submit their designs for reviews by OBO and Diplomatic Security to ensure conformance with building code and security standards, respectively. Diplomatic Security, in consultation with the Office of the Director of National Intelligence, must certify that the design meets security standards prior to the start of construction. While this certification occurs during the design phase of a project, Diplomatic Security also has other roles in the process, such as participating in site selection, ensuring OBO contractors have necessary security clearances, and ensuring facilities are securely constructed. After passage of the Secure Embassy Construction and Counterterrorism Act of 1999, State determined that embassies at 180 posts—out of 260 posts at the time—needed to be replaced to meet security standards. State adjusted this milestone—to building 150 embassies by 2018—in 2005, when it worked with the Office of Management and Budget (OMB) to establish the Capital Security Cost-Sharing Program (cost-sharing), with a primary goal of accelerating the replacement of embassies. Under cost-sharing, nearly 30 U.S. agencies with a presence in U.S. embassies were to provide a total of $17.5 billion for constructing the 150 new embassies by 2018—12 years sooner than had been projected without cost sharing. 
In justifying its cost-sharing approach, State emphasized that, among other things, requiring agencies to pay for overseas staff would make them more likely to closely assess the need for each overseas position, thereby rightsizing overseas staffing levels. Standard Embassy Design (SED) OBO sought to expedite construction and control CSCP costs through adoption of the SED and streamlined construction through a design-build delivery method. The SED was a set of documents providing prototypical plans for a medium-sized embassy, including specifications and design criteria, and explaining how to adapt those to a particular site and project. The SED was not a complete design but rather a standardized template for the structural, spatial, and security requirements of a new embassy compound to guide a contractor's final design. Compound elements described by the SED generally included the main office building; U.S. Marine security guards' living quarters; a warehouse; a utility building; compound access control buildings and perimeter walls; and parking facilities. The SED also allowed for the standardization of building components such as security windows and doors. Figure 1 shows the prototypical facilities defined by the SED. OBO combined the SED with the design-build delivery method, which integrates completion of the design as well as all construction responsibilities into a single contract. Under this model, the design-build contractor is responsible for both design and construction and thus generally bears the risks, such as added cost, for any design problems because the contractor hires the design firm to bring the design to completion. Under the SED approach, OBO hired its own design firms beforehand to conduct project development activities such as planning surveys, site studies, and other analyses needed to inform the project's design. OBO would use these design firms to develop a scope of work and provide the design-build contractor a concept or schematic design showing how OBO expected the office chancery and supporting embassy facilities to be arranged on the site using the SED prototypical design, which included standard site and building plans, technical specifications, design criteria, and instructions for its adaptation to a particular project and contract requirements. The contractor's design firm would then use the SED documentation to develop a 100-percent completed design adapted for a site at a particular post. Figure 2 provides an overview of the embassy construction process under OBO's implementation of design-build utilizing the SED. Transition to Excellence In 2006, we reported that the SED approach and design-build delivery method had enabled OBO to make significant progress in completing new embassies and had helped to reduce the average time to complete projects to about 3 years (36.7 months). This was nearly 3 years faster than the time it took to build embassies during the Inman era. However, while the SED approach enabled OBO to accelerate the construction of new embassies, some stakeholders raised concerns about the aesthetics, quality, location, and functionality of those facilities. 
Criticisms included that the SED embassies had a “fortress-like” appearance that detracted from their symbolic value in presenting American ideals of openness and innovation; that the emphasis on speed and cost control resulted in poorer-quality buildings and removal of functional elements such as warehouses; that the 10-acre lot specified by the SED required siting embassies too far from urban centers where foreign government offices are located; and that the standardized aspects of its design were difficult to adapt to unique site conditions and post needs. To address some of these criticisms, OBO began to use design-build with bridging (bridging) as a delivery method in 2008 with the first construction project awarded in 2009. Generally under this method, OBO first contracts with a design firm (the bridging architect) to develop a project-specific, partial design package (bridging design) that conveys State's design vision and a higher level of detail for key design requirements. Such details that State might convey in a bridging design could include the selection of specific building systems (e.g., the types of structural foundation systems to be used for each building on the site) or post-specific security features (e.g., location, types, and heights of security walls and bollards to be used around and within the site). Unlike the SED, each bridging design is project-specific, customized, and separately executed by an outside design firm contracted by OBO. The extent of each bridging design varies by project but generally approximates an overall 35- to 50-percent completed design, according to OBO officials. OBO's procedure is to then separately contract with a construction contractor (and its own design firm) to complete the design and build the project. Figure 3 provides an overview of the embassy construction process under bridging. Although customized, OBO's bridging designs continued to use the SED as a starting point for several years after OBO adopted bridging in 2008. However, criticisms aimed at the underlying SED elements continued, for instance, that the standardized design sometimes hindered adaptation of designs in response to different climates, countries, or unique post functions. In 2011, OBO initiated the Excellence approach, which placed greater emphasis on custom designs for each project. OBO subsequently phased out the SED as the basis for embassy designs, and according to OBO officials, SED specifications, standards and guidance were incorporated into OBO's Design Standards and Design Guide. According to OBO officials, by 2014, design firms hired by OBO to develop bridging designs no longer used the SED as a starting point. In addition, OBO shifted to greater use of the design-bid-build delivery method alongside bridging. Generally under design-bid-build, OBO first solicits and contracts with a design firm to develop a 100-percent design. Under this method, OBO then uses the completed design to solicit bids from prospective construction contractors. According to OBO documentation, OBO selects a project's delivery method, either bridging or design-bid-build, based on an evaluation of a project's local context, complexity, construction factors, and urgency. Figure 4 provides an overview of the embassy construction process under design-bid-build. Under both bridging and design-bid-build, OBO generally bears greater risk than it did under strict design-build in the SED approach. 
That is because if design errors impact construction, the contractor may seek additional costs and schedule relief from OBO for needed corrections and changes it attributes to problems with the design provided by the government. Additionally, some stakeholders have expressed concern that the added design work inherent in the Excellence approach may add to the cost to construct embassies and slow the rate of moving personnel into more secure facilities. However, OBO has maintained that greater design control under Excellence will improve embassies' functionality, quality, operating costs, and overall public impact in representing the United States. State's Project Delivery Pace Has Been Slower Than Projected, as Unforeseen Building Requirements and Inflation Have Affected Progress Although State has built 77 new embassies since 1999 and at the end of fiscal year 2017 had another 21 under construction, the CSCP's project delivery pace has fallen short of State's 2005 target of constructing 150 new embassies by 2018. This is due, in part, to unexpected building requirements and the effects of inflation. In 2012, recognizing the erosion of purchasing power as a result of inflation, the Benghazi Accountability Review Board (ARB) recommended that State work with Congress to increase the CSCP's annual funding level from $1.4 billion to approximately $2.2 billion in fiscal year 2015 and for up to 10 years thereafter. OBO plans to begin construction of 25 embassies in fiscal years 2018–2022 and nearly 50 more beyond fiscal year 2022, but it is unclear whether OBO can maintain its average pace of 5 new embassy contract awards per year—particularly as State has not defined the overall capital cost and potential time frames needed to achieve this goal, nor does it currently expect to seek year-to-year adjustments for inflation. State Will Not Meet Its Original Project Delivery Goals Although State has made progress in constructing more secure embassies, its CSCP will not achieve the target of constructing 150 new embassies by 2018, a milestone that the 2005 cost-sharing was intended to facilitate. From fiscal year 1999 through 2017, State completed 77 new embassies and had 21 under construction. In fiscal year 2017, State also forecast a potential need for 72 additional embassies beyond those completed or under construction. Of those 72, State planned to begin construction on 25 new embassies in fiscal years 2018 through 2022, at an estimated pace of 5 new starts per year. The remaining 47 locations were identified by State as candidates for new embassy compounds beyond 2022. Figure 5 shows the status of CSCP embassy projects as of the end of fiscal year 2017. Total CSCP funding from 1999 through 2017 reached approximately $24.2 billion (in nominal dollars). Figure 6 shows the cumulative progress in completing the 77 embassies along with year-to-year cumulative funding from fiscal year 1999 through fiscal year 2017. State's CSCP will not achieve the 2005 target of constructing 150 new embassies by 2018. To achieve this target, State would have had to complete an average of about 10 embassies per year. Instead, on average, State has completed 5 new embassy compounds each year since cost-sharing was authorized in 2005. If State's project delivery pace remains unchanged, it would take more than 15 years to complete the 72 new embassies identified in State's CSCP planning schedule at the end of fiscal year 2017. 
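The projection above follows from simple arithmetic on the program's historical pace. The sketch below restates that arithmetic; the remaining count and annual pace come from the figures cited above, while the 3-year construction duration assumed for the final projects is illustrative, and the result varies depending on whether the pace is read as completions or as contract awards.

```python
# Back-of-the-envelope projection of how long the remaining embassy
# replacements could take at the historical pace. The remaining count and
# annual pace restate figures cited above; the 3-year construction duration
# assumed for the final projects is illustrative.
remaining_embassies = 72     # identified in the fiscal year 2017 CSCP schedule
pace_per_year = 5            # average annual pace since cost-sharing began
construction_years = 3       # typical award-to-completion time (assumed)

years_at_pace = remaining_embassies / pace_per_year
print(f"Years of awards or completions at the current pace: {years_at_pace:.1f}")

# If the pace is read as contract awards rather than completions, the last
# project would still need roughly a full construction cycle to finish.
print(f"Years until the final project is complete: "
      f"{years_at_pace + construction_years:.1f}")
```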
The CSCP Has Had to Cover Unforeseen Additional Building Requirements and Has Received One Inflation Adjustment The pace of CSCP has been affected by unexpected building requirements and inflation. Beyond the 77 completed embassies and the 21 under construction, the $24 billion for CSCP since 1999 has also funded additional building requirements that State had not originally envisioned. According to State, these unforeseen requirements included: 1. On-compound staff housing at some posts, such as Beirut; 2. New or reopened posts, such as Kabul; 3. Marine security guard quarters on some new and existing compounds, such as Monterrey, in response to a recommendation in the 2012 Benghazi ARB report and as State revised its policy governing the presence of U.S. Marines at some posts; 4. New security requirements at high-threat posts—such as taller perimeter walls, guard towers, and unique security support spaces; and 5. Office annexes. For example, OBO is now building new annexes in Kampala, Uganda, and Nairobi, Kenya, posts where new embassies were completed in 2001 and 2006, respectively. From 1999 through 2017, State completed 28 annex office buildings under the CSCP—such as for the U.S. Agency for International Development—or acquired buildings and upgraded them for use as an embassy. Figure 7 shows completed annex projects along with embassy completions. OBO officials told us that unforeseen requirements continue to affect the CSCP. Over time, CSCP funding has also been subject to the effects of inflation. The 1999 ARB following the bombings of U.S. embassies in Tanzania and Kenya recommended that embassy construction and other security improvements be funded at $1.4 billion per year over 10 years. With the introduction of cost-sharing in 2005, State set an annual CSCP funding goal of $1.4 billion, as the 1999 ARB had recommended, as well as the goal of completing 150 new embassies by 2018 for a projected funding total of $17.5 billion. However, State officials indicated that when the program was established, no provision was made for potential inflation over the life of the program. Therefore, while CSCP funding generally increased from 2005 through 2010, OBO officials stated that CSCP funding gradually purchased less than anticipated due to the lack of an inflation adjustment. This absence of inflation as a built-in factor in program planning is in contrast to OBO's cost estimates for individual new embassy projects. Those project-level cost projections account for inflation and recognize that the projects will typically take at least 3 years to build. If annual CSCP program-level funding is held constant as individual project costs generally increase over time, fewer projects can be funded in later years of the program, resulting in a slower pace of project delivery. In 2012, recognizing the erosion of purchasing power as a result of inflation, the Benghazi ARB recommended that State work with Congress to restore the CSCP capacity to its earlier level by increasing its annual funding level to approximately $2.2 billion starting in fiscal year 2015 and for up to 10 years thereafter. Based on State data, that recommended funding level was not met in 2015, but was generally met in fiscal years 2016 and 2017 due to the provision of additional Overseas Contingency Operations funding. In general, according to OBO, such funding is used to support State requirements in high-threat locations, which, according to OBO, are subject to the highest rates of project cost change. 
State generally considers this funding to be non-enduring and supplemental to funding through State's regular budgets. Figure 8 shows State funding data representing the total annual CSCP funding from fiscal year 1999 through 2017—including cost-sharing, supplemental, and Overseas Contingency Operations funding—compared with State's 2005 CSCP funding goal ($1.4 billion annually) and the 2012 Benghazi ARB annual funding recommendation ($2.2 billion annually), proposed for implementation in fiscal year 2015. State Has Not Estimated Total CSCP Cost and Time Frame and Has Not Planned for Future Inflation Lack of Reliable Data on the Number of Staff Moved into New Embassies The number of U.S. government staff moved into more secure facilities has been a reported performance measure for the Capital Security Construction Program (CSCP) since the time of the Standard Embassy Design approach. For example, the U.S. Department of State (State) reported moving over 30,000 people (out of more than 86,000) into more secure facilities from 2000 through 2014. We attempted to assess CSCP performance on this measure on a project-by-project basis but found it unreliable for the purpose of establishing how many staff have been moved into newly constructed facilities. State's Bureau of Overseas Buildings Operations (OBO) officials explained that the “number of staff moved” metric was based on the projected desk and non-desk positions within each embassy construction contract. However, OBO never established a policy or procedure on how these data should be collected, managed, or validated. The data for this metric were informally tracked within OBO's Office of Construction Management. As a result, information for this performance measure is inconsistent, precluding a progress assessment of the CSCP using this metric. For example, totals for some years included data for major renovation projects of existing buildings, while other years' data may have included acquired buildings purchased by State (and built by others). In 2017, we found that State's one strategic CSCP-related performance indicator—the relocation of staff into more secure and functional facilities—provides no performance assessment of the extent to which Excellence facilities are any more functional, sustainable, or effective in supporting U.S. diplomacy. We recommended that State determine whether this measure is still appropriate or needs to be revised. According to OBO officials, this metric is being revisited as part of a broader evaluation of OBO's performance measures. See GAO-17-296. Although the CSCP schedule for fiscal year 2017 identifies nearly 75 embassies still requiring replacement, the overall capital cost and likely time frame expected to achieve the program's goal are unknown, as OBO has not made such estimates. According to OBO officials, State is not focused on replacing a set number of embassies within an estimated total capital investment cost (e.g., 150 embassies for $17.5 billion, as planned in 2005) or by a given end date (e.g., 150 embassies by 2018, as planned in 2005). Rather, OBO's approach is to request $2.2 billion annually in accordance with the Benghazi ARB's recommendation. According to these officials, this approach allows agencies that contribute to cost-sharing to consistently plan for a predictable funding level, and OBO will work to complete as many projects as possible, as soon as possible, within this annual funding level. Further, they noted that State does not intend to seek annual inflation adjustments for the CSCP. 
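Because OBO plans to request a flat $2.2 billion per year without inflation adjustments, the purchasing power of that amount would decline as construction costs escalate. The sketch below illustrates the effect; the 2.5 percent annual escalation rate and the 10-year horizon are assumptions chosen only for illustration, not State or OBO projections.

```python
# Illustrative sketch of how a flat nominal budget loses purchasing power
# when construction costs escalate. The 2.5 percent annual escalation rate
# and the 10-year horizon are assumptions for illustration, not State or
# OBO projections.
annual_funding = 2.2e9   # flat CSCP funding goal, in dollars per year
escalation = 0.025       # assumed annual construction cost escalation
years = 10

for year in range(1, years + 1):
    real_value = annual_funding / (1 + escalation) ** year
    print(f"Year {year:2d}: ${real_value / 1e9:.2f} billion in year-0 purchasing power")

# After 10 years at 2.5 percent escalation, a flat $2.2 billion buys roughly
# what $1.72 billion bought at the start of the period.
```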
In general, according to OBO policy, the CSCP is guided by Diplomatic Security’s annual Security Environment Threat List of security rankings for posts, from which OBO develops a “Top 80” list of the 80 most at-risk posts needing a new embassy. OBO uses the Top 80 list to develop and adjust the CSCP schedule, which presents planned embassy awards for the current fiscal year and for each of the next 5 fiscal years. For example, the November 2016 CSCP schedule (current at the end of fiscal year 2017) listed the 5 posts slated for awards in fiscal year 2017. In addition, it listed the 25 posts slated for awards in fiscal years 2018 through 2022, grouped by the specific fiscal year when OBO anticipated being able to award the relevant construction contracts. The nearly 50 embassies planned for beyond fiscal year 2022 were broadly categorized in an “out-year” category in the November 2016 CSCP schedule. According to leading practices in capital decision-making we have previously identified, agencies’ long-term capital plans should provide insight into likely funding and other resources and time frames needed to achieve organizational mission goals. We also noted in our guide to leading practices that, while out-year cost estimates are preliminary, they help provide decision makers with an overall sense of funding needs and that such long-range planning assists in developing both current and future budgets. OBO’s fiscal year 2017 CSCP schedule does not identify estimated costs, either at the project or aggregate level. According to OBO officials, scope, cost, and size estimates are communicated on a project-specific basis to stakeholders through briefings and each fiscal year’s congressional notifications listing projects to be implemented in the coming years. According to these officials, the CSCP schedule is intended to be a flexible way to communicate a snapshot of OBO’s prioritization of posts to receive embassy awards over the next 5 fiscal years, emphasizing that the exact list can change. For example, a new embassy project might be advanced sooner than originally planned due to a change in State’s security or policy priorities. Conversely, a project may be moved out to a later fiscal year due to challenges that OBO believes may be posed by the host government or other challenges identified during or after site acquisition. Although the CSCP does track the projected timing of some specific projects, State lacks a strategic planning document that estimates longer term CSCP resource needs. For example, the CSCP schedule contains no estimated 5-year program cost for the next 25 embassies OBO plans to build, nor does it provide stakeholders an estimate or cost range for the total capital investment and feasible time frames needed to address the 47 embassies that OBO has identified for replacement beyond the next 5 years. Additionally, guidance from OMB indicates that when developing budget estimates agencies should consider the effect that economic or other changes can have on program levels beyond the budget year. OMB guidance further states that agencies should be prepared to discuss the impact that program levels and changes in methods of program delivery will have on program operations and administration. OMB guidance states that for discretionary programs, agencies may include an allowance for the full rate of anticipated inflation, less than the full rate, or no allowance for inflation. 
The guidance recognizes that agencies must make trade-offs between budget increases for inflation and other increases for programmatic purposes. Given that it contains no cost information, the CSCP schedule is not meant to be a tool to forecast and convey to stakeholders the long-term effects of inflation on program capacity. Therefore, considering the 72 embassies yet to be replaced, past inflationary effects, and the CSCP's pace thus far, it is unclear what pace OBO will be able to maintain without some level of inflation adjustment to its funding goal of $2.2 billion per year. Without information on the projected pace of construction and estimated effects of inflation, stakeholders may lack complete information to make fully informed budget decisions. Completed Embassy Projects Have Generally Stayed within Budgeting and Planning Allowances While cost growth occurred on a majority of completed embassy projects and durations averaged about 36 months, these were generally within budgeting and planning allowances. We could not assess cost and schedule performance of projects begun under the Excellence approach because none had been completed by 2017. OBO maintains that the greater upfront investment in more customized designs under this approach will yield long-term benefits in embassies' functionality, quality, and operating costs, as well as in their appearance in representing the United States. While an assessment of those potential benefits cannot be made at this time, we did find examples of Excellence and Excellence-like projects illustrating how innovative designs can increase upfront project costs. Contract Costs for Most Completed Projects Have Increased but Generally Stayed within Contingency Allowances While construction contract costs increased after award for most of the 22 completed projects we reviewed, the increases were generally less than contingency allowances, and most projects were completed within their contingency budgets. State reserves a contingency amount in its project budget—ranging from 5 to 10 percent of the contract value at award—to cover unforeseen project changes and cost increases. OBO's overall project budgets also include funding for other nonconstruction costs and contracts, such as planning, design, and on-site project management and security. For the 22 completed embassy construction projects we reviewed, 16 (almost 75 percent) were finished within 10 percent of the original contract value at award, and 3 of these 16 projects finished under the original contract value at award. Six of the projects (over 25 percent) exceeded the original contract value at award by over 10 percent. For the 6 projects whose final costs were more than 10 percent over the original contract value at award, some of the cost increases were due to events unrelated to original design or construction issues. For example, in Khartoum, Sudan, OBO project documentation indicates that the contract increase was due, in part, to host government restrictions on the importation of needed construction materials and having to restart the project. In other instances, as discussed earlier, additional building requirements increased project costs. For example, OBO officials noted that a U.S. Agency for International Development office annex was added to the embassy project in Kyiv, Ukraine, and Marine security guard quarters were added to the projects in Monterrey, Mexico; Mbabane, Swaziland; and Vientiane, Laos. 
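The comparison described above reduces to computing each project's percent cost growth, that is, its final or current contract value relative to the value at award, and checking it against the 5 to 10 percent contingency allowance. The sketch below shows that calculation using hypothetical posts and dollar figures; they are not values from table 1.

```python
# Sketch of the comparison described above: percent growth in contract value
# versus the project's contingency allowance. The post names and dollar
# figures are hypothetical; they are not values from table 1.
projects = [
    # (post, value at award in $ millions, final value in $ millions, contingency rate)
    ("Post A", 180.0, 186.5, 0.10),
    ("Post B", 240.0, 268.0, 0.05),
    ("Post C", 150.0, 148.0, 0.10),
]

for post, award, final, contingency in projects:
    growth = (final - award) / award
    status = "within" if growth <= contingency else "exceeds"
    print(f"{post}: {growth:+.1%} cost growth, {status} the {contingency:.0%} contingency")
```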
Table 1 shows the original construction contract value at award and the final or current contract value for the 22 completed projects as of the end of fiscal year 2017. Of these 22 projects, 16 were SEDs; 4 were “Excellence-like,” meaning they were transition projects awarded after OBO's 2011 decision to institute Excellence but before OBO finished implementing Excellence in 2014; and 2 were not based on the SED template but predated the Excellence initiative. Contract value for some completed projects may change, in part due to outstanding requests for costs from the contractor or legal claims. Our cost assessment of the 22 completed projects included no Excellence projects, as none had been completed as of the end of fiscal year 2017. For the 21 ongoing construction projects that we reviewed, 14 (including 7 Excellence projects and 4 Excellence-like projects) had experienced some cost growth beyond the original contract value at award as of the end of fiscal year 2017. Because these were ongoing projects and 6 of the 21 had been awarded in fiscal year 2017 and therefore had not substantially progressed, we could not determine whether they would finish within their budget contingency, nor could we compare cost increases of Excellence projects—none of which had been completed—with cost increases of SED projects. See appendix II for the cost status of these ongoing projects as of the end of fiscal year 2017. Innovative Designs Can Increase Project Costs OBO maintains that its greater upfront investment in unique designs under Excellence will yield long-term benefits in embassies' functionality, quality, and operating costs, as well as their appearance in representing the United States. Critics of the Excellence program assert that aspects of unique designs, such as buildings' shapes and layouts, construction materials, or the architectural products used, are often expensive to design, build, and maintain. For example, some Excellence or Excellence-like designs specify stylized, custom-built architectural facades that are to be installed on the buildings' exteriors. These can include cantilevered roofs; customized windows; architectural screens; glass curtain-wall systems; or very specific stone, brick, or concrete work. Some critics have also raised concerns about aspects of buildings' interior architectural features. For example, a project official reported to us that State could have saved nearly $950,000 had it utilized an aluminum handrail—rather than a bronze handrail—for one embassy's main staircase. The bridging design called for all metal site furnishings and railings to be a bronze tone in color. The bridging designer specifically indicated that the use of bronze color throughout the design was intended to relate to the local metal craft of the region. An OBO official we spoke with indicated that while he understood there might be some savings from changing the handrail to aluminum, he felt the designer's intent in specifying a bronze handrail was clear and had been approved by OBO during the design review process, and thus he did not feel it would be appropriate to make a change. Figure 9 depicts the more custom and stylized Excellence exterior designs alongside more standardized SED projects. In reviewing our case study projects, we found instances of custom exteriors that had led to greater construction costs. 
For example, OBO project documentation shows that the use of a customized glass exterior wall designed for the Jakarta, Indonesia, embassy significantly affected cost and schedule after contract award, adding at least $18 million to the cost and 180 days to the schedule. According to project documentation as well as OBO and contractor officials, OBO's decision to employ a unique glass curtain-wall system for that project, and subsequent questions raised by Diplomatic Security about the design, led OBO to modify the contract to add (1) $2.2 million and 180 days to explore alternative designs and conduct redesign work in order to obtain Diplomatic Security certification; (2) $13.3 million, which OBO told us was for a dedicated facility to be established in the United States to securely fabricate the glass curtain wall before secure shipment to the site; and (3) $3 million to have cleared American workers install portions of the wall. OBO had not previously employed such a system in a completed embassy project and could not provide us with documentation analyzing the risks of such a feature to cost and schedule—which might have included potential delays in obtaining Diplomatic Security's approval of the design—compared with conventional concrete construction. Figure 10 shows this glass curtain wall under construction. Additionally, on the Hyderabad, India, project, OBO project documentation shows that the initial design of the unique exterior screen concerned OBO management, leading to more design development by the contract architect, further review by OBO's design staff, and added cost. Senior management expressed concerns about the appearance of the screen, mainly that the screen was too traditional compared with the spirit of the design of the building and the rest of the campus and that the pattern of the screen needed more variation for daylight and views. To respond to this concern, OBO issued two contract modifications to OBO's architect for additional design work for the exterior screen. OBO told us that subsequent design development for three alternatives for the screen contributed an additional design cost of about $750,000, raising the final bridging design cost to approximately $10.5 million. That figure excludes roughly $816,000 for support services during construction, of which OBO reports a minor portion was attributable to ensuring that the construction contractor achieved the design intent for the exterior screen. Figure 11 shows schematic design renderings of the approved screen design. In our 2016 survey of OBO staff, several staff indicated that unique Excellence project designs can impact costs. Table 2 provides examples of such comments. Construction of Completed Projects Averaged Around 36 Months For the 22 completed embassies we reviewed, the average time to completion was just over 36 months, though with some notable outliers. To assess schedule, we compared embassy construction durations with a benchmark of 36 months. We used that planning allowance because, in the past, OBO has maintained that a SED would generally take no more than 36 months to construct and that construction durations would not be any different under Excellence. For the 22 completed construction projects, 14 (about 64 percent) were completed in 36 months or less, including 1 Excellence-like project. The remaining 8 projects (about 36 percent) were completed in over 36 months, including 4 SED projects, 3 Excellence-like projects, and 1 non-SED project that predated the Excellence initiative. Construction durations can be affected by factors not controlled by the U.S. government, such as host government relations, adverse security conditions, or border/port closures. For example, one schedule outlier was the project in Khartoum, Sudan, where a work stoppage and restart mean that the project's short reported duration does not capture the construction activities performed under an earlier 2005 contract. Other events extending construction duration included, as referenced earlier, the addition of U.S. Marine security guard quarters to the projects in Monterrey, Mexico, and Mbabane, Swaziland, as well as delays related to host government permitting issues in Bishkek, Kyrgyzstan, according to State project documentation. Figure 12 summarizes schedule performance on the basis of construction duration for these 22 completed embassy construction projects. Our schedule assessment of the 22 completed projects included no Excellence projects, as none had been completed as of the end of fiscal year 2017. We did not assess the final schedule performance of the 21 construction projects ongoing at the end of fiscal year 2017 because they were at different stages of construction. As a result, it is too early to draw conclusions regarding schedule performance of individual Excellence projects compared with SED projects. See appendix II for the schedule status of these projects as of the end of fiscal year 2017. Staffing Workload and Contractor Collaboration Challenges Impede Efficiency of Project Delivery After shifting to the use of more customized designs under Excellence, it is unclear if OBO's staffing levels, particularly in its Office of Design and Engineering (Design and Engineering), are sufficient to execute its full workload. Staffing workload challenges were cited by program stakeholders across the organization, but no strategic workforce analysis exists to fully assess OBO's human capital capacity against the full range of its real property responsibilities, including the CSCP. With regard to project implementation, formal partnering between OBO and its construction contractors could help avoid adversarial relationships that inhibit swift resolution of issues. It Is Unclear If OBO's Staffing Is Commensurate with Its Workload under the Excellence Approach OBO Faces Staffing Workload Challenges in Design and Other Offices According to OBO officials, OBO's workload and responsibilities exceed its available staff. In April 2018, OBO officials told us the bureau's authorized federal staffing level—including both domestic and overseas positions—is 1,415 positions. However, according to OBO officials, roughly 280 of those positions (about 20 percent) were vacant due to both attrition and State's recent hiring freeze. OBO federal staff at that time consisted of approximately 1,135 people, including direct-hire civil service and Foreign Service staff, as well as personal services contractors (PSCs), whom OBO defines as individuals who have direct employment contracts with State. In addition, OBO is supported by nearly 300 individuals who are employed by companies that provide those individuals to OBO as supplemental staff. Those 300 individuals are referred to by State as third-party contractors because their employment contracts are not with State but rather with their respective companies. Design and Engineering is one of the key offices supporting Excellence. According to OBO budget planning documents and the Managing Director of the directorate that includes Design and Engineering, this office has faced workload and staffing challenges for several years.
Some OBO officials told us that the office's need for more staff has been ongoing since 2014, which roughly corresponds with OBO's full implementation of Excellence. In 2015, the staff within Design and Engineering conducted a workload and workforce review in preparation for the office's annual internal budget planning process. Based on that review, the Director of Design and Engineering briefed OBO's Director and Deputy Director that some critical functions were not being performed or had been diminished, including quality design reviews (insufficient depth of review); advanced planning (master planning, feasibility studies); project analysis (scenario planning, life cycle analysis); and guidance to design firms (limited interactions). In the 2015 briefing, the Director of Design and Engineering proposed two courses of action to OBO's Director and Deputy Director: (1) workload prioritization or (2) workforce increase. The first approach sought to identify critical workload responsibilities—such as new embassy construction—that the existing staff should prioritize over other responsibilities that may need to be addressed with additional staffing or outsourced to private industry. The second approach, increasing the workforce, proposed that OBO hire more Design and Engineering staff to support all the office's responsibilities. According to a senior OBO official, OBO's Deputy Director at the time determined the best course of action was to implement a workforce increase, and in 2016 he instructed Design and Engineering's Director to plan to increase the office's authorized staff from approximately 150 to 250 people over several years. However, OBO officials told us this decision was a goal at that time and did not reflect any formal staffing authorization by OBO or State; for that reason, it was not reflected in any OBO human capital staffing assessment or plan. In the interim, until Design and Engineering could get authority and funding for more federal direct-hire or PSC positions, OBO planned to make increased use of third-party contractors. Since 2015, authorized direct-hire staffing levels for this office generally have not increased. In April 2018, OBO officials indicated that Design and Engineering needed about 300 staff to meet the office's workload responsibilities. Design and Engineering's internal 2018 budget planning documents show that since fiscal year 2015, the office has had 154 authorized civil service and PSC positions. However, in April 2018 OBO reported to us that Design and Engineering had filled only 108 of the 154 authorized positions, amounting to a vacancy rate of roughly 30 percent. OBO also reported that Design and Engineering was using 31 temporary third-party contractors, for a total combined on-board federal and contractor staffing level of 139 positions. Design and Engineering's internal fiscal year 2018 budget planning documents show that the office proposed to increase its authorized staff level from 154 positions to 304 positions by 2020, effectively increasing by 50 positions each fiscal year. According to senior OBO officials, requests for increased federal staffing for Design and Engineering and other OBO offices have generally not been approved since at least fiscal year 2015, in part because of general budgeting and fiscal constraints. OBO officials indicated that the denials of staffing requests were generally executive-level decisions made at different stages during the budget planning process within OBO, State, and OMB.
In general, OBO officials characterized those decisions as common when agencies are under pressure to control program costs. As previously noted, Design and Engineering is utilizing private-sector companies to provide temporary third-party contractors so that the office can execute its workload, in part until OBO can receive authority to hire additional direct-hire staff. According to OBO officials, OBO in the past has primarily used third-party contractors to meet needs that were genuinely of a temporary nature, such as to conduct planning surveys and staff overseas projects during construction. More recently, however, OBO has begun to rely more on third-party contractors to provide key professional capabilities, as evidenced by some recent contractor hiring announcements for positions intended to support Design and Engineering (see sidebar). Design and Engineering Announcements The following are examples of third-party contractor job announcements for positions intended to support the Office of Design and Engineering in the Bureau of Overseas Buildings Operations (OBO): Senior Architect – fills OBO "fluctuating skill needs and gaps" in architecture design, project planning, building code analysis, and construction design reviews; reviews plans, specifications, and technical reports; mentors more junior architects. Another announcement seeks a senior expert on interior design and space planning who reviews construction submittals; advises on contract bids, change orders, schedule extensions, and cost increases; and coordinates on planned embassy spaces with Diplomatic Security, the intelligence community, and OBO's construction contractors. We previously reported that new embassies are state-of-the-art facilities that have unique security features and whose designs must be certified by State as meeting security standards prior to the start of construction. Design reviews to assess proposed project designs in accordance with State standards and building codes are a key responsibility of Design and Engineering. Such reviews are important to the success of a construction project because insufficient design reviews by agency staff can lead to design errors and omissions that can affect project cost, schedule, quality, and performance. The Federal Facilities Council has concluded that effective design review processes result in more comprehensive and accurate design and construction documents that, in turn, lower project costs. (See sidebar for additional information on the council's report.) Federal Facilities Council Study on Design Oversight The Federal Facilities Council report indicated that to provide effective design oversight, an agency's interest is best served if the in-house staff can fulfill the functions of a "smart buyer," whereby the agency retains in-house staff that understands the agency's mission, its requirements, and customer needs. The council noted that if the agency does not have the staff capacity to operate as a smart buyer, the agency risks project schedule and cost overruns, as well as facilities that do not meet performance objectives. The Federal Facilities Council also reported that uncontrollable circumstances have resulted in nearly all agencies' engineering functions being contracted to outside consultants at one time or another. As long as sufficient skills are retained in-house to meet the smart buyer approach, according to the council report, there does not appear to be any greater risk from contracting out a broader range of design review functions, including construction document reviews and code compliance. However, complex projects that include unique and specialized features of high mission relevance, such as high-security facilities, were an exception cited by the council. When federal agencies are building such unique facilities, the council advised that they retain key expertise in-house as core competencies, with design review a primary in-house responsibility. Construction contractors we spoke with expressed concerns about the quality of OBO's design reviews and about OBO's capacity to manage the volume of questions from construction contractors about OBO's Excellence designs. Two contractors believe OBO is using more third-party contractors to perform design reviews than it did previously and that some may lack specialized knowledge of embassy standards and security measures. The two contractors said that this may lead to a lack of design consistency and continuity across projects. One construction contractor also indicated that OBO takes more time to resolve design issues because it typically will consult with OBO's contracted Excellence design firm before answering a construction contractor's design-related question or approving a design change that may arise during construction. In our 2016 survey, several OBO staff raised concerns regarding OBO's capability to perform design oversight with existing staff. Table 3 lists selected staff comments. OBO senior management stated that similar staffing challenges relative to workload also exist in OBO's Construction, Facility, and Security Management directorate. According to OBO documentation, the Office of Construction Management is authorized 111 direct-hire Foreign Service Construction Engineers worldwide, but as of March 2018 it had 86 on board, amounting to about a 20 percent vacancy rate. Those direct-hire Foreign Service engineers typically serve overseas as Project Directors (PDs) for an embassy construction project. According to the Office of Construction Management's 2018 internal budget planning documents, the office sought to convert 50 third-party contractors deployed overseas to direct-hire PSCs. Those positions—typically civil, electrical, or mechanical engineers—serve as on-site technical staff under the PD to oversee construction activities and respond to construction contractors' questions or proposed changes. Other Third-Party Contractor Job Announcements We found examples of third-party contractor job announcements intended to support the Bureau of Overseas Buildings Operations' Construction, Facility, and Security Management directorate: Construction Management Program Analyst – reports on projects' problem areas for resolution; monitors projects' financial progress; prepares change requests and contract modifications; documents scope, cost, or schedule changes; provides guidance and training to lower-level analysts; ensures internal controls and data integrity. Facility Manager – serves on an interim basis at posts lacking a facility manager; deals with unusual or emergency facility-related conditions that may impact embassy operations; oversees the day-to-day safe operation and maintenance of embassy facilities; manages the post's building maintenance staff; performs design reviews.
Similarly, OBO’s Office of Facility Management reported to us that, for fiscal year 2018, it expected that it may be unable to fill 33 (about 15 percent) of its 224 authorized Foreign Service Facility Manager positions, at both newer and existing legacy embassies. Those positons serve as the single U.S. facilities officer overseeing primarily locally hired embassy staff that operate and maintain embassy building systems. As of March 2018, OBO reported it was trying to cover these positions through temporary staff assignments for 2 to 3 months. As with Design and Engineering, we found examples of positions within Construction, Facilities, and Security Management—including Facility Managers— where OBO was relying on third-party contractors to provide key professional capabilities (see sidebar). Physical Security Specialist – reviews design plans, especially for sensitive embassy spaces; oversees transit security plans for sensitive project materials; determines on-site construction security staffing needs; serves on interagency security committees; prepares responses to State’s Inspector General, GAO, and Congressional inquiries. Despite OBO-wide workload and staffing challenges, OBO cannot precisely quantify these challenges or their effects because it lacks a strategic workforce assessment of OBO-wide staffing levels and workload capacity needed to support the CSCP under Excellence. According to OBO, Excellence is a holistic effort to improve every aspect of OBO’s operations, including real estate acquisition, security methods and technologies, cost management, construction management, and facilities management. OBO’s “Guiding Principles for Excellence in Diplomatic Facilities” conveyed that delivering Excellence would be a comprehensive process that seeks to utilize the best methods, technologies, and staff abilities and that each office, person, and action in OBO contributes to the realization of this goal. However, OBO’s 2011 decision memo approving the shift to Excellence did not identify possible effects to OBO-wide workload, staff levels, and personnel costs, including likely costs to hire either more federal staff or third-party contractors. In addition, the decision memo did not address whether the new design-centric program might affect the staffing needs to manage other OBO responsibilities, such as renovations and security upgrades to existing embassies. As we have previously reported, the use of bridging and design-bid-build under Excellence entails a time and cost investment in design on the project’s front-end. When two contracts are utilized by OBO—one for design and one for construction—additional administrative and programmatic effort is needed to develop, award, and manage multiple contracts. Diplomatic Security officials also reported to us that reviewing customized Excellence designs increased their workload. In 1999, OPM published a five-step model that suggests agencies should define their strategic direction, assess their current and future workforces, and develop and implement action plans for closing identified gaps in future workforce needs. Further, according to GAO human capital best practices, strategic workforce planning addresses two critical needs: (1) aligning an organization’s human capital program with its current and emerging mission and programmatic goals and (2) developing long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. 
Without an OBO-wide analysis of workload capacity and existing staffing, State senior managers and key program stakeholders will lack essential information to make decisions about workload priorities, staffing resources, and budget needs pertaining to CSCP and OBO’s Excellence approach. Formal Partnering during Construction Could Help Avoid Collaboration Challenges That Affect Efficiency of Project Delivery Working collaboratively as a team to efficiently deliver new embassies has been a challenge for OBO and some of its construction contractors. OBO officials said some construction contractors selected to build new embassies have struggled to deliver projects, in part because they had less experience in terms of the number of embassies they had built, or were new to the embassy program. Construction contractors have to learn a great deal of information very quickly—to include State security standards, design specifications, and operating procedures—and many do not succeed, according to these OBO officials. Of the six contractors involved with our nine project case studies, four of the five that we spoke with relayed concerns about poor working relationships with some OBO on-site Project Directors (PD) and that OBO was a difficult business partner, similar to concerns raised about OBO that we have previously reported. Formal construction partnering (partnering) is a recognized construction industry best practice to foster improved collaboration and problem solving and continues to be utilized by major federal construction agencies. On-Site OBO-Contractor Relationship Is Important to Project Collaboration Contractor and OBO officials stressed the importance of the on-site relationship between the OBO PD and the contractor in successfully completing projects. According to State policy, OBO’s PDs are the Contracting Officer’s Representative at the site and have primary responsibility for overseeing the contractor. The PD serves as State’s principal technical contact for the construction contractor and reviews all change proposals. Per OBO guidance, the PD (under advisement with Design and Engineering) renders interpretations of the contract plans and specifications and acts as arbiter of any technical disputes with the contractor. In cases where the recommended proposal amount exceeds the PD’s dollar-value authority for changes, the PD makes a recommendation for action to State’s Contracting Officer in Washington, D.C. OBO and contractor officials indicated that OBO’s PDs are critical to the success of embassy projects and noted that while some PDs make an active effort to collaborate with contractors, other PDs do not. Our interviews with OBO and contractor officials reflected that PDs who do not collaborate well can have a challenging relationship with the contractor that makes it difficult to reach timely solutions to project and contract issues. In addition, contractor officials stated that strained relationships with some PDs may be further exacerbated because OBO headquarters often takes too long to make decisions—in support of their PDs in the field—on proposed changes and additional work that State is considering or that contractors propose as being needed. Three of the five contractors we spoke with cited such concerns about PDs and OBO headquarters as a long-standing or systemic issue. 
Officials from one contractor indicated that when PD-contractor disagreements arise and combine with delays by OBO headquarters, an issue that could be resolved at a lower cost or schedule impact can become a critical problem leading to greater cost and schedule impacts for the government, the contractor, or both. Senior OBO officials acknowledged differing styles and capabilities among OBO’s PDs, as well as the need to improve response times in OBO headquarters. With regard to PDs, senior OBO officials stated that some PDs’ working styles are more proactive in cooperatively seeking to resolve issues face-to-face and through meetings with the contractor’s on-site team; conversely, other PDs’ styles are more geared to corresponding with the contractor’s team through written communication and contractual correspondence. One of these officials stated that he did not believe the latter style was as effective, but that it is sometimes needed when contract issues cannot be solved by the two sides. Another OBO official stated that OBO needs to look beyond individuals’ technical engineering or architectural skills and experience and examine their “soft skills”—such as communication abilities, problem-solving skills, and how they work with others—to better assess who might excel when OBO assigns staff to projects. According to OBO officials, it can be very challenging to determine whether and to what degree a PD is reasonably enforcing the contract and doing their best to collaborate with the contractor to resolve project issues that arise. OBO Headquarters-Contractor Response Times Are Also a Collaboration Issue Regarding response times, senior OBO officials stated that OBO is working to improve turnaround on proposed changes during construction. However, they emphasized the necessity of PDs frequently having to go back to OBO headquarters to ensure an on-site change proposal is in accordance with OBO’s contracted designer’s intent as well as State’s design and security standards. In addition, they stated that lack of timely responses can sometimes be the fault of the contractor, particularly if the contractor is less experienced with embassy construction or new to the program and unfamiliar with OBO’s process requirements. OBO now allows for more time to resolve contractor requests for equitable adjustment that involve increases to contract cost or schedule than it had in the past. In 2008, OBO guidance called for State to acknowledge in writing contractors’ requests for equitable adjustments— due to cost or schedule changes—within 3 days and to seek to evaluate the merits of such requests and make final decisions within 55 days. In 2016, State changed its guidance to allow 15 days to acknowledge contractors’ requests for equitable adjustments and 90 days for State to make a final decision. OBO documentation indicates that the process can take even longer than 90 days if State determines that the contractor has not provided enough information for State to assess the merits of the request for additional time or cost. Two of the contractors we spoke with stated that excessive delays in responding to a request for an equitable adjustment can increase the likelihood of contractor-initiated litigation. With regard to changes initiated by State, contractors were also frustrated when OBO issues a request for proposal to a contractor—to provide a price and schedule for the prospective change—and then OBO does not make a timely decision as to whether it wants to implement the change. 
OBO officials said they are trying to shorten the time it takes them to make decisions concerning contractors' requests for information, proposals for equitable adjustments to contract price or schedule, and OBO proposals to undertake additional work. In April 2018, OBO officials noted that they had recently expanded the scope of OBO's generic statement of work for "construction phase services" that it requests from contracted design firms. The responsibilities added to the statement of work are an effort to leverage the design firms to provide more support to OBO's PDs in the field, enabling the PDs to respond to OBO contractors more quickly. Case Studies Contained Examples of Both Adversarial and Cooperative Relationships Three of our nine case-study projects (which together involved six contractors) featured adversarial relationships between OBO and three of those contractors. In our discussions with both OBO and contractor officials, as well as our review of OBO and contractor documentation, we found that those relationships were characterized by poor on-site collaboration and claims of delays in acting on proposed changes that affected project efficiency. In all three cases, each party took the position that the other party's performance in dealing with challenges and changes was what most affected the project's progress. The federal contractor performance evaluations for these projects also reflected the strained relationships between OBO and the contractors. Two of these situations involved contractors less experienced with the CSCP. In five of our nine case studies, OBO PDs and the contractors generally had cooperative relationships in which the two sides responded effectively to project issues and resolved conflicts successfully. Four of these five projects involved the CSCP's two long-standing contractors. The third contractor, which reported a positive contractual relationship with OBO on one of our project case studies, indicated it had a very poor relationship with OBO on its only other CSCP project. Information on four of our case studies—two that had more adversarial relationships between State and the contractors and two that exhibited more cooperative relationships—is included in the text box, and appendix III contains more information on these projects and our other five construction case studies. Example Case-Study Relationships between Overseas Buildings Operations (OBO) and Its Contractors Bishkek, Kyrgyzstan: A poor working relationship between OBO and the contractor inhibited resolution of a variety of disagreements. These disagreements included which party was responsible for obtaining zoning approvals and building permits from the host government and whether the contractor could remove a satellite dish in the construction zone. Issues regarding timeliness of decision-making by OBO headquarters and quality of contractor submissions were also raised on this project. Jeddah, Saudi Arabia: This project was challenged by errors and omissions in the design provided to the contractor, according to OBO and contractor officials. Both OBO and the contractor acknowledged that a difficult working relationship slowed efforts to deal with such challenges. Disagreement also arose regarding timely response to proposed changes; the contractor maintained that OBO headquarters was delaying work due to slow decision-making, while OBO maintained that the contractor failed to mitigate schedule delays for which the contractor was responsible.
The Hague, Netherlands: Both OBO and the contractor said they had a good working relationship and indicated that OBO’s Project Director and his on-site project architect enabled OBO to more collaboratively and effectively react to technical inquiries from the contractor. Both OBO and the contractor noted that the two sides worked cooperatively to resolve environmental issues and permitting issues raised by the local government. Kyiv, Ukraine: Both OBO and the contractor observed that each side worked very cooperatively on-site and at the headquarters level to swiftly accommodate and mitigate the cost and schedule impact resulting from the addition of an office annex for the U.S. Agency for International Development. Partnering Is a Recommended Practice Intended to Foster More Effective Project Collaboration According to the Federal Facilities Council, facility acquisition traditionally has been an adversarial environment between facility owners and construction contractors. The council also indicated conflicting interests between the parties can result in poor communication, poor problem solving, and poor results. Further, the council has reported that when multiple organizations make a commitment to work cooperatively toward a common objective utilizing teambuilding techniques on building projects, the practice is called “partnering.” OBO does not utilize formal partnering, though State’s supplement to the Federal Acquisition Regulations System acknowledges that partnering may be used in the context of alternative dispute resolution. According to the State supplement, this partnering involves an agreement in principle to share the risks involved in completing a project, and to establish and promote a partnership environment. It notes that partnering itself is not a contractual agreement and it does not create any legally enforceable rights. Instead, partnering seeks to create a new cooperative attitude in completing government contracts. The three basic steps in partnering identified by State’s supplement are as follows: 1. Establish the new relationship through personal contact among the principals for the government and the contractor before the work begins. 2. Prepare a joint statement of goals establishing common objectives in specific detail for reaching the goals. 3. Identify specific dispute prevention processes designed to head off problems, evaluate performance, and promote cooperation. Both the General Services Administration (GSA) and the U.S. Army Corps of Engineers call for partnering as a preferred management process on all major projects as a cooperative approach with their contractors to resolve problems and reduce conflicts, litigation, and claims. For example, GSA recommends formal partnering for all construction projects developed by its Public Building Services in excess of $10 million in estimated construction costs. One GSA executive official we spoke with cited partnering as a best practice that can mitigate cost growth and schedule delays by providing a more collaborative process to reach fair and equitable decisions faster to the benefit of both the government and the contractors. The Corps of Engineers has recommended formal and professionally facilitated partnering as an integral element on designated “mega projects,” which generally are those costing in excess of $200 million, have schedules that exceed 2 years, or have national or international significance, among other considerations. 
The Corps of Engineers reports that partnering is an organized process that can remove organizational impediments to communication and is consistent with the government’s implicit duty to act in a fair and reasonable manner. Long-Standing CSCP Contractors May Be Better Able to Informally Partner with OBO Than Less Experienced Contractors As of the end of fiscal year 2017, there were five construction contractors building new embassies under the CSCP. For the 21 ongoing new embassy construction projects, 18 (approximately 85 percent) were under contract with 2 construction contractors who have historically received the majority (60 percent) of OBO construction contract awards since 1999. The 3 other projects were each being built by one of the contractors less experienced with embassy construction. Two of our case studies— Jeddah and Jakarta—included work begun by two earlier contractors that had been terminated by State, according to OBO officials. Further, the two long-standing OBO construction contractors were awarded 9 of the 10 new embassy construction projects in fiscal years 2016 and 2017. In our discussions with OBO officials, they recognized that they have had persistent challenges in bringing new contractors into the CSCP but retain an interest in expanding OBO’s contractor pool. Left unaddressed, some contractors’ frustrations with OBO projects may be a factor in shrinking State’s contractor pool. Three of the five contractors we spoke with (all less experienced with embassy construction) indicated they will not be pursuing future embassy projects because they believe State has not acted as a fair partner in overseeing its embassy construction projects. Examples of negative perceptions some of the contractors cited from their perspective included that State had not been fair in working with the contractors to resolve challenges such as design-related issues, security-related issues, government- directed changes, or unique issues posed by the countries in which the projects were located. Contractors said that such issues affected their and State’s costs and schedules. OBO officials acknowledged that OBO’s relationships with some contractors have posed challenges and saw both parties as bearing some responsibility. They also acknowledged that two long-standing OBO contractors continue to build most new embassies, and they expressed an interest in expanding OBO’s contractor pool. We reported similar collaboration concerns raised by OBO contractors in 2009 when we examined contractor participation in the embassy construction program and the decline in the number of contractors participating in the program. In 2009, 10 of 17 construction contractors rated State as a poor or fair business partner. Of the 17 construction contractors surveyed in 2009, 3 had received about 62 percent of OBO’s construction contract awards from 2001 through 2007. In general, many contractors at that time told us they were not making as much profit as anticipated and most contractors also expressed concerns about State’s management of the program and State’s on-site PDs. Executive officers from one of OBO’s two long-standing contractors stated that due to the firm’s experience with CSCP, it is able to carry out informal partnering with OBO to better address project challenges. 
This firm knows OBO’s requirements and processes, as well as who within OBO headquarters to call to discuss specific issues, if the firm believes that such issues are not being addressed in a timely manner by the PD or contracting officer. For example, this long-standing contractor said it was generally able to overcome a variety of project issues and disagreements in Pristina, Kosovo, through the firm’s knowledge of, and informal partnering relationship with, OBO headquarters. The OBO PD and contractor’s on-site project manager could not reach resolution on the cost or schedule impacts of a variety of issues including (1) the extent to which State’s approving the contractor’s local-hired construction workers was delayed, (2) the timing and responsibility for bringing permanent power to the site, and (3) the extent to which there were differing site conditions requiring the contractor to excavate unsuitable soils and existing foundations. In January 2018—after our September 2017 site visit to Pristina—this contractor’s headquarters office was able to resolve some issues with OBO and State’s Office of Acquisitions that otherwise had not been resolved by their on-site project manager and OBO’s PD. The firm’s executive officers stated that contractors with personnel less experienced with the CSCP—including with OBO and Diplomatic Security requirements and procedures—do not have this in-depth knowledge and may experience greater challenges on their first few CSCP projects as a result. They suggested that OBO should utilize more formal partnering to bring new contractors into the program. They also cited the need to have both formal and informal processes for elevating and resolving issues in order to provide accountability at all levels and ensure that issues are addressed in a timely manner. The contractor believes that formal partnering could lessen the learning curve for new firms, reduce the conflicts between OBO PDs and the contractors’ project managers on- site, and keep more firms in the program. Formal partnering, particularly with firms that have less experience with embassy construction, could help avoid adversarial working relationships between OBO and contractors that inhibit swift resolution of, or even exacerbate, challenges experienced on already-complex projects. Where more cooperative project relationships—informal partnering—occurred on our case-study projects, either on-site between OBO’s PD and the contractor or between OBO headquarters and the contractor, we found that this dynamic helped to more easily resolve challenges and facilitate project efficiency. In discussing the possible use of partnering on OBO projects, one senior OBO official reported that while OBO utilized formal partnering to a limited extent on some the CSCP’s early projects— Nairobi, Kenya; Tunis, Tunisia; and Conakry, Guinea—he commented that he did not think it was very valuable. In his view, it seemed like the contractors at that time were using the partnering agreement to claim that OBO was not partnering properly. However, other OBO officials stated that they understood the practice of partnering by some federal agencies has evolved since that time. OBO officials agreed that although OBO had not considered formal partnering recently, it could potentially be useful, particularly if tried on smaller projects with new contractors or those less experienced with embassy construction. They added that such piloting would have to be done judiciously to determine how it might best work. 
We note that two of those three early projects OBO identified as having used formal partnering included the current two long-standing contractors that have completed most of the CSCP embassy projects. Conclusions In passing the Secure Embassy Construction and Counterterrorism Act of 1999, one of the congressional findings was that, unless embassy vulnerabilities are addressed in a sustained and financially realistic manner, the lives and safety of U.S. employees in diplomatic facilities will continue to be at risk from further terrorist attacks. State’s CSCP made steady progress through fiscal year 2017, completing 77 new embassies and starting construction on 21 more at a cost of just over $24 billion. However, State will not achieve its 2005 forecast for building 150 embassies by 2018 because progress has been hampered, in part, by unforeseen building requirements and inflation that were not originally factored into CSCP funding levels. These issues will affect State’s progress as it continues to replace embassies that do not meet security standards. Because State does not currently intend to seek annual inflationary adjustments for CSCP, although individual projects do address inflation to some extent, the CSCP’s pace of embassy production is likely to be reduced over time. As State continues to work with the Congress to chart the future course, priorities, and funding levels for the program, regular information on the effects of cost inflation would be helpful as stakeholders reassess the CSCP program’s funding level from year to year. Moreover, State plans to begin the construction of 25 new embassies within the next 5 years and has identified the need to replace nearly 50 additional embassies in later years. While the CSCP schedule identifies future embassy replacements, it does not address the projected cost and time needed to achieve the CSCP’s ultimate critical goal of replacing embassies that do not meet State’s security standards. Recognizing that precise estimates cannot be easily made for later years, even a notional long-term estimate of the CSCP’s overall capital funding investment and time frames, along with an assessment of risk factors such as inflation, would strengthen State’s ability to support and sustain its funding needs, encourage dialogue with congressional committees, and promote consensus by decision makers in the executive and legislative branches on funding levels and expectations for program progress. Additionally, State’s shift to the Excellence approach was predicated on the idea that customized designs would produce embassy compounds that are more innovative, functional, and sustainable than those built using the SED approach, and would also be at least as secure and more cost efficient to operate. It is too early to tell whether this greater upfront investment in design will yield cost and schedule benefits during construction of Excellence embassies or over their life cycle. While past embassies have generally been completed within expected cost and schedule allowances, given the number of embassies yet to be built to meet urgent security needs amid a constrained resource environment, it remains incumbent upon State to realistically assess its ability under Excellence to deliver embassies as efficiently as possible. 
By comprehensively evaluating its human capital needs against CSCP priorities and other workload demands, OBO can provide program stakeholders—including State, OMB, and Congress—the ability to make fully informed choices as to the capacity of OBO’s design and construction organization to support embassy production of these embassies in the near and longer term. Furthermore, formal partnering could provide OBO with a tool to enhance collaboration both on-site and between contractors and OBO headquarters. This could mitigate the unforeseen issues that arise on all of these challenging overseas projects, but which may be more complicated to resolve for Excellence projects because each one is unique. Piloting a partnering program, particularly with newer or less experienced construction firms could also provide one option to facilitate State’s long-standing goal of expanding its contractor base. Recommendations for Executive Action We are making the following four recommendations to State: The Secretary of State should determine the estimated effects of cost inflation on planned CSCP embassy construction capacity and time frames and update this information for stakeholders on a regular basis, such as through the annual budgeting process. (Recommendation 1) The Secretary of State should provide an analysis for stakeholders identifying those embassies that still need to be replaced to meet State’s security standards and estimating total CSCP costs and projected time frames needed to complete those projects. (Recommendation 2) The Secretary of State should ensure that the Director of OBO conducts an OBO-wide workforce analysis to assess staffing levels and workload capacity needed to carry out the full range of OBO’s mission goals, to include the CSCP. Such an assessment could provide a basis for broader stakeholder discussion of OBO’s human capital needs and potential prioritization of activities. (Recommendation 3) The Secretary of State should ensure that the Director of OBO pilots formal construction partnering for the CSCP, particularly with construction firms that are new or less experienced with the program. (Recommendation 4) Agency Comments and Our Evaluation We provided a draft of this report to State for comment. State provided written comments that are reprinted in appendix IV. In its letter, State concurred with our four recommendations and described actions planned to address each of them. In addition, State made several observations, including that it has moved beyond Excellence to pursue several new initiatives that aim to lower project and long-term operations and maintenance costs. We acknowledge the continued evolution of State’s CSCP. However, our recommendations transcend the pros and cons of any particular delivery method and will be helpful to State, and stakeholders, as it works to improve the design and construction of new embassies. State also provided technical comments on the draft, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Secretary of State. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact either Brian M. 
Mazanec at (202) 512-5130 or at mazanecb@gao.gov or Lori Rectanus at (202) 512-2834 or at rectanusl@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology This report examines (1) the pace of the Department of State’s (State) Capital Security Construction Program (CSCP) in constructing new embassies, (2) the cost and schedule performance of State’s recent embassy construction projects, and (3) factors that have affected State’s ability to deliver construction projects efficiently. To conduct this review, we obtained information from agency planning, funding, and reporting documents and interviewed State officials within the Bureau of Overseas Buildings Operations (OBO); the Bureau of Diplomatic Security (Diplomatic Security); and the Office of Acquisitions Management. Within OBO, we spoke with officials from offices responsible for site acquisition, planning, project development, design and engineering, cost management, construction management, facility management, policy and program analysis, and financial management. We also interviewed officials from construction contractors that have constructed embassies for State. To examine the pace of the CSCP, we reviewed OBO project completion data for projects awarded from fiscal year 1999 (after the two embassy bombings in Africa) through the end of fiscal year 2017. We then compared these data against the goals of the program as reported in State documentation, such as past budget justifications and long-term planning reports. We also compared completion data against CSCP funding levels since fiscal year 1999, and further compared those funding levels with recommendations in the Accountability Review Board reports from 1999 and 2012 (following terrorist attacks against U.S. facilities). We also examined OBO’s CSCP schedule outlining embassies planned to begin construction through fiscal year 2022 and other embassies identified beyond that time frame. We further consulted GAO’s guide to leading practices in capital decision-making as well as budget guidance from the Office of Management and Budget (OMB). We also attempted to assess CSCP performance in moving U.S. government staff into secure facilities but found State’s data unreliable for this purpose. To examine the cost and schedule performance of State’s recent embassy construction projects, we selected projects awarded from fiscal year 2008 through 2017. We chose fiscal year 2008 because that year OBO modified its Standard Embassy Design (SED) delivery program to allow for more bridging design to better tailor the SEDs to specific sites. This time frame would also capture Excellence-like projects awarded between the introduction of Excellence in 2011 and the full implementation of Excellence in 2014, as well as pure Excellence projects awarded in 2014 and later. Of the embassy construction projects awarded since fiscal year 2008, we identified 22 completed projects and another 21 underway. To assess the cost performance of these projects, we used cost data drawn from the Federal Procurement Data System and back-checked against OBO-provided contract data, which we found to be sufficiently reliable for our purposes. 
We then compared any increases in cost from the contract value at award to OBO’s general cost contingency for unforeseen changes on embassy construction projects, which ranges from 5 to 10 percent. To assess schedule performance, we compared construction durations from contract documentation with a benchmark of 36 months. We used that benchmark because, in the past, OBO has maintained that a SED would generally take no more than 36 months to construct and that construction durations would not be any different under Excellence. This benchmark was further informed by past GAO reporting. We did not assess the cost or schedule performance of the 21 projects still ongoing at the end of fiscal year 2017. Because these were ongoing projects at different stages of construction, we could not determine whether they would finish within their budget contingency, nor could we assess their final schedule performance. Furthermore, because no pure Excellence projects had been completed by the end of fiscal year 2017, we could not compare cost increases or schedule performance of Excellence projects with SED projects. To examine factors that have affected State’s ability to deliver construction projects efficiently, we selected nine construction case studies out of our universe of projects awarded in fiscal year 2008 through fiscal year 2015, and funded through CSCP. Criteria for selection included projects with construction contract cost increases, actual or estimated, of more than 5 percent over the life of the contract projects, as well as projects whose construction duration exceeded, or was estimated to exceed, 36 months. We also sought to include as many different contractors, delivery types (e.g., design-bid-build), and construction approaches (e.g., Excellence) as possible. Our final nine construction case studies included projects in Kyiv, Ukraine; Monterrey, Mexico; Santo Domingo, Dominican Republic; Bishkek, Kyrgyzstan; Jakarta, Indonesia; Jeddah, Saudi Arabia; The Hague, Netherlands; Pristina, Kosovo; and Port Moresby, Papua New Guinea. Because many of OBO’s pure Excellence projects were more recently awarded, we also reviewed the design contracts for Hyderabad, India, and Beirut, Lebanon. For each case study, we examined Federal Procurement Data System data, OBO project data and documentation, as well as official contract documentation—including modifications that involved changes in cost or schedule. Additionally, for each of our case studies, OBO compiled information from its Office of Project Development and Coordination, Office of Construction Management, Office Cost Management, and Office of Financial Management into project narratives. Each narrative was then cleared by project managers, project directors, office directors, and managing directors of the affected directorates. In general, we attribute information from these narratives to OBO. We also interviewed relevant OBO and contractor officials involved with the projects, including on-site personnel from both completed and ongoing projects. In September 2017, we conducted fieldwork in Jeddah, Jakarta, The Hague, and Pristina to observe and discuss construction progress with on-site U.S. embassy and contractor officials. U.S. embassy officials we spoke with included those responsible for construction, facilities maintenance, post management, and security. 
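To illustrate the schedule benchmark comparison described in this appendix, the following sketch computes an approximate construction duration in months between a notice-to-proceed date and a substantial-completion date and flags whether it exceeds the 36-month planning allowance. The dates shown are hypothetical, and the month calculation is one simple convention rather than State's method.

# Minimal sketch (hypothetical dates, not State data): approximate construction
# duration in months from notice to proceed to substantial completion, compared
# with the 36-month planning allowance used as a benchmark in this review.
from datetime import date

def duration_in_months(start, end):
    """Whole months elapsed, counting a partial final month as a full month."""
    months = (end.year - start.year) * 12 + (end.month - start.month)
    return months + (1 if end.day > start.day else 0)

BENCHMARK_MONTHS = 36

projects = {
    "Project X": (date(2012, 4, 15), date(2015, 2, 20)),
    "Project Y": (date(2011, 9, 1), date(2015, 6, 30)),
}

for name, (notice_to_proceed, substantial_completion) in projects.items():
    months = duration_in_months(notice_to_proceed, substantial_completion)
    status = "within" if months <= BENCHMARK_MONTHS else "exceeds"
    print(f"{name}: {months} months ({status} the 36-month benchmark)")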
To further explore issues arising from our case studies we obtained information from OBO planning, funding, and staffing documents and also interviewed State and contractor officials in Washington. We also reviewed the results of our 2016 survey of OBO staff. Specifically, we have included narrative responses from that survey commenting on issues we encountered during our audit work for this report. In some cases we edited responses for clarity or grammar. Views expressed in the survey may not be representative of all OBO staff views on given topics. We conducted this performance audit from April 2017 to September 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Cost and Schedule Status of Ongoing Embassy Construction Projects as of the End of Fiscal Year 2017 This appendix contains contract values and schedule information for 21 embassy construction projects that were ongoing as of the end of fiscal year 2017. Table 4 shows contract values for these projects, while figure 13 illustrates schedule information. Appendix III: Embassy Construction and Design Case-Study Projects This appendix contains information on selected U.S. Department of State (State) Bureau of Overseas Buildings Operations (OBO) case-study projects included in our review. Nine studies focus on the construction phase of the projects, and two are design case studies. For each case study, we examined Federal Procurement Data System data, OBO project data and documentation, as well as official contract documentation—including modifications that involved changes in cost or schedule. We also interviewed relevant OBO and contractor officials involved with the projects, including on-site personnel from both completed and ongoing projects. For details on our selection of the projects and our case-study methodology, see appendix I. For the nine construction case studies, we include timelines showing dates for notices to proceed, the original estimated completion dates, and either (a) the actual substantial completion dates (for projects completed as of end of September 2017) or (b) the scheduled completion date (for ongoing projects as of the end of September 2017). The start and end points by which State measures the schedule performance of a project are, respectively, the date when State issues a notice to proceed and the date when it issues a notice of substantial completion. During the course of a project, State may grant schedule extensions for reasons such as (1) changes (i.e., change orders); (2) government-caused delays (e.g., delays in issuing a notice to proceed to the contractor); (3) differing site conditions than represented in the contract; or (4) excusable delays (e.g., for circumstances that could not reasonably be foreseen or avoided). Similarly, while the value of construction contracts increased for all of the construction case-study projects we reviewed, State typically reserves a contingency amount in its project budgets—ranging from 5 to 10 percent of the contract value at award—to cover unforeseen project changes and cost increases. 
OBO’s overall project budgets also include funding for other nonconstruction costs and contracts, such as planning, design, and on-site project management and security. We are not reporting overall project budgets. As of August 2018, Jakarta and Jeddah were the only ongoing case-study projects for which State reported to us that it had notified Congress of the need to reprogram funding to cover additional costs. For each of the following case studies, OBO compiled information from its Office of Project Development and Coordination, Office of Construction Management, Office of Cost Management, and Office of Financial Management into project narratives. Each narrative was then cleared by project managers, project directors, office directors, and managing directors of the affected directorates. In our case studies, we generally attribute information from these narratives to OBO. For contract value at award and contract value as of the end of fiscal year 2017, we relied upon data from the Federal Procurement Data System. We used OBO narratives and project data and documentation as the basis for our description of the contract delivery type, the date of award, dates of issuance for notice to proceed and substantial completion, and original estimated completion date. The discussion in the following case studies of notable contract actions, such as modifications, requests for equitable adjustment, and terminations, is based upon OBO project narratives and contract documentation, as well as statements by government and contractor officials. In some but not all cases, we had relevant contract documentation available to compare against OBO project data and documentation or what OBO officials told us. Table 5 lists the case-study projects described in this appendix, ordered by the fiscal years in which the construction or design contracts were awarded. Construction Case Study: U.S. Embassy in Kyiv, Ukraine Project Overview State established the U.S. Embassy in Kyiv in 1991 upon the dissolution of the Soviet Union. According to OBO, State originally redeveloped and rehabilitated an existing leased facility, built in 1950, to serve as the chancery office building, while post consular services, public diplomacy, and the Marine security guard quarters were in leased facilities off-site. OBO reported that while security improvements were made at these locations over the years, none of these buildings fully satisfied the Secure Embassy Construction and Counterterrorism Act of 1999 security standards, such as the 100-foot setback from the street. In September 2008, State purchased a 10-acre site to build a new embassy compound. A $209.9 million design-build contract for the new embassy was awarded in September 2008. The project was based on the Standard Embassy Design (SED). As of September 2017, the contract value was $238.6 million, an increase of $28.7 million or 13.7 percent. According to OBO, State issued a notice to proceed in March 2009, and the original estimated completion date was November 2011. Substantial completion was in September 2011, 2 months early. Figure 14 shows two views of the new embassy and the timeline for the original schedule compared with the final schedule. Contributors to Contract Cost or Schedule Changes According to OBO, the most significant change to the contract was the addition of an annex for the U.S. Agency for International Development (USAID), at a cost of $28 million.
OBO reports that State granted the contractor a 12-month extension for adding the annex; however, the project was completed 2 months ahead of the original estimated contract completion date. Both State and contractor officials observed that each side worked very cooperatively to mitigate cost and schedule effects of adding the USAID office annex. The other major cost driver was a $4.6 million contract modification to address utility issues. One State official reported that site utility and below grade infrastructure requirements were challenging given the cold climate. Construction Case Study: U.S. Consulate in Monterrey, Mexico Project Overview According to OBO, State had occupied the former U.S. consulate in Monterrey since 1969, and further, the facility did not meet security standards set by the Secure Embassy Construction and Counterterrorism Act of 1999. State documented shortcomings with the building’s air conditioning and electrical systems. OBO documentation indicated the former site also lacked the space to accommodate staff growth in U.S. agencies’ offices and the consulate’s functions. OBO reported that consular demand for services had increased significantly—from 12 to 65 consular windows—and the consulate overall had added desks for 60 U.S. staff and 132 locally engaged staff. In September 2009, State awarded a $101.9 million design-build contract for the new consulate based on a bridging design. The bridging design was based on the standard embassy design. As of September 2017, the contract value was about $125 million, an increase of $23.1 million or about 23 percent. According to OBO, State issued the notice to proceed for the project in April 2010 and the original estimated completion date was January 2013. Substantial completion occurred in May 2014, 16 months after the original estimated completion date. Figure 15 shows two views of the new consulate and the timeline for the original schedule compared with the final schedule. Contributors to Contract Cost or Schedule Changes According to State, the scope of work in the original contract for the compound included a consulate office building, vehicle maintenance building, access control facilities, recreational facility, parking structure, mail screening facility, site perimeter barrier, and associated security features as well as off-site roadway construction and improvements. OBO documentation shows the primary cost and schedule increase on the project was due to the addition of Marine security guard quarters; that contract modification increased the value of the contract by $16.3 million and also extended the length of the contract by 337 days. OBO reported that another contract modification—adding a photovoltaic power system to the project—increased the contract value by $2.3 million. Construction Case Study: U.S. Embassy in Santo Domingo, Dominican Republic Project Overview According to State, it built the former U.S. embassy in 1950 to accommodate 75 staff. Prior to the construction of the new embassy, the U.S. mission comprised 17 U.S. government agencies employing hundreds of people working in eight office buildings throughout the city. OBO reported that most of those buildings did not meet security setback standards that Congress established in 1999 or fire and life safety codes. According to OBO, in September 2010, State awarded a $148.8 million design-build contract for the new embassy based on a bridging design. The project was based on the SED. 
As of September 2017, the contract value was about $150.4 million, an increase of about 1 percent. According to OBO, State issued the notice to proceed for the project in January 2011 and the original estimated completion date was October 2013. Substantial completion occurred in May 2014, 7 months after the original estimated completion date. Figure 16 shows an architectural rendering and photograph of the new embassy office building, along with the timeline for the original schedule compared with the final schedule. Contributors to Contract Cost or Schedule Changes Five contract modifications totaling about $1.6 million accounted for most of the cost increase. The largest modification totaled over $600,000, which OBO told us resulted from a need to increase switchgear capacity. According to OBO, in the several years since substantial completion, the contract has not been closed, and unexpended funds remain. As a result, the final contract value may change. OBO reported that the contractor had submitted one outstanding request for equitable adjustment for about $450,000. Further, on its side, the U.S. government is withholding around $7 million for liquidated damages, punch list deficiencies, and warranty items. Construction Case Study: U.S. Embassy in Bishkek, Kyrgyzstan Project Overview According to OBO, State established the U.S. embassy in Bishkek in 1991 after the dissolution of the Soviet Union. Specifically, a pre-engineered factory-manufactured building was shipped to Bishkek and assembled on an 11-acre, U.S. government-owned site in 1996. OBO reported that by 2008, the U.S. diplomatic mission had outgrown the 1996 facility, and in 2009 it became clear that, in addition to new facilities, significant security upgrades were needed to meet current security standards. The new embassy project was to include a chancery (office annex), utility building, Marine security guard quarters, compound access control facilities, support buildings (warehouse and shops), and surface parking. A $116.8 million design-build with bridging contract for the new embassy was awarded in April 2011. The project was based on the SED. As of September 2017, the contract value was about $123.3 million, an increase of $6.5 million, or about 5.6 percent. According to OBO, State issued the notice to proceed for the project in July 2012, and the original estimated completion date was December 2014. Substantial completion occurred in March 2017, 27 months after the original estimated completion date. Figure 17 shows the 1996 chancery office building, the new chancery office building, and the timeline for the original schedule compared with the final schedule. Contributors to Contract Cost or Schedule Changes A number of factors contributed to increases in contract cost and schedule for this project. For example, according to OBO, off-site electrical power upgrades required a $2 million contract modification for switchgear installation and building a redundant power line to a substation 3 kilometers away from the new embassy compound. OBO also reported that one contract modification extended the schedule by 37 days and added $3.4 million to the contract because State temporarily halted the contractor’s work in some areas of the building due to Bureau of Diplomatic Security (Diplomatic Security) requirements.
According to State and contractor officials, challenges to the project included (1) a six- phase construction plan to accommodate building on an operational compound; (2) frequent staff changes (including four on-site OBO project directors, two OBO construction executives at headquarters, and many contractor staff changes) and poor relations between OBO and the contractor; and (3) disagreement regarding OBO and contractor roles and responsibilities (including, for example, responsibility for obtaining zoning permits). Construction Case Study: U.S. Embassy in Jakarta, Indonesia Project Overview According to OBO, the U.S. government-owned chancery in the Indonesian government center in Jakarta was built in the 1950s, and its mechanical, electrical, and plumbing systems are outdated, inefficient, and expensive to operate. State decided to build a new, secure embassy on the current embassy site. When completed, the embassy compound will include a chancery, Marine security guard quarters, support facilities, preserved historic structures, community facilities, and parking. A $302 million design-build with bridging contract for the new embassy was awarded in September 2012. The project was guided by State’s Excellence in Diplomatic Facilities principles but was awarded before OBO fully implemented Excellence in 2014. As of September 2017, the contract value was $339 million, an increase of $37 million, or 12 percent. According to OBO, State issued a notice to proceed for the project in December 2012, and the original estimated completion date was December 2017. OBO reported to us that by September 2017 the scheduled completion date had been extended to February 2019, 14 months after the original estimated completion date. Figure 18 shows a model of the embassy compound, the new chancery office building under construction and the timeline for the original schedule compared with the schedule as of the end of September 2017. Contributors to Contract Cost or Schedule Changes Before the current construction contract for the new embassy on the existing embassy compound, State separately contracted for the construction of temporary office buildings to relocate staff during construction. According to OBO, the work on the temporary office buildings fell behind the contracted schedule and would not be completed before the new embassy contractor’s arrival on-site. Consequently, State terminated the first contract for temporary buildings and awarded the remaining work to the current contractor. OBO encountered significant challenges due to its decision to employ a glass curtain-wall system for the new embassy’s chancery office building. OBO project documentation shows the use of the customized glass exterior wall designed for the embassy significantly impacted cost and schedule after contract award, adding at least $18 million to the cost and 180 days to the schedule. OBO’s decision to employ a unique glass curtain-wall system for this project and subsequent questions raised by Diplomatic Security about the design led OBO to modify the contract to add (1) $2.2 million and 180 days to explore alternative designs and conduct redesign work in order to obtain Diplomatic Security approval, (2) $13.3 million so that a dedicated facility could be established in the United States to securely fabricate the glass curtain wall before secure shipment to the site, and (3) $3 million to have cleared American workers install portions of the glass curtain wall. 
OBO had not previously employed such a system in a completed embassy project and could not provide us with documentation analyzing the risks of such a feature to cost and schedule—which might have included potential delays to get Diplomatic Security’s approval of the design—compared with conventional concrete construction. As of the end of September 2017, OBO reported that State and the contractor had agreed to extend substantial completion to February 2019 after settling the contractor’s request for equitable adjustment, which had claimed that five events delayed construction: (1) the late turnover of unimpeded access to the early site work; (2) the redesign of compound access facilities; (3) the redesign of portions of controlled areas of the building; (4) additional time for the certification of security requirements, specifically related to the curtain-wall system; and (5) design changes to the curtain-wall system itself. Post officials also expressed concerns that this new embassy compound was originally planned to accommodate only the U.S. embassy to Indonesia. Subsequently, State opened a permanent mission to the Association of Southeast Asian Nations in Jakarta to be collocated within the new embassy. Because of this and other staff growth, U.S. embassy officials told us that the new embassy will have little to no room for future growth. Construction Case Study: U.S. Consulate in Jeddah, Saudi Arabia Project Overview According to OBO, the current consulate, built in 1952, served as the chancery before the U.S. embassy moved to Riyadh in 1984. In 2004, an attack on the consulate resulted in the deaths of five employees and injuries to many more. The new Jeddah compound will include a consulate office building, staff housing, ambassador’s residence, consul general’s residence, Marine security guard quarters, and various supporting facilities. OBO reported that construction of the new consulate started under a design-build contract awarded in 2007, but the construction contractor was terminated for default in 2012, leaving State with a partially built project. In September 2012, State awarded a $100.5 million construction contract for the new compound to a second contractor. The project was based on the SED. As of September 2017, that second contract value was $131.3 million, an increase of $30.8 million, or 30.6 percent. According to OBO, State provided notice to proceed in October 2013, and the estimated completion date was October 2015. According to State documentation, this completion date was subsequently extended to February 2017. State and contractor officials told us that, at the end of September 2017, a modification was pending that would further extend the schedule to January 2018, 27 months after the original estimated completion date. Figure 19 shows the existing consulate, the new consulate office building under construction, and the timeline for the original schedule compared with the schedule as of the end of September 2017. Contributors to Contract Cost or Schedule Changes According to OBO, State hired a design firm—previously a subcontractor of the first construction contractor—to finish the design so that contract bids could be solicited from new contractors to complete the project. In doing so, State effectively changed the project delivery method from design-build to design-bid-build, whereby it directly contracted the design firm to finish the construction documents and then contracted a construction firm to build the project.
Both State and contractor officials reported to us that this project was consistently challenged by design errors and omissions. According to OBO, approximately $14 million of the nearly $31 million cost increase—and 131 calendar days—were due to issues with this project’s design. According to State and contractor officials, the project was generally completed in March of 2017, which both sides termed “virtually substantially complete.” However, they stated that significant issues with the consulate building’s cooling and fire suppression systems effectively prevented OBO from contractually accepting the project as complete and allowing consulate staff to move in. As of September 2017, State and the contractor could not provide a firm date for when they expected consulate staff to be able to occupy the new compound. Both OBO and contractor officials acknowledged that a difficult working relationship slowed efforts to deal with project challenges. For example, they stated the project had at least four different OBO project directors. One OBO official characterized the collaboration on the project by State, the contractor, and State’s designer as “having a lot of conflicts” and said that as problems with the project arose during construction, all parties “dug their heels in.” In September 2017, one official indicated the then-temporary Project Director had improved the working relationship with post and the contractor and was doing his best to work through the current issues and delay. Disagreement also arose regarding timely response to proposed changes; the contractor maintained that OBO headquarters was delaying work due to slow decision-making, while OBO maintained that the contractor’s proposals did not meet requirements. The functionality of the completed compound may also be affected by several issues. According to post officials, after the February 2015 closure of the U.S. Embassy in Yemen, State relocated some of those staff to Jeddah, requiring the conversion of five newly built apartments into office space. Post officials also reported that the original plan for the staff apartments was predicated on the post remaining an unaccompanied duty assignment whereby U.S. staff may not bring family members. Those officials expressed concern that space would become limited because family members are now allowed to accompany Foreign Service Officers to Jeddah. An additional concern was that the consulate was originally intended to provide consular services only for U.S. citizens but was now authorized to issue nonimmigrant visas for Saudis seeking to travel to the United States, which post officials predicted would increase consular traffic flow beyond the new building’s intended volume. Construction Case Study: U.S. Embassy in The Hague, Netherlands Project Overview According to OBO, the previous U.S. embassy in The Hague was located on a downtown square opposite the Netherlands Parliament. Completed in 1959, the chancery sat directly adjacent to a major road and sidewalks and did not meet State security standards set by the Secure Embassy Construction and Counterterrorism Act of 1999. The new embassy compound is located within the municipality of Wassenaar, adjacent to The Hague. The compound includes a chancery office building, Marine security guard quarters, support buildings, and parking. According to OBO, the design phase included a lengthy site planning, landscape design, and architectural design period due to local ordinances and stringent permitting requirements.
OBO reported that this design contract was awarded in November 2012, and the design was completed in July 2013. The project delivery method was design-bid-build. A $125 million construction contract for the new embassy was awarded in September 2013. As of September 2017, that contract value was $131.7 million, an increase of about $6.7 million, or approximately 5 percent. According to OBO, State issued a notice to proceed for the project in June 2014, and the estimated completion date was June 2017. In September 2017, OBO reported that it and the contractor had extended the contract completion date to July 2017. In addition, as of September 2017, OBO and the contractor were negotiating over further cost and schedule changes. Figure 20 shows a historical photo of the 1959 embassy, the new embassy under construction, and a timeline showing the original schedule compared with the schedule as of the end of September 2017. Contributors to Contract Cost or Schedule Changes According to OBO, the official permit for construction was received in August 2013, with an effective date of September 2013. However, the permit was issued with a number of conditions that OBO reported took approximately 9 months for State to resolve and resulted in a delay of full notice to proceed until June 2014. Both OBO and the contractor said that the two sides worked cooperatively to resolve permitting issues raised by the local government. Based on OBO reporting, these issues contributed, in part, to over $1 million in cost modifications on the contract. Further, technical omissions that were not discovered during design review resulted in changes to sprinklers, fire alarms, security window treatments, and classified data interconnections. According to OBO, these late changes resulted in further requests for time extensions from the contractor. In addition, according to OBO, State did not plan for the colocation of one tenant agency onto the compound (8 people) and a second tenant agency increased its staffing by approximately 40 percent (19 people). Because of those staffing changes, post officials reported that there is no additional space for future growth in the new compound. Construction Case Study: U.S. Embassy in Pristina, Kosovo Project Overview According to OBO, State established this post in 1999 as a U.S. liaison office during the military intervention in Kosovo by North Atlantic Treaty Organization forces. When the U.S. government opened the post, OBO reported that it assembled a number of contiguous residential properties under short-term leases and closed the adjacent streets. State designated the post as an embassy in 2008. Figure 21 shows some of the existing houses that State converted for use as the embassy. In September 2014, State awarded a $158.4 million design-build contract for the new embassy under a bridging design. The new embassy is one of the first projects fully designed and constructed under the Excellence approach. As of September 2017, that contract value was $159.6 million, an increase of less than 1 percent. According to OBO, State issued a notice to proceed for the project in December 2014, and the original estimated completion date was October 2017. As of September 2017, completion was scheduled for January 2018, 3 months after the original estimated completion date. Figure 22 shows an architectural rendering of the new embassy, a photo of it under construction, and a timeline for the original schedule compared with the schedule as of the end of September 2017.
Contributors to Contract Cost or Schedule Changes According to OBO, the largest change in cost resulted from State adding security cameras to improve monitoring of the compound and its facilities. In a separate change, OBO also granted the contractor a schedule extension of 98 days to account for changes in security requirements at project startup, as well as additional funds to cover adjustments made by State to the locations of the recreation facility, pool, and other items relative to the perimeter security wall. In our interviews with them as of September 2017, the OBO Project Director and the contractor’s on-site Project Manager could not reach resolution on the cost or schedule impacts of a variety of issues. These included (1) the delay in State’s approving the contractor’s locally hired construction workers, (2) the timing of and responsibility for bringing permanent power to the site, and (3) site condition issues related to unsuitable soils and existing foundations. Construction Case Study: U.S. Embassy in Port Moresby, Papua New Guinea Project Overview According to OBO, the current U.S. embassy is housed in a building constructed in 1970 in Port Moresby’s business district. The lease will expire in September 2020. Furthermore, the facility is overcrowded, functionally deficient, and does not meet the latest security standards. Also according to OBO, in 2009 the U.S. government acquired a 7.26-acre site for a new embassy compound through a long-term lease from the government of Papua New Guinea. State planned for the new embassy to be a standard secure mini-compound and awarded a construction contract in late 2011 with an estimated completion date in mid-2014. However, according to a State official, because the embassy requirements changed, State decided to terminate the contract for the convenience of the government. The project delivery method is design-bid-build. A $95 million construction contract for the new embassy was awarded in September 2015. As of September 2017, that contract value was $102.5 million, an increase of $7.5 million, or about 8 percent. According to OBO, State issued the notice to proceed for the project in March 2017, after a delay of about a year due to another prospective contractor disputing the contract award. As of September 2017, the estimated completion date remained unchanged at September 2019. Figure 23 shows an architectural rendering of the new embassy, an aerial view of the embassy under construction, and the timeline for the schedule as of the end of September 2017. Contributors to Contract Cost and Schedule Changes According to OBO, in 2013, after the initial contractor had completed approximately 40 percent of the project, State changed the project scope: (1) Staffing was increased from 47 desks to 77 desks, which could not be accommodated in the standard secure mini-compound; (2) classified information processing was added as a new requirement; and (3) a Marine security guard detachment was assigned to post, requiring the addition of a residence for them. Due to these new requirements, according to a State official, State decided to terminate the contract for the convenience of the government. According to OBO, the embassy compound was redesigned under a design contract to accommodate the new project scope. The redesign contract lasted 14 months, from April 2014 to June 2015. OBO reported that when the first contractor stopped work on the standard secure mini-compound, the concrete structures for all buildings on the compound had been completed.
The new design, finished in June 2015, added a four-story office tower next to the existing chancery structure, with additional general work areas and new controlled access areas. The redesigned site also added a nine-bed Marine security guard quarters, enlarged the building for the warehouse and shops, and added upgraded community facilities. According to OBO, further cost increases could accrue because of damage to government-provided equipment left by the first contractor, which may need to be re-purchased. Design Case Study: U.S. Embassy in Beirut, Lebanon Project Overview According to OBO, the embassy currently operates out of a nearly 18-acre compound in East Beirut consisting of a mixture of office and residential facilities that are both government-owned and leased. According to State, this site is severely cramped and does not meet current security standards. The new embassy site consists of just over 44 acres situated on a steep hilltop area near the existing U.S. embassy. State typically seeks to build new embassy compounds on 10 acres of land. OBO noted to us the new compound will include a chancery office building, staff residences, support buildings such as a warehouse and recreation facility, and Marine security guard quarters. Figure 24 shows architectural renderings of the new embassy. Background on the Design The design contract for this design-bid-build project was awarded in September 2014 for $39.6 million. Project documentation indicates the design process included the development of three initial concepts, which were reviewed by OBO’s Industry Advisory Group and OBO senior management. OBO reported a single design concept was selected in January 2015 for further development. The design firm then developed a schematic design (less than 35 percent design) that OBO indicated was approved by the OBO Director in March 2015. The design proceeded through design development (35 percent) and construction document development (60 percent and 90 percent); OBO reported to us the final construction documents were completed in April 2016. Following completion of the 100 percent design and subsequent contract solicitation activities, in December 2016 State awarded a $613.3 million construction contract to build the new embassy. As of September 2017, a notice to proceed for construction had just been issued. Contributors to Design Contract Cost Changes OBO reported that the 100 percent design was completed in April 2016 (19 months after contract award). OBO reports that final design cost by itself was $45.3 million, amounting to a $5.7 million, or about 14.5 percent, increase over the original design contract value. OBO documentation shows the increase in the cost for the project’s design was, in part, attributed to added design for temporary construction support facilities—to include both temporary office space and 40 secure housing units—that would be needed on-site by State’s project management team during the construction. However, the total contract cost as of the end of fiscal year 2017 was $58 million, about 46.5 percent more than the original contract value.
This larger value includes approximately $13 million primarily for “Title II, construction phase services.” Through these services, the design firm provides technical support to OBO during construction to answer the construction contractor’s questions about the design and generally to support OBO’s review of the construction contractor’s material samples, drawings, building systems and product data, test and inspection reports, and any design changes or substitutions. State’s estimated construction costs increased during the project’s design from approximately $500 million to over $660 million due to what OBO reports were challenging site conditions that required the extensive use of retaining walls and engineered foundation systems. Additional perimeter security in the form of guard towers was also added. OBO indicated these scope changes required additional design and increased the construction cost estimate. Design Case Study: U.S. Consulate in Hyderabad, India Project Overview Established in 2009, the U.S. consulate general in Hyderabad is the first new U.S. diplomatic post to open in India since India’s independence in 1947. According to OBO, in 2007 the U.S. government leased the current 4-acre consulate property—which was once used as a palace—for use as an interim consulate location. OBO indicated the new consulate will be built on a 12.3-acre site located in Hyderabad’s financial and high-tech districts. Further, the new compound will include a consulate office building, three compound access facilities, a support annex to include a warehouse, a recreation facility, and Marine security guard quarters. Figure 25 shows the existing interim consulate and an architectural rendering of the new consulate. Background on the Design State issued a task order for the design of the project in September 2014 with the intent that the project would be a design-bid-build project and that the design firm OBO tasked would prepare a 100 percent design. OBO indicated the construction contract for the project was planned to be awarded in fiscal year 2017. However, after beginning initial design, OBO determined that changing the delivery method from design-bid-build to design-build with bridging would allow for an earlier construction contract award in fiscal year 2016. With this change in the project delivery method, the design task order was modified such that OBO’s design firm would provide bridging documents—roughly a 35 percent design—rather than a 100 percent design. The bridging documents would then be used by the design-build construction contractor to complete the design and construct the project. The design firm that had been tasked by OBO to prepare the bridging documents would also (1) review and process design submittals from the design-build contractor, (2) answer any request for information about the bridging design intent, and (3) ensure the design intent represented by the bridging design was maintained throughout design development by the design-build contractor. Contributors to Design Contract Cost or Schedule Changes According to OBO, the bridging design was completed in April 2016 (19 months after the initial contract task order). As noted earlier in this report, OBO project documentation shows the initial design of the building’s unique exterior screen concerned OBO management, leading to more design development by the contract architect, further review by OBO’s design staff, and added cost.
OBO senior management expressed concerns about the look of the screen, mainly that the screen was too traditional compared with the spirit of the design of the building and the rest of the campus, and that the pattern of the screen needed more variation for daylight and views. To respond to these concerns, OBO issued two contract modifications to OBO’s architect for additional design work for the exterior screen. According to OBO, subsequent design development for three alternatives for the screen contributed an additional design cost of about $750,000, raising the final bridging design cost to approximately $10.5 million. That amount does not include the roughly $816,000 for the design firm to provide additional support services during construction, of which OBO reports a minor portion is attributable to ensuring the construction contractor achieved the design intent for the exterior screen. According to OBO data, the design-build contract to complete the design and build the project was awarded in September 2016 at a value of $203 million. OBO also reported the design-build contractor received full notice to proceed with construction in March 2017. As of the end of September 2017, the project was still under construction. Appendix IV: Comments from the U.S. Department of State Appendix V: GAO Contacts and Staff Acknowledgments GAO Contacts Brian M. Mazanec, (202) 512-5130 or mazanecb@gao.gov. Lori Rectanus, (202) 512-2834 or rectanusl@gao.gov. Staff Acknowledgments In addition to the contacts named above, Leslie Holen (Assistant Director), Michael Armes (Assistant Director), David Hancock, John Bauckman, and Eugene Beye made key contributions to this report. David Dayton, Justin Fisher, Alex Welsh, and Neil Doherty provided technical assistance.
Why GAO Did This Study In 1998, terrorists bombed two U.S. embassies in East Africa, killing over 220 people and injuring more than 4,000 others. In 1999, State launched the CSCP with the primary goal of providing secure, safe, and functional workplaces, and OBO adopted a streamlined, standard design for all new embassies. In 2011, OBO shifted to the Excellence approach for new embassies, where greater use of custom designs is intended to improve embassies' functionality, quality, operating costs, and appearance. GAO was asked to review the performance of the CSCP. This report examines (1) the pace of the CSCP in constructing new embassies, (2) the cost and schedule performance of OBO's recent embassy construction projects, and (3) key factors that have affected State's ability to deliver construction projects efficiently. GAO analyzed information from State planning, funding, and reporting documents and interviewed State and contractor officials. As part of an assessment of nine construction case-study projects, selected for cost or schedule increases, GAO conducted four site visits to embassies under construction. What GAO Found The Department of State's (State) Bureau of Overseas Buildings Operations (OBO) has constructed new embassies at a slower pace than forecast due in part to unexpected building requirements and inflation. In 1999 State identified a need to replace 180 embassies. In 2005, with about 30 projects underway, State planned to replace the other 150 embassies by 2018. Since 1999, OBO has built 77 embassies under its Capital Security Construction Program (CSCP), at a total cost of about $24 billion as of fiscal year 2017. CSCP's pace has been affected by unexpected additional building requirements, such as office annexes and Marine quarters. Also, CSCP received only one program funding adjustment for inflation since 1999, and State does not intend to seek annual adjustments. Currently, OBO does not provide information on inflationary effects on CSCP or an estimated total capital investment or feasible time frames for the nearly 50 embassies identified for replacement beyond 2022. Lack of such information may affect stakeholders' ability to make informed budget decisions. While cost growth occurred on a majority of completed embassy projects and durations averaged about 36 months, these were generally within budgeting and planning allowances. GAO could not assess performance of Excellence projects because none had been completed as of the end of fiscal year 2017. Staffing workload and contractor collaboration have affected OBO's project delivery. Without an OBO-wide workforce analysis, it is unclear whether OBO's staffing is commensurate with its workload needs. OBO maintains that its office overseeing project design reviews is understaffed, adversely affecting some of its critical functions. Contractors also expressed concerns about the quality of design reviews, which may be affected by a staffing shortage and the use of temporary contractors. Also, OBO and contractor officials acknowledged weaknesses in collaboration, particularly with regard to contractors less experienced with embassy construction. Of the five contractors GAO spoke with, three said they are unlikely to pursue future projects because of issues working with OBO. Formal construction partnering—an industry best practice—between OBO and its contractors could help avoid adversarial relationships that inhibit swift resolution of issues. 
OBO's two long-standing contractors that have completed most of the CSCP embassy projects participated in early projects OBO identified as having used formal partnering. What GAO Recommends GAO recommends that State (1) provide information on the estimated effects of inflation on planned projects, (2) provide an analysis of estimated total costs and time frames to complete the CSCP, (3) conduct an OBO-wide workforce analysis, and (4) pilot formal construction partnering. State concurred with GAO's recommendations and also conveyed that it is now pursuing other initiatives beyond Excellence.
Background The livelihood of cattle producers, such as cow-calf operators and feeders, depends fundamentally on the price they receive for their cattle and the cost to produce these cattle. Numerous supply and demand factors can affect both prices and costs. For example, the long production cycle for cattle means that producers must make decisions about herd size long before they can price and sell their cattle. Producers’ profits also hinge on how weather affects the supply and cost of forage and feed grains. Additionally, the outcome for producers depends on the effect of consumer preferences on demand for and price of beef. International trade in cattle and beef and competition from other protein sources—such as poultry and pork—are also among the many supply and demand factors that influence cattle prices and producers’ incomes. Cattle Production Cycle and Recent Price Trends for Fed Cattle The cattle production cycle, which runs from birth to slaughter, generally ranges from 15 months to 24 months for most cattle. Calves are usually weaned from cows when they weigh about 500 pounds. They may then move to stocker or growing operations until they weigh 600 to 800 pounds. At this point, they move to feedlots, which produce fed cattle. Specifically, feedlots specialize in feeding cattle a concentrated diet of corn and other grains to enable them to reach between 950 and 1,300 pounds. They are then transported to and slaughtered at a packing plant. Feedlots and packing plants are located throughout the United States but are concentrated in states such as Texas, Oklahoma, Kansas, Nebraska, Colorado, South Dakota, and Iowa. Figure 1 traces the movement of cattle from breeding to processing and consumption. Figure 2 shows the locations of cattle in feedlots. According to price data from AMS’s price reporting group, inflation-adjusted fed cattle prices have generally been increasing since about 2010. Fed cattle prices were about $125 per hundred pounds (live weight) in July 2013 and began to increase rapidly in fall 2013. Prices reached a historical high of about $173 per hundred pounds in November 2014, began to drop at the beginning of 2015, and then decreased dramatically in August and September of 2015, falling to about $123 per hundred pounds by the end of that year—an overall drop of about 30 percent from November 2014. In 2016, after briefly increasing, prices dropped further throughout much of the year to about $100 per hundred pounds—an overall drop of about 40 percent from November 2014. Prices then rose in the first half of 2017 before dropping again midyear. See figure 3 for more detailed information on fed cattle price changes over the past 10 years, including a trend line. Function of the Futures Market for Fed Cattle Market participants use the futures market for fed cattle to manage the risk associated with price changes, determine prices, or speculate on price changes. Futures contract terms that reflect the underlying fed cattle market help ensure that prices in both the fed cattle and futures markets are closely linked because they are influenced over the long run by the same market forces. The two markets also show similar patterns because participants in both markets tend to rely on the same types of information when entering into transactions. The Chicago Mercantile Exchange establishes the terms of futures contracts, including the quantity, quality, and locations to which fed cattle bought and sold on the futures market may be delivered.
The only aspect left unspecified is the price at which each individual contract will be bought or sold. The futures market provides cattle market participants with a means to hedge—shift unwanted price risk to others more willing to assume the risk. Some buyers and sellers in the fed cattle market, such as packers and feeders, trade in futures contracts to hedge the risks of price changes in the fed cattle or wholesale and retail beef markets. For example, a feeder concerned that fed cattle prices may decline in the future may decide to lock in his or her sell price by selling futures contracts: if fed cattle prices decline, profits from the futures contracts will generally offset losses from the lower fed cattle prices. The same is true for a meat packer concerned about prices going up. The packer might buy a futures contract to lock in a purchase price, with futures profits offsetting higher fed cattle prices. Other futures market participants—generally, speculators—may take a view about whether the price of fed cattle may go up or down and, based on that view, enter into the market as a buyer or seller. For example, speculators could purchase futures contracts from cattle market participants if they think that futures prices may increase in the future or, conversely, sell a futures contract if they believe prices may decline. These speculators provide the market with additional liquidity so that cattle market participants have willing buyers and sellers with whom to conduct transactions. Cattle Market Oversight Roles and Responsibilities of USDA and CFTC Within USDA, AMS’s P&SP and price reporting group play specific roles in the cattle market. For example, P&SP performs various functions to help USDA execute its oversight responsibilities for cattle markets, which include halting unfair and anticompetitive marketing practices. To help USDA execute these oversight responsibilities, P&SP collects the following types of information to conduct both routine monitoring and targeted investigations: Packers’ annual reports. Under the Packers & Stockyards Act, each packer must submit an annual summary of operations to P&SP that includes information on the dollar volume of cattle purchased, number of head purchased, and some proprietary financial information. P&SP officials use this information to, among other things, review the financial status of packers and their ability to stay solvent to pay for their purchases. Transaction data from the four largest packers. P&SP officials told us that they send letters annually to the industry’s four largest packers requesting data on their transactions with feeders. According to P&SP officials, the packers provide P&SP with information on every transaction made during that year. P&SP officials told us that they also ask for new marketing agreements the packers have entered into throughout the year, to allow officials to track marketing agreements over time. Investigation information. During investigations, P&SP officials collect evidence such as business records and witness testimony from packers and others. P&SP can conduct investigations based on its own initiative or based on complaints from market participants. If, in the course of its oversight work, P&SP determines that a competition violation may have occurred, P&SP officials refer the case to USDA’s Office of the General Counsel, which may pursue the case or further refer the case to the U.S. Department of Justice. 
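Returning to the hedging example described earlier in this section, the following minimal sketch uses hypothetical prices, head counts, and weights to show how gains on a feeder’s short futures position can offset most of the loss from a decline in fed cattle prices. The figures and the simplified single-position structure are illustrative assumptions, not actual contract terms or market data.

```python
# Hypothetical, simplified hedge: a feeder expects to sell 400 head of fed cattle
# weighing about 1,250 pounds each, and sells futures today to lock in a price.
# All prices are per hundred pounds (cwt) and are illustrative only.

POUNDS_PER_CWT = 100
head = 400
weight_lb = 1_250
total_cwt = head * weight_lb / POUNDS_PER_CWT        # 5,000 cwt of live cattle

futures_price_today = 130.0    # price at which the feeder sells futures (per cwt)
cash_price_at_sale = 118.0     # cash market price when the cattle are actually sold
futures_price_at_sale = 119.0  # futures price when the hedge is lifted

# Cash market outcome: revenue is lower than it would have been at today's price.
cash_shortfall = (futures_price_today - cash_price_at_sale) * total_cwt   # $60,000

# Futures outcome: the short position gains as futures prices fall.
futures_gain = (futures_price_today - futures_price_at_sale) * total_cwt  # $55,000

net_effect = futures_gain - cash_shortfall   # -$5,000: most, but not all, of the loss is offset
print(f"Cash shortfall: ${cash_shortfall:,.0f}; futures gain: ${futures_gain:,.0f}; "
      f"net effect: ${net_effect:,.0f}")
```

In this sketch the offset is imperfect because the cash and futures prices do not fall by exactly the same amount; a packer buying futures to lock in a purchase price would face the mirror-image arithmetic.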
The price reporting group’s role in the cattle market is to implement the Livestock Mandatory Reporting program as required by the Livestock Mandatory Reporting Act of 1999. According to AMS, the purpose of the group is, among other things, to provide information regarding the marketing of livestock and encourage competition in the marketplace for livestock and livestock products. To fulfill this role, the price reporting group collects information on packers’ daily livestock purchases on both mandatory and voluntary bases. Mandatory. Under the Livestock Mandatory Reporting Act of 1999, all qualifying packers must report information on all their purchases and sales on a daily basis. The price reporting group receives daily price data on all fed cattle that a packing plant purchases and all the beef it sells. According to price reporting group officials, they aggregate and summarize the information by sector and publish it within an hour of receipt. For example, the price reporting group publishes information on the number of cattle transacted, the proportion of each of the four transaction types used, and the average weight and price of cattle transacted. The price reporting group does not report information on individual transactions or summarized information if there is a risk that the packer may lose confidentiality due to low reporting numbers. Voluntary. The price reporting group collects additional voluntary information from packers, such as data on feeder cattle transactions and on new or unique markets (e.g., the market for grass-fed cattle). CFTC, an independent agency of the federal government, has exclusive jurisdiction over futures and other derivatives markets, except as otherwise provided in law. Consistent with the Commodity Exchange Act, CFTC’s mission is to protect market users and the public from fraud, manipulation, abusive practices, and systemic risk related to derivatives, and to foster open, competitive, and financially sound futures markets. This mission is achieved through a regulatory scheme that is based on federal oversight of industry self-regulation through organizations such as the Chicago Mercantile Exchange. As a self-regulatory organization, the Chicago Mercantile Exchange is responsible for, among other things, establishing and enforcing rules governing the conduct and trading of its members and preventing market manipulation.
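As a rough illustration of the daily aggregation and confidentiality screening described above, the sketch below groups hypothetical packer purchase records by transaction type, computes head-weighted averages, and withholds any summary supported by too few reporting packers. The three-packer threshold, the data, and the function are assumptions for illustration only and do not represent AMS’s actual reporting rules or systems.

```python
from collections import defaultdict

# Hypothetical daily purchase records: (packer, transaction_type, head, avg_weight_lb, price_per_cwt)
transactions = [
    ("Packer A", "negotiated cash", 300, 1_280, 121.5),
    ("Packer B", "negotiated cash", 450, 1_310, 120.0),
    ("Packer C", "negotiated cash", 200, 1_260, 122.0),
    ("Packer A", "forward contract", 500, 1_295, 118.5),
    ("Packer B", "forward contract", 350, 1_300, 119.0),
]

MIN_REPORTING_PACKERS = 3  # illustrative confidentiality threshold, not AMS's actual rule

def summarize(records):
    groups = defaultdict(list)
    for packer, txn_type, head, weight, price in records:
        groups[txn_type].append((packer, head, weight, price))

    summaries = {}
    for txn_type, rows in groups.items():
        packers = {packer for packer, *_ in rows}
        if len(packers) < MIN_REPORTING_PACKERS:
            # Too few reporting packers: withhold to avoid revealing an individual packer's data.
            summaries[txn_type] = "withheld to protect packer confidentiality"
            continue
        total_head = sum(head for _, head, _, _ in rows)
        avg_weight = sum(head * weight for _, head, weight, _ in rows) / total_head
        avg_price = sum(head * price for _, head, _, price in rows) / total_head
        summaries[txn_type] = {"head": total_head,
                               "avg_weight_lb": round(avg_weight),
                               "avg_price_per_cwt": round(avg_price, 2)}
    return summaries

print(summarize(transactions))
```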
With respect to supply factors, from 2010 through early 2013 a prolonged drought—beginning in the southern United States in late 2010 and expanding to the High Plains in 2012—affected major cattle areas. This drought caused the supply of young cattle to decrease and then increase and, correspondingly, the national price of fed cattle to increase and then decrease when those cattle came to market as fed cattle. Some experts and stakeholders we interviewed told us that cow-calf operators may have liquidated their herds in 2012 and 2013 because the droughts reduced the supply of forage available to raise younger cattle, and cow-calf operators could not feed as many cattle on available pasture and rangeland. The domestic cattle inventory decreased from about 96.5 million in 2007 to about 88.5 million in 2014. This decrease in inventory reduced the supply of fed cattle available for sale in 2013 and 2014, which could have driven up prices for fed cattle. As the drought eased in late 2013, it became more feasible to feed herds on forage, creating incentives for cow-calf operators to expand their herds throughout 2014 and 2015. This increased the number of fed cattle sold for slaughter by late 2015, and prices began to drop at that time. See figure 4 for information on the relationship between fed cattle price changes and the U.S. cattle inventory over the past 10 years. See appendix II for more information on the number of U.S. cattle at various points in the supply chain. Costs for feed also affected the fed cattle supply, contributing to the large changes in fed cattle prices from 2013 through 2016. An easing of the widespread drought in late 2013 reduced the price of corn and other grains used to feed cattle, which, according to some experts and P&SP officials, may have created an incentive for feeders to grow their cattle to heavier weights before marketing them to packers. For example, the price of corn decreased from about $6.87 per bushel in late 2012 to about $3.50 per bushel in late 2014. According to data from USDA’s price reporting group, fed cattle weight increases from 2003 through 2013 averaged about 14 pounds per year; however, our analysis of cattle market data from USDA showed average fed cattle weights increased by about 40 pounds in 2015. For additional longer-term information on increases in cattle weights, see appendix II. However, particularly heavy cattle can receive lower prices per pound, in part because packers told us that unusually large cuts of beef can be more difficult to sell. In 2014, when the fed cattle supply was low, P&SP officials reported that packers were not necessarily paying lower prices for over-heavy cattle, so feeders would not have received this price indicator to keep the cattle they sold below certain weights. According to some experts, these heavier weights, combined with the larger overall number of cattle offered for sale in 2015, resulted in increased supply, exacerbating the price decline. Reduced demand for wholesale beef and for fed cattle also affected the large national changes in fed cattle prices. Our analysis of cattle market and other economic data showed that several factors reduced demand for beef; this in turn reduced demand for fed cattle.
These factors included (1) higher wholesale beef prices and concurrently lower relative prices of pork and chicken, which are substitutes for beef for consumers and which would reduce demand for retail beef; (2) increases in the amount of beef in cold storage, also limiting packer demand for fed cattle; and (3) fluctuations in the strength of the U.S. dollar, which would shift consumer purchases toward or away from relatively less expensive imported beef, as well as contribute to shifts in net exports—that is, total exports minus total imports. In addition, according to some experts and stakeholders, an overall reduction in packing capacity when packers closed several plants, including one large plant in Texas, may have also limited packer demand for fed cattle. P&SP officials conducted an investigation into the price drop beginning in August 2015. P&SP officials told us that as they saw fed cattle prices rapidly decreasing in August and September 2015, they included this investigation in the agency’s annual work plan for 2016. They also told us that P&SP conducted the work based on its own initiative and not as the result of a request from a market participant or because it received specific information on possible wrongdoing. The P&SP investigation reviewed changes in price spreads between fed cattle and wholesale—or boxed—beef because such price spreads can serve as a rough indicator of packer profit. P&SP found that packers may have benefitted for a short period as the prices they paid for fed cattle decreased more quickly than the prices they received for boxed beef, but it also found that those price differences quickly diminished to smaller levels than before the price drop. The report concluded that the sharp price decrease in 2015 was likely due to a number of market factors that affected both supply and demand, such as an increased number of fed cattle sold for slaughter and lower relative prices for pork and chicken. Competition Levels among Packers Did Not Appear to Affect National Price Changes in the Fed Cattle Market but May Have Contributed to Price Variations in Different Areas of the Country Competition levels among packers varied in different areas of the country. These variations did not appear to explain the large national changes in fed cattle prices from 2013 through 2015 but may have contributed to variations in fed cattle prices in different areas of the country. Specifically, at the national level, packer competition levels were stable from 2013 through 2015. Using P&SP’s annual data on transactions between packers and feeders during this time frame, we estimated the degree of competition in any given area by calculating market concentration levels among packers using a measure called the Herfindahl-Hirschman Index (HHI). From a practical perspective, a lower HHI indicates generally that there is more competition in a market. In particular, an HHI is lowest when a market is occupied by a large number of firms of relatively equal size and is highest when a market is controlled by a single firm (i.e., there is no competition in that market). Some large packing plants closed from 2013 through 2015, but the average HHI level varied by only one percentage point (from about 51 to about 52 percent), whereas the total price decrease from November 2014 through December 2015 was about 30 percent. Because of this, it was unlikely that variations in competition affected the large price decrease. 
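A minimal sketch of the HHI calculation described above, using hypothetical market shares for the packing plants buying fed cattle in a given area; the shares and area labels are assumptions for illustration and do not reflect GAO’s underlying data or econometric model.

```python
def hhi(shares):
    """Herfindahl-Hirschman Index for a list of market shares expressed as fractions.
    Ranges from near 0 (many equal-sized buyers) to 1.0 (a single buyer)."""
    assert abs(sum(shares) - 1.0) < 1e-9, "shares should sum to 1"
    return sum(s ** 2 for s in shares)

# Hypothetical areas: shares of fed cattle purchases held by each packing plant.
competitive_area = [0.30, 0.25, 0.25, 0.20]   # four plants of similar size
concentrated_area = [0.70, 0.30]              # one dominant plant and one smaller plant
single_buyer_area = [1.00]                    # no competition

for name, shares in [("competitive", competitive_area),
                     ("concentrated", concentrated_area),
                     ("single buyer", single_buyer_area)]:
    print(f"{name}: HHI = {hhi(shares):.2f} ({hhi(shares):.0%})")

# Two equal-sized plants give an HHI of 0.5 (about 50 percent), which is consistent with
# the observation that an average HHI of about 51 percent suggests a typical feedlot had
# roughly two packing plants to which it could sell its fed cattle.
```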
However, variations in competition levels in different areas of the country may have contributed to price differences we observed in those areas. The data show that the average concentration level, as measured by the HHI, was about 51 percent, suggesting that, on average, a given feedlot had two packing plants to which it could sell its fed cattle. Competition levels tended to be higher in states such as Texas, Oklahoma, Kansas, Nebraska, Colorado, South Dakota, and Iowa, where there are more cattle on feed as we showed in figure 2, suggesting that feeders in those areas had more packing plants to choose from. Competition levels tended to be lower in areas that had fewer cattle on feed, such as in the northeast and the Pacific Northwest, suggesting that feeders in those areas had fewer packing plants to which they could sell their cattle. Using an econometric model, after controlling for other factors that could affect price—such as the supply and demand factors we discuss above, or attributes of the beef produced by fed cattle such as yield and quality grade—we found that less packer competition in any given area was associated with lower fed cattle prices in that area. Specifically, our model estimated that fed cattle prices in less concentrated areas (those with an HHI in the 25th percentile of our analysis) may have been about 9 percent higher than in more concentrated areas (those with an HHI in the 75th percentile of our analysis). Such competition effects can exist in legitimately functioning markets. The results of our analysis suggest that some packers may have been able to exercise market power in areas with less competition. Evidence of this effect alone does not imply that packers engaged in anticompetitive or improper behavior. For more detailed information on our analysis, see appendix III. CFTC Did Not Find Evidence of Trading Irregularities in the Futures Market for Fed Cattle in 2015, and Is Overseeing Changes to Address Contract Concerns CFTC's regular monitoring efforts and its analysis of trading patterns, including of particularly volatile trading days, did not find evidence of irregularities in the futures market for fed cattle in 2015. However, CFTC and others have expressed concern that certain terms in futures contracts for fed cattle—such as the quality of beef represented in the contract—did not sufficiently mirror the specifics of the fed cattle market, which could make them less useful to cattle market participants for hedging risk. In response, the Chicago Mercantile Exchange submitted changes to contract terms to CFTC. CFTC reviewed those changes, and where the agency found the changes consistent with the Commodity Exchange Act and regulations, allowed or expressly approved those changes. CFTC's Monitoring and Analysis of Volatile Trading Days Did Not Find Evidence of Trading Irregularities CFTC's daily monitoring of the futures market for fed cattle did not find evidence of trading irregularities. In addition, CFTC conducted a more in-depth review of volatile trading days in 2015 and did not identify evidence of trading anomalies or that certain groups of traders, such as speculators, unduly influenced the market. Our analysis of trading data confirmed that the futures market for fed cattle experienced episodes of higher volatility from late 2015 through 2017 than it had experienced in the years immediately prior, and some market participants expressed concern that this volatility could be due to possible trading irregularities.
Specifically, variations in futures market prices were generally higher in late 2015 than in 2013 or 2014 and more frequently reached the maximum allowed change in price for any given day, based on rules set by the Chicago Mercantile Exchange. See figure 5 for information on average futures prices for fed cattle and historical volatility from 2008 through 2017. Some experts told us that high volatility in the futures market generally can be the result of uncertainty or shocks in the futures or fed cattle markets. For example, the futures market experienced high levels of volatility in late 2003 through 2005 after bovine spongiform encephalopathy (BSE) was first detected in a cow in the United States in December 2003 (see appendix II for more information on BSE events since 2003 and their impact on U.S. beef exports). More recently, the market also experienced high levels of volatility during the financial crisis that began in 2008 as well as in the latter part of 2015 as the price of fed cattle rapidly decreased. However, some cow-calf operators and feeders, including members of the National Cattlemen's Beef Association and the Ranchers-Cattlemen Action Legal Fund United Stockgrowers of America, raised questions about whether the futures market volatility in 2015 might be due to manipulation or to high-frequency trading, a specific type of activity in which a speculator makes numerous trades at very high speeds in an effort to profit from small changes in the market. Both CFTC and the Chicago Mercantile Exchange conduct daily monitoring of the futures market for fed cattle, and CFTC officials told us that they did not identify evidence of trading irregularities in 2015. In addition, in response to concerns and a request from some cattle market participants, CFTC analyzed trading patterns in the market, including reviewing particularly volatile days in 2015. CFTC did not find evidence of trading anomalies or that certain groups of traders, such as speculators, unduly influenced the market. The Chicago Mercantile Exchange conducted a similar review and came to similar conclusions. Both CFTC and the Chicago Mercantile Exchange also concluded that high-frequency trading did not contribute substantially to volatility on the days they reviewed. Specifically, the Chicago Mercantile Exchange concluded that the futures market volatility was predominantly the result of non-high-frequency traders placing and executing large, aggressive futures orders. Furthermore, as a way of comparing the use of automated and high-frequency trading in the futures market for fed cattle to related markets, CFTC officials told us that their review found that futures contract markets for other agricultural commodities from 2014 through 2016—including for corn, wheat, soybeans, and pork—were characterized by a greater percentage of automated trading, including high-frequency trading, than the futures market for fed cattle. Finally, according to documentation from the Chicago Mercantile Exchange, the high levels of volatility in the futures market could be related to both the swift declines in fed cattle prices and the fact that an increasing number of fed cattle are sold during the last few business days of the week, rather than throughout the week. Concentrating purchases in one or two days of the business week decreases the number of price signals that the fed cattle market can provide futures market participants.
According to Chicago Mercantile Exchange documentation, a decrease in the frequency of price signals creates information gaps for market participants and likely contributes to price volatility. CFTC and Some Stakeholders Expressed Concern about Cattle Futures Contract Terms, and CFTC Is Overseeing Related Changes CFTC and some stakeholders expressed concern that the terms of cattle futures contracts did not adequately reflect structural changes in the fed cattle market and that differences between the terms of futures contracts and the fed cattle market could cause futures contracts to become less useful to cattle market participants to hedge risks. According to Chicago Mercantile Exchange documents, futures contract terms are designed to match relevant commodities markets and industry standards to help ensure that there is a two-way relationship between the futures market and the relevant commodity market. When contract terms reflect the market and futures markets operate properly, prices in the fed cattle and futures markets may initially diverge, but over time should generally converge by the time a contract expires. If the prices do not converge, contracts become less useful to market participants as a way to hedge risks. For example, prior to October 2017, cattle futures contracts specified that at least 55 percent of the fed cattle in those contracts were to produce a beef quality grade of Choice or better. From fiscal years 2013 through 2017, the percentage of beef graded nationally as Choice or better has been higher than this—at times as high as about 80 percent, although proportions have varied by region. Stakeholders have expressed concern that because the beef quality specifications in futures contracts for fed cattle are lower than the beef quality produced by animals traded in the fed cattle market, this difference may decrease the value of those futures contracts. Additionally, stakeholders expressed concern that this difference can negatively impact whether prices in the futures and fed cattle markets effectively converge as expected. In response to these concerns, the Chicago Mercantile Exchange made changes to the terms of futures contracts for fed cattle in 2016 and 2017, which were reviewed and approved by CFTC. To better align futures contracts with the fed cattle market, the Chicago Mercantile Exchange has increased the quality percentage of Choice or better quality beef to 60 percent, starting with October 2017 futures contracts, and to 65 percent Choice or better quality beef, starting with October 2018 futures contracts. In 2016, also in response to concerns raised by stakeholders, CFTC asked the Chicago Mercantile Exchange to provide information on additional measures under consideration by the exchange, such as changing the terms in futures contracts for fed cattle and making them more consistent with the fed cattle market. As a result of dialogue between the two entities, the Chicago Mercantile Exchange revised its delivery process and expanded the timeframe for making deliveries, which has allowed it to add locations where cattle can be delivered to satisfy a futures contract. According to CFTC, this change made delivery more accessible and improved the connection between the fed cattle and futures markets. The Chicago Mercantile Exchange submitted these and similar changes to CFTC. CFTC reviewed those changes, and where the agency found the changes consistent with the Commodity Exchange Act and regulations, allowed or expressly approved those changes. 
Chicago Mercantile Exchange representatives told us that these changes will help futures contracts better reflect the fed cattle market. CFTC officials said that they believe the changes have the potential to strengthen the performance of the futures market for fed cattle as a risk management and price discovery tool, but will continue to monitor the effectiveness of the changes. P&SP Does Not Analyze Some Key Transaction Data Two factors affect P&SP's routine monitoring to ensure against discriminatory or anticompetitive practices in the fed cattle market. First, under USDA's view of its legal authority, P&SP does not have routine access to the data from AMS's price reporting group on daily transactions between packers and cattle feeders. Second, P&SP does not periodically analyze the transaction data that it collects from packers to learn more about the operation of the fed cattle market. P&SP Does Not Have Routine Access to Daily Transaction Data That the Price Reporting Group Collects P&SP carries out its oversight responsibilities through monitoring and investigations. The price reporting group, housed within AMS with P&SP (which moved to AMS in November 2017), collects extensive data on transactions between packers and feeders via livestock mandatory price reporting as required by law. The price reporting group does not regularly share these data with P&SP, so the data are not available for P&SP to use for regular monitoring activities to flag potential issues for investigation. Currently, according to USDA officials, P&SP officials may request and receive only specific portions of price reporting data based on individual investigations it has already decided to conduct. For example, P&SP was able to analyze price reporting data in the course of its investigation into the price drop in 2015. Based on USDA's reading of the Livestock Mandatory Reporting Act of 1999 provisions that prohibit the disclosure of facts or information acquired through the mandatory reporting program, the price reporting group has not routinely shared the data with P&SP. The act provides some exceptions to the disclosure prohibition. For example, the act allows the price reporting group to share data, as directed by the Secretary of Agriculture, for enforcement purposes. USDA officials told us that they do not believe this exception allows the price reporting group to provide routine access to the data for monitoring activities. The officials told us that while the statute does allow for sharing of price reporting data for enforcement purposes, they interpret the term "enforcement purposes" to mean a specific ongoing investigation, not market oversight. USDA officials note that the act does not discuss market oversight; rather, it was established to help market participants make business decisions through USDA's collection and dissemination of price data. P&SP officials told us that regular access to price reporting data would allow them to routinely conduct analyses, similar to those carried out in their investigations, as part of their market monitoring activities. Specifically, the officials said that going forward, price reporting data could be used to detect price outliers more quickly and help P&SP identify potential anticompetitive behavior; for example, where buyers might agree to take turns buying cattle at different times so as to avoid competing with one another.
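As one illustration of the kind of routine screening P&SP officials described, the sketch below flags transaction prices that fall unusually far from a trailing average for the same region. It is a minimal example with hypothetical column names and thresholds, not P&SP's actual data layout or monitoring procedure, and a flagged price would only be a starting point for closer review, not evidence of wrongdoing.

```python
# Minimal sketch of a price-outlier screen; column names and thresholds are hypothetical,
# and this does not represent P&SP's actual monitoring procedures.
import pandas as pd

def flag_price_outliers(transactions: pd.DataFrame, window: int = 30, threshold: float = 3.0) -> pd.DataFrame:
    """Return transactions whose price deviates from the trailing mean for the same region
    by more than `threshold` trailing standard deviations."""
    df = transactions.sort_values("purchase_date").copy()
    grouped = df.groupby("region")["price_per_lb"]
    trailing_mean = grouped.transform(lambda s: s.rolling(window, min_periods=5).mean())
    trailing_std = grouped.transform(lambda s: s.rolling(window, min_periods=5).std())
    df["z_score"] = (df["price_per_lb"] - trailing_mean) / trailing_std
    return df[df["z_score"].abs() > threshold]
```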
Under federal internal control standards, an agency's management should internally communicate the necessary quality information to achieve the entity's objectives. Such information is, for example, communicated down, across, up, and around reporting lines to all levels of the entity. USDA eliminated the Grain Inspection, Packers & Stockyards Administration and reorganized P&SP under AMS in November 2017. This reorganization provides an opportunity for USDA to review the extent to which price reporting data could be shared with P&SP under the act, now that both P&SP and the price reporting group are within the same agency. However, USDA officials told us in November 2017 that it was too early in the reorganization process to determine whether AMS leadership would view routine sharing of these data any differently. By reviewing the extent to which AMS's price reporting group can share daily transaction data with P&SP to strengthen the effectiveness of its oversight, USDA has an opportunity to allow P&SP to more effectively carry out its responsibilities to ensure against discriminatory or anticompetitive practices in the fed cattle market. In reviewing its authority to share these data, determining whether it is necessary or advisable to request from Congress additional exceptions to the current information disclosure restrictions would position USDA to strengthen its oversight of that market. P&SP Does Not Conduct Detailed Periodic Analyses of Transaction Data Collected from Packers P&SP does not periodically analyze the transaction data that it collects from packers to learn more about the operation of the fed cattle market. As part of its monitoring program, P&SP reviews publicly available, summarized price data on a weekly basis, but it does not routinely review the data it collects on transactions between packers and feeders, a potentially useful source of data that would enable P&SP to conduct more detailed monitoring. We conducted several in-depth analyses of P&SP's transaction data and found that some of these analyses could provide useful information to agency management when it makes oversight decisions. For example, as discussed earlier in this report, one of our analyses found that different areas of the country experienced differing levels of competition and that, controlling for other possible sources of price variation, areas with less packer competition were associated with lower fed cattle prices. Such analyses may allow P&SP to better monitor changes in competition and prices over time, which may help inform its decisions on where to direct its investigative resources and better fulfill its mission to ensure against discriminatory or anticompetitive practices in the fed cattle market. Other federal agencies conduct routine, in-depth analyses to efficiently direct their investigative resources. For example, as we reported in March 2012, as required by statute, USDA routinely conducts in-depth analyses of crop insurance data to detect potential program fraud, waste, and abuse by farmers, insurance agents, and loss adjusters. The agency then uses these analyses to direct its investigative resources. Federal internal control standards specify that management should use quality information to achieve the entity's objectives, including processing the obtained data into quality information and then evaluating the processed information.
P&SP officials told us that they typically do not receive all of the previous year's transaction data from packers until the following May. As a result, P&SP has previously considered the use of packer transaction data for routine monitoring to be somewhat limited by the lack of timeliness. However, these officials also told us that the analyses we suggested could still provide useful information. By routinely conducting in-depth analysis of the transaction data it collects, USDA could enhance its monitoring of the fed cattle market. Such analysis could include but not be limited to examining competition levels in different areas of the country. Conclusions The cattle industry is an important part of the nation's agricultural sector and contributes tens of billions of dollars to the U.S. economy. Amid concerns about the drop in fed cattle prices beginning in late 2015 and ongoing questions about anticompetitive behavior in the fed cattle market, P&SP's role in overseeing this market is paramount. While P&SP routinely conducts monitoring and investigations, the program does not have routine access to daily price reporting data or periodically analyze the transaction data that it currently collects from packers. The Livestock Mandatory Reporting Act of 1999 allows AMS's price reporting group to share data with P&SP for enforcement purposes, as directed by the Secretary of Agriculture, but USDA does not believe it has the authority to do so for routine monitoring, based on its interpretation of "enforcement purposes" in the statute. Although both P&SP and the price reporting group are within AMS because of a November 2017 departmental reorganization, USDA officials told us that it was too early in the reorganization process to determine whether AMS leadership would view routine sharing of these data any differently. By reviewing the extent to which AMS's price reporting group can share daily transaction data with P&SP to strengthen the effectiveness of its oversight, USDA has an opportunity to allow P&SP to more effectively carry out its responsibilities to ensure against discriminatory or anticompetitive practices in the fed cattle market. In reviewing its authority to share these data, determining whether it is necessary or advisable to request from Congress additional exceptions to the current information disclosure restrictions would position USDA to strengthen its oversight of that market. Furthermore, as part of its monitoring, P&SP does not periodically analyze the transaction data that it collects from packers to learn more about the operation of the fed cattle market. In analyzing P&SP's transaction data, we found that while variations in packer competition did not appear to explain the large national changes in cattle prices from 2013 through 2015, less competition was associated with lower fed cattle prices in some parts of the country. By routinely conducting in-depth analysis of the transaction data it collects, USDA could enhance its monitoring of the fed cattle market. Such analysis could include but not be limited to examining competition levels in different areas of the country. Recommendations for Executive Action We are making the following two recommendations to USDA: The Secretary of Agriculture should review the extent to which, under the Livestock Mandatory Reporting Act of 1999, the price reporting group can share daily transaction data with P&SP to allow P&SP to strengthen the effectiveness of its oversight.
After reviewing that authority, if the Secretary determines that the statute does not permit the price reporting group to share data with P&SP for routine monitoring purposes, and that routine sharing is advisable in light of the purposes behind the statutory disclosure restrictions, the Secretary should submit to Congress a proposal to allow such sharing. (Recommendation 1) The Secretary of Agriculture should direct the AMS administrator to ensure that P&SP routinely conducts in-depth analysis of the transaction data that it collects. Such analysis could include but not be limited to examining competition levels in different areas of the country. (Recommendation 2) Agency Comments and Our Evaluation We provided a draft of this product to USDA and CFTC for comment. In written comments, reproduced in appendix V, USDA agreed with our two recommendations and described actions it has taken and will take to implement them. CFTC only provided technical comments, which we incorporated as appropriate. With respect to our first recommendation, USDA stated that it took action and reviewed the authority provided by the Livestock Mandatory Reporting Act of 1999 and determined that the act does not allow for data sharing for routine monitoring purposes. Further, USDA stated that the agency believes considering a statutory amendment to allow for routine data sharing is not advisable, due to the agency's concerns about maintaining the public's trust in USDA's administration of the Livestock Mandatory Reporting program. We believe the steps USDA has taken address our recommendation. Concerning our second recommendation, USDA agreed that routine in-depth analysis of packer transaction data would enhance USDA's monitoring of the fed cattle market to ensure against discriminatory or anticompetitive practices. USDA stated that it plans to create a new competition branch in P&SP—now known as the Packers and Stockyards Division—that will be staffed by employees with economic expertise. USDA stated that this new branch will be responsible for reviewing the transaction data P&SP receives from packers and conducting in-depth analyses that would help the agency to monitor changes in competition and prices over time to inform USDA decisions on where to direct its resources. Routinely conducting such analyses would address our recommendation. USDA also provided technical comments. We incorporated these comments as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Agriculture, the Chairman of the Commodity Futures Trading Commission, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact Steve Morris at (202) 512-3841 or morriss@gao.gov or Oliver Richard at (202) 512-2700 or richardo@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology This report (1) describes key factors that affected fed cattle price changes from 2013 through 2016; (2) describes what CFTC found about possible trading irregularities in the futures market for fed cattle in 2015 and any changes to the futures contract for fed cattle since 2015; and (3) examines factors that may affect the U.S.
Department of Agriculture’s (USDA) routine monitoring to ensure against discriminatory or anticompetitive practices in the fed cattle market. To describe the key factors that affected fed cattle price changes from 2013 through 2016 and to understand changes and trends in the U.S. cattle market since 2000, we analyzed economic and other market data collected by federal agencies. These data included information about cattle and beef prices, quality, and inventories; cattle and beef transactions; feed prices and feedlot sizes; transaction methods; national drought patterns; and consumption trends for beef, pork, and chicken. We gathered these data from USDA’s Agricultural Marketing Service (AMS), Economic Research Service, National Agricultural Statistics Service, and World Agricultural Outlook Board, among others. For example, we reviewed AMS data on fed cattle prices from November 2002 through August 2017, and we used it to, among other things, develop a long term price trend line. We did not quantify or rank the impact of various factors. We assessed the reliability of the data we analyzed by interviewing officials who maintain the data, reviewing related documentation, and testing the data for missing or erroneous values, and determined that the data were sufficiently reliable for our purposes. When we found discrepancies such as data entry errors, we brought them to the agencies’ attention and worked with the agencies to correct the discrepancies before conducting our analyses. We also collected USDA transaction data on beef packer (packer) purchases of fed cattle from 2013 through 2015 and we analyzed these data using a variety of methods, including econometric analysis. For more on the methods and results of this analysis, see appendix III. We assessed the reliability of the transactions data we analyzed by interviewing officials who maintain the data, reviewing related documentation, and testing the data for missing or erroneous values. We determined that the data were sufficiently reliable for our purposes. In addition to analyzing these data, we reviewed an investigation by AMS’s Packers & Stockyards Program (P&SP) on the 2015 drop in fed cattle prices. We did not obtain and review internal packer documents, so the scope of our analysis did not include a review of whether packers engaged in anticompetitive behavior. Such specific investigations would typically be carried out by entities with subpoena authority such as the Federal Trade Commission of the Antitrust Division in the Department of Justice. To describe what CFTC found about possible trading irregularities in the futures market for fed cattle in 2015 and any changes to the futures contract for fed cattle since 2015, we reviewed and summarized relevant statutes and regulations, such as the Commodity Exchange Act and Commodity Futures Trading Commission (CFTC) regulations for futures exchanges. We compared that information with CFTC documentation on its oversight activities related to the futures market for fed cattle, such as its 2013 review of the Chicago Mercantile Exchange and the Chicago Board of Trade to verify the exchange’s ongoing compliance with standards intended to, among other things, prevent market manipulation. Such rule enforcement reviews include oversight into whether designated contract markets comply with core principles as outlined by CFTC. 
We also reviewed CFTC analyses of trading patterns on specific dates in 2015 after conducting a review of the analyses' data and methods and determining the work to be sufficiently reliable for our purposes. In addition, we reviewed and summarized documentary evidence from the Chicago Mercantile Exchange on its analysis of the market and on its changes to terms in futures contracts for fed cattle. To better understand the volatility in the market in 2015, we gathered and analyzed price data from Bloomberg on the futures market for fed cattle. To examine factors that may affect USDA's routine monitoring to ensure against discriminatory or anticompetitive practices in the fed cattle market, we gathered and reviewed relevant oversight documentation, including P&SP annual reports and investigative policies and procedures. In addition, we met with officials from AMS's P&SP and Livestock Mandatory Reporting program (price reporting group) to discuss their roles and responsibilities. We also used the results of our analysis of USDA transaction data on packer purchases of fed cattle. We compared USDA actions with standards for internal control in the federal government, specifically those related to the communication and use of quality information. To address all our objectives, we conducted interviews with (1) cattle market experts; (2) stakeholders selected to represent a variety of views, including small and large feedlot operators (feeders), packers, futures market speculators, the Chicago Mercantile Exchange, and an organization specializing in competition and antitrust issues; and (3) agency officials from AMS's P&SP and price reporting group, and USDA's Office of the General Counsel, as well as CFTC. We used the following criteria to identify cattle market experts: the expert's recognition in the professional or academic community, and the relevance of his or her published work or research to cattle markets. We identified these experts through our prior work, the recommendations of USDA or CFTC officials, stakeholders, or other recognized experts. We conducted semi-structured interviews with 34 individuals or groups of experts, stakeholders, and officials, and performed a content analysis of relevant responses to our questions. To characterize responses and quantify interviewees' views throughout this report, we defined modifiers (e.g., "some") as follows: "some" represents 2 to 5 interviewees, "several" represents 6 to 9 interviewees, "many" represents 10 to 15 interviewees, "most" represents 16 to 24 interviewees, and "nearly all" represents 25 to 29 interviewees. The views of the experts and stakeholders we interviewed cannot be generalized to all others with expertise in the cattle markets or all cattle market stakeholders, but they provided valuable insights for our work. Appendix IV presents a list of recognized experts that we interviewed. We conducted this performance audit from August 2016 to March 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Supplemental Information on Trends in the Fed Cattle Market This appendix provides supplemental information on trends in the fed cattle market.
The sections below provide information from analyses and interviews we conducted as part of our review of the fed cattle market, including on fed cattle transaction methods, drought, number of U.S. cattle, feedlot consolidation and size, cattle weights, consumption trends, product differentiation and branded beef, beef price spread, and factors affecting beef exports. Fed Cattle Transaction Methods Beef packers (packers) and cattle feedlot operators (feeders) generally use one of four transaction methods to buy and sell fed cattle, and their use of these methods has changed over time for various reasons. The four transaction methods are: Cash (also referred to as spot or negotiated). A purchase price is determined through buyer-seller interaction. The price is known at the time of agreement, and delivery to the packing plant may take place up to 30 days later. Negotiated grid. A base price is negotiated between buyer and seller and is known at the time of agreement. Delivery to the packing plant is usually expected within 14 days. Unlike a cash transaction, the final net price is determined by applying a series of premiums and discounts after slaughter based on carcass performance (usually related to weight, beef yield grade, and beef quality). Forward contract. An agreement for the purchase of cattle, executed in advance of slaughter, under which the base price is established by reference to prices quoted on the Chicago Mercantile Exchange and can be set any time prior to the transaction. Formula contract. A commitment of cattle in advance of slaughter by any method other than cash, negotiated grid, or forward contract. Formula contracts use a method of calculating price in which the price often is not known until a later date. For example, a feeder and a packer may enter into a formula contract several months in advance of slaughter. According to the U.S. Department of Agriculture's (USDA) Agricultural Marketing Service (AMS) officials and others we interviewed, formula contracts often use the cash price from AMS's Livestock Mandatory Reporting price summaries around the time of slaughter as a base upon which the contract then applies additional premiums and discounts. Since 2002, the share of fed cattle sold via cash transactions has decreased and the share of cattle sold through formula and forward contracts has increased proportionally. According to our analysis of AMS data, approximately 50 percent of cattle were traded using cash transactions in 2002, but the share fell as low as 22 percent of cattle transactions in 2015. Conversely, the use of other types of transactions—formula and forward contracts and negotiated grid arrangements—increased from about 50 percent of cattle in 2002 to approximately 78 percent in 2015. However, the use of cash transactions increased slightly from 2016 through 2017. Figure 6 shows the share of fed cattle transactions by method from November 2002 through September 2017. Several experts and stakeholders we interviewed told us that feeders and packers have generally increased their use of formula contracts for a variety of reasons, including improving the quality and consistency of beef products while decreasing transaction costs. For example, one industry stakeholder told us that formula contracts ensure a steady supply of specific cattle breeds and eliminate the costs of sending personnel to bid for these cattle using cash transactions.
In addition, a report from AMS’s Packers and Stockyards Program (P&SP) noted that formula contracts help feeders to, among other things, reduce the price risks of raising and selling fed cattle; these contracts also help packers ensure a steady supply of cattle to help them satisfy delivery requirements they may have in contracts with their wholesale or retail customers. However, some experts and stakeholders told us that the movement away from cash transactions has reduced the depth and liquidity of several regional markets, which may make it more difficult for market participants to accurately determine the market price of cattle (e.g., for a cash sale) because there are fewer observed price points. Moreover, the effect of this difficulty in determining market prices is not limited to cash transactions because cash prices are often used to establish a base price in formula contracts. This reduction of depth and liquidity may also make the fed cattle market more susceptible to wider price fluctuations, according to some experts we interviewed. Several experts and stakeholders told us that options such as an online fed cattle exchange, established in May 2016, may help address this issue by providing a transparent forum for feeders and packers to sell and purchase fed cattle. However, the exchange is still in its early stages and, as of September 2017, comprised a small fraction of total fed cattle transactions. Drought Prolonged drought may cause cow-calf operators to liquidate their herds. This is because drought can reduce the supply of forage used to raise younger cattle, so that cow-calf operators cannot feed as many cattle on available pasture and rangeland. From 2000 to 2010 the United States saw periods of both extensive drought and extensive wetness on a broad scale, according to the National Oceanic and Atmospheric Administration. Following that, in early 2010, little of the country was experiencing drought, according to the U.S. Drought Monitor; however, drought conditions worsened throughout the second half of that year and improved through the first half of 2011 before worsening in the second half of 2011. This drought impacted some areas of the United States particularly hard with nearly 12 percent of the country in an exceptional drought by the third quarter of 2011. Although the winter months of January 2012 through March 2012 were dry, extreme drought levels improved through early 2012 before a widespread drought began in the summer of 2012. By July 2012, more than 80 percent of the country was at least abnormally dry and more than 60 percent of the country was experiencing drought. From 2013 through early 2015, drought conditions generally improved. Overall drought conditions continued to improve in 2015, except in the spring and fall, which were somewhat drier. The second half of 2016 was drier but after this, drought conditions improved, with a smaller percentage of the country experiencing dryness in 2017 than had been seen since 2000. Figure 7 shows the percent of the United States land mass experiencing drought conditions from January 2000 through May 2017. Number of U.S. Cattle at Various Points in the Supply Chain The number of cattle at different points in the supply chain can provide various levels of insight into fed cattle market supply. 
Specifically, the beef cow inventory provides insight into what may happen in the fed cattle market in a few years, and the number of cattle on feed can give an indication of what may happen in the fed cattle market in the next few months. The number of cattle sold for slaughter (also called marketings) is an indication of current supply levels in the fed cattle market. Beef Cow Inventory The beef cow inventory drives the size of the overall cattle inventory and therefore the number of fed cattle coming to market. As such, the size of the beef cow inventory provides a sense of how the fed cattle industry may change over the following 2 years. Our analysis of inventory data from USDA's National Agricultural Statistics Service indicated that the beef cow inventory declined from 2006 through 2014, at which point it started to increase. In the most recent period of contraction, the year-over-year period with the highest rate of contraction in the beef cow inventory was from July 2011 to July 2012, during which the beef cow inventory decreased by 3.0 percent—a rate of contraction not seen in a single year-over-year period since July 1988 to July 1989. The inventory then began to expand in 2014, increasing rapidly by mid-2014, and continued to expand through 2016. From January 2016 to January 2017, the beef cow inventory expanded 3.5 percent, the highest rate of expansion in a single year-over-year period since January 1993 to January 1994. Prior to the late 1980s, higher rates of expansion and contraction were common, but during the next 20 years, annual changes in the beef cow inventory were more gradual, with rates of expansion staying below 0.5 percent. Figure 8 shows the beef cow inventory from 1920—the first year for which we have data—through 2016, with an overall downward trend since the mid-1970s. Cattle are sent to feedlots and are fed for 3 to 10 months before being sold for slaughter. Thus, the number of cattle on feed at a given point in time provides insight into the number of cattle that will be available for slaughter in the coming months. Unlike the beef cow inventory, which saw larger rates of increase in the mid-2010s than seen in the prior 2 decades, the number of cattle on feed increased at a more modest rate during the same time frame. The total number of cattle on feed decreased throughout 2012 and 2013, then began increasing in 2014, and continued to increase through 2015, before decreasing in 2016. Although it might be expected that cattle on feed would increase steeply about 18 months after the steep increases in the beef cow inventory, these sharper increases may be delayed as cow-calf operators continue to increase their beef cow herds by retaining heifers, thus preventing those heifers from going into the pool of fed cattle. Sales for Slaughter Total sales for slaughter declined overall from the early 2000s through 2015. On an annual basis, sales for slaughter declined sharply from 2014 through 2015 before increasing sharply in 2016. Sales for slaughter fell 5.68 percent in 2014, the largest decline in the data available (starting in 1996), followed by a further decline of 3.87 percent in 2015 and a rise of 6.29 percent in 2016, the largest increase in the data we analyzed. The monthly sales for slaughter data show that after the long decline starting in 2014, year-over-year increases in sales for slaughter began in November 2015 and continued through August 2017, the most recent month for which data were available at the time of our review.
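The year-over-year rates of change cited above can be computed directly from an annual series; the short sketch below shows the calculation using made-up inventory values that are not USDA's actual figures.

```python
# Hypothetical annual beef cow inventory, in millions of head (illustrative values only).
inventory = {2012: 30.3, 2013: 29.3, 2014: 29.1, 2015: 29.7, 2016: 30.2}

years = sorted(inventory)
for prior, current in zip(years, years[1:]):
    change = (inventory[current] - inventory[prior]) / inventory[prior] * 100
    print(f"{prior} to {current}: {change:+.1f} percent year over year")
```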
Feedlot Consolidation and Size Some experts told us that significant consolidation has occurred among feedlots. Our analysis of USDA National Agricultural Statistics Service data from the mid-1990s through 2016 suggests that the number of individual larger feedlots (those with a capacity of 50,000 or more head of cattle) increased by a small amount—in terms of both number and percentage of total feedlots. During this time frame, the number of cattle fed at large lots increased, and the number of cattle fed at feedlots of other sizes decreased. For example, while there were 45 feedlots with a capacity of more than 50,000 head of cattle in 1996, there were 73 feedlots of this size in 2016. Similarly, in 1996, large feedlots made up 2 percent of all feedlots with a capacity of more than 1,000 head of cattle; this number rose to 3 percent in 2016. Furthermore, since the late 2000s, larger feedlots generally have been contributing an increasing portion of fed cattle to overall slaughter numbers, with medium-sized feedlots (those with a capacity of 16,000 to 49,000 head of cattle) generally contributing fewer. Cattle Weights Average cattle weights have increased gradually and steadily from 2002 through September 2017, according to our analysis of average weights reported to AMS and several industry stakeholders we interviewed. Figure 9 shows average monthly and annual cattle weights in live weight contracts from November 2002 through September 2017. In the figure, seasonal fluctuations are visible, with weights generally declining in late fall. Consumption of Beef and Other Proteins According to our analysis of consumption data from USDA’s Economic Research Service, there has been a broad societal shift in consumption from beef to chicken in the United States since the mid-1970s. Increasing consumption of proteins such as chicken may shift consumption away from beef, which would put downward pressure on beef and cattle prices. Per capita chicken consumption has increased steadily for the past 40 years, though the growth in consumption has slowed since 2006. Per capita pork consumption has remained steady over the same period, while per capita beef consumption has largely decreased. Figure 10 provides information on the long-term trends in per capita consumption of beef, pork, and chicken in the U.S. from 1970 through 2016. Product Differentiation and Branded Beef As consumer tastes and demands have changed since 2000, producers have increased differentiation of their products. For example, producers have increased grass-fed options since 2000, and organic beef became available in 2002. In addition, producers have increased their offerings of branded beef varieties (e.g., Certified Angus and Wagyu beef). As beef products become increasingly differentiated and more branded varieties become available, average prices of beef and fed cattle may be expected to rise. Packers are unlikely to differentiate or brand a product if it is less valuable than an unbranded commodity product, so they would likely only create differentiation or branding for higher-value beef products, which are sold at higher prices than commodity beef. Because of this, packers will likely pay more for the fed cattle that produce these higher value products. We analyzed information on branded beef from AMS and found that branded beef sales increased from about 7 percent of total beef sales in 2002 to about 17 percent of total beef sales in 2017. 
Some experts we spoke with pointed out that the increase in formula and forward contracts has gone hand-in-hand with the increase in product differentiation and branding. They told us that, as retailers demand specific types or brands of beef, the industry has relied more heavily on formula and forward contracts to ensure a steady supply of those types and brands. Beef Price Spread In the fed cattle market, the fed cattle-retail price spread is the difference between the price feeders receive for their cattle and the price consumers pay for beef at the retail level. The vast majority of the price spread comes from the spread between the wholesale and retail levels. In short, the retail price is much higher than the wholesale price that retailers pay packers for beef, which, in contrast, is not much higher than the price packers pay feeders for fed cattle. The fed cattle-wholesale price spread remained fairly steady from 2000 through May 2016, typically remaining below $0.50 per pound of retail weight equivalent. The price spread, at both the fed cattle-wholesale and wholesale-retail levels, spiked in June 2016. The spike was small but persistent, continuing through the end of 2016. To be more specific, the fed cattle-wholesale spread was between $0.51 and $0.67 from June through December, compared with a range of $0.36 to $0.52 from January through May of 2016. The price spread dropped to lower levels in early 2017, then spiked again from May through August 2017, the latest date for which data were available at the time of our review. Similar to the fed cattle-retail and fed cattle-wholesale spreads, the fed cattle share of the beef dollar is a measure of the percentage of the retail price of beef made up by the price of fed cattle. The fed cattle share of the beef dollar dropped from about 65 percent in the early 1970s to about 50 percent by the mid-1990s. From 2000 to the present, the fed cattle share of the beef dollar has remained relatively flat, rising to close to 60 percent in 2014 but regularly being as low as 40 percent. Several factors can drive changes in the fed cattle share of the beef dollar. For example, a report from USDA's Economic Research Service (ERS) found that much of the decline in the proportion of the beef dollar paid to producers can be driven by technology changes that help increase productivity; and, as producers have become more productive, they have been willing and able to supply more animals to packers at lower prices. Figure 11 shows the historical price spread for beef from January 1970 through December 2016. Bovine Spongiform Encephalopathy and Beef Exports Some industry stakeholders told us that the bovine spongiform encephalopathy (BSE) event—in which the disease was detected in a cow in the United States in 2003—has had a lasting effect on beef exports from the United States. Specifically, these industry stakeholders told us that the 2003 event—and additional BSE events in 2005 and 2006—has continued to depress demand for beef by closing certain foreign markets to U.S. beef. Based on our review of ERS export data, the total tonnage of beef exports plummeted in January 2004 due to the BSE outbreak in the United States and did not consistently return to levels seen before the BSE outbreak until May 2010. Appendix III: Econometric Model to Estimate the Impact of Market Power on Fed Cattle Transaction Prices This appendix provides information on the econometric model we used to estimate the impact of market power on transaction prices for fed cattle.
It describes our econometric model in detail, provides the results of our analysis, and discusses some limitations. Econometric Model We developed an econometric model to analyze the effect of market concentration on the cash price of fed cattle. Specifically, we analyzed how the level of market concentration of beef packers (packers) affected the cash price of fed cattle. The U.S. fed cattle market is characterized by a large number of feedlot operators (feeders) that sell to a small number of packers for slaughter at packing plants; approximately 83 to 85 percent of total fed cattle sold for slaughter are purchased by four major packing companies. To analyze the packing market, we obtained transaction data from the Agricultural Marketing Service's Packers and Stockyards Program (P&SP) within the U.S. Department of Agriculture (USDA). The data we used for our analysis comprised about 127,000 cash transactions collected from these four largest packers from 2013 through 2015. The data identified the packing plant involved in each transaction; however, we generally could not identify the specific feedlot involved, especially when comparing transactions across different packers. The data were administrative data from each packer, and in some instances, a packing plant may have used a unique set of identifying codes for the feedlots with which it did business. Therefore, we could only consistently identify different feedlots associated with a given packing plant. The same feedlot may have done business with a different plant, but we were unable to identify this information consistently across plants. The data contained 963 different dates on which transactions occurred, 970 counties where feedlots were located, and 23 packing plants that purchased fed cattle. To reduce distortion from dissimilar transactions and outliers, we eliminated transactions that were not cash transactions, as well as cash transactions that met any of the following exclusion criteria. Specifically, we excluded transactions with (1) fewer than 10 animals; (2) a per-pound carcass price of less than 1 dollar or of 10 dollars or more; (3) an average weight per animal that was less than 500 pounds or more than 2,000 pounds; (4) a slaughter date more than 14 days after the purchase date; (5) more than 10 percent cows in the lot; and (6) more than 10 percent ungraded cattle in the lot. Dependent Variable Our dependent variable in the model was the logarithm of the transaction price per carcass-based pound (not including freight) between a packing plant and a feedlot on a given purchase date. Explanatory Variables Our model included a variety of explanatory variables, including the Herfindahl-Hirschman Index (HHI), beef quality and yield grades, feedlot size, live weights, and fixed effects for time and geographic location of the feeder and packing plants. HHI. The key variable in the model was the HHI, a measure of packer market concentration faced by feedlots in a given geographic area—analyzed in the model by county—on a given purchase date. The HHI takes the same value for any transaction in a given county on a given purchase date (it varies only at the county level and over time). Our calculation used a 90-day moving average window (the current day and the 89 days prior) to calculate the HHI for each county on each date.
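The screening criteria and the 90-day moving-window HHI described above can be implemented directly; the sketch below is a simplified version with hypothetical column names and is not the code used in the actual analysis. It applies the exclusion criteria to a table of transactions and then computes, for each county and purchase date, an HHI from each packer's share of head purchased in that county over the trailing 90 days.

```python
# Simplified sketch; column names are hypothetical, and the actual analysis involved
# additional steps (for example, instrumenting the HHI, as described later in this appendix).
import pandas as pd

def apply_screens(tx: pd.DataFrame) -> pd.DataFrame:
    """Keep cash transactions that pass the exclusion criteria described in this appendix."""
    return tx[
        (tx["method"] == "cash")
        & (tx["head_count"] >= 10)
        & tx["carcass_price_per_lb"].between(1, 10, inclusive="left")
        & tx["avg_live_weight"].between(500, 2000)
        & ((tx["slaughter_date"] - tx["purchase_date"]).dt.days <= 14)
        & (tx["pct_cows"] <= 10)
        & (tx["pct_ungraded"] <= 10)
    ]

def rolling_county_hhi(tx: pd.DataFrame, window_days: int = 90) -> pd.DataFrame:
    """For each county and purchase date, compute an HHI from packer shares of head
    purchased over the trailing `window_days` (the purchase date and the days before it).
    Assumes purchase_date is a datetime column. Note: the report computed the HHI from
    all transaction types, not only the screened cash transactions used in the regression."""
    rows = []
    for county, grp in tx.groupby("county"):
        head_by_packer = (grp.groupby(["purchase_date", "packer"])["head_count"]
                             .sum().unstack(fill_value=0))
        windowed = head_by_packer.rolling(f"{window_days}D").sum()   # trailing head count
        shares = windowed.div(windowed.sum(axis=1), axis=0)          # packer shares
        hhi = (shares ** 2).sum(axis=1)                              # sum of squared shares
        rows.append(pd.DataFrame({"county": county,
                                  "purchase_date": hhi.index,
                                  "hhi": hhi.values}))
    return pd.concat(rows, ignore_index=True)
```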
Although our model included only cash transactions, we calculated the HHI using all transactions; that is, we included formula contracts, forward contracts, negotiated grid transactions, and cash transactions. However, we excluded transactions involving packer-owned feedlots and feedlots not in the United States from our HHI calculation. Econometric analyses that use HHIs to explain prices generally consider the possibility that the HHI variable is endogenous, that is, possibly correlated with the error term; to address this issue, we instrumented our HHI variable. Beef quality and yield grades. For each lot of cattle transacted, we used as controls the percent of fed cattle in each transaction whose beef graded as Choice or better. We also used as a control the percent of fed cattle in each transaction whose beef yield was rated grades 1 or 2. In addition, we included a measure of the percentages of Holstein cattle, ungraded cattle, and cows in the lot. Large feedlots. We used an indicator (dummy) variable for large feedlots—specifically feedlots that were in the 95th percentile of feedlots for the packing plant with which the transaction occurred. We used this variable to control for possible extra bargaining leverage that large feedlots may have with packers. Live weight. We controlled for the average live weight of the cattle lot by including categorical variables (dummies) for lots averaging less than 1,050 pounds and for lots averaging more than 1,500 pounds (the 1,050-pound to 1,500-pound category is the omitted comparison category). We selected these category cut-off values because generally prices are reduced for cattle lots with an average weight of less than 1,050 pounds or more than 1,500 pounds. Fixed effects. We used a set of indicator variables to account for fixed effects associated with packing plants, time, and individual counties. Specifically, we used a set of packing plant indicator variables to account for effects pertaining to individual packing plants, such as a plant's location. We also used a set of time indicator variables—one for each purchase date in the data—to account for prevailing market conditions on that particular day, such as whether prices were generally low or high on that day. Last, we used a set of county indicator variables to account for local or regional effects that are time invariant, such as a county's transportation availability or proximity to inexpensive sources of feed. The Model Our model was written as: y_{i,t} = X_{i,t}β + ε_{i,t}, i = 1, …, N_t; t = 1, …, T. In this equation, y_{i,t} was the dependent variable in our model, namely, the logarithm of the transaction price per pound. X_{i,t} was the list of control variables used in the model, including the sets of fixed effects for plants, counties, and purchase dates. β was the list of parameters associated with the control variables (X_{i,t}). ε_{i,t} was an error term. Each observation in the model was a single transaction between a packing plant and a feedlot. The subscript i represented a transaction between a feedlot and a packing plant, and the subscript t represented the purchase date of that transaction. The term N_t expressed the fact that the number of transactions may have varied across purchase dates. Results Our results suggest that when there is a more concentrated market of buyers (packers), those packers will have more negotiating and market power, and therefore, with other factors held constant, these packers will be able to purchase fed cattle at lower prices from feeders.
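To make the structure of this specification concrete, the sketch below fits a simplified version with ordinary least squares using the statsmodels formula interface. The variable names are hypothetical, and the sketch deliberately omits the two-stage least squares instrumenting of the HHI described in this appendix, so its estimates would not reproduce the reported results; it is an illustration of the model's form, not the analysis itself.

```python
# Simplified illustration of the fixed-effects price regression's form; column names are
# hypothetical, and the actual analysis instrumented the HHI (two-stage least squares).
import numpy as np
import statsmodels.formula.api as smf

def fit_price_model(screened_cash_transactions):
    """Fit log price on the HHI, lot characteristics, and plant, county, and date fixed effects."""
    formula = (
        "np.log(carcass_price_per_lb) ~ hhi"
        " + pct_choice_or_better + pct_yield_1_or_2"
        " + pct_holstein + pct_ungraded + pct_cows"
        " + large_feedlot + weight_under_1050 + weight_over_1500"
        " + C(plant_id) + C(county) + C(purchase_date)"   # fixed effects as dummies
    )
    fit = smf.ols(formula, data=screened_cash_transactions).fit(cov_type="HC1")
    # One way to translate the HHI coefficient into a price effect: a move across the
    # interquartile range of roughly 0.10 in the HHI changes log price by 0.10 * beta.
    beta = fit.params["hhi"]
    iqr_price_effect = np.exp(0.10 * beta) - 1
    return fit, iqr_price_effect
```

In practice, creating one dummy for each of the hundreds of counties and purchase dates in the data is computationally heavy, and a within (fixed-effects) transformation or a dedicated panel estimator would typically be used instead; the formula above is written out only to mirror the description of the model.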
We found a significant negative parameter estimate for our HHI explanatory variable. This estimate suggests that for each 0.01 increase in the HHI—meaning a greater degree of packer concentration—there is about a 0.86 percent reduction in the price of cattle. The interquartile range for the HHI is from approximately 0.45 to 0.55, which implies an approximate price effect of 9 percent across that range. For a carcass price of about $2.22 per pound—the average for 2013 through 2015, based on the data from P&SP—that translates to a variation of about 20 cents per pound across this HHI range. The variables used in the model to control for effects other than HHI had the expected directional effect on price or else were not significant. Parameter estimates for the indicator variables for beef quality and yield were both significant and positive, suggesting that fed cattle with higher beef quality grade and yield levels have a higher price. The indicator variable for lots with an average weight of less than 1,050 pounds suggests that lots with very low weights received lower prices. However, the variable for lots with an average weight of more than 1,500 pounds was not significant. The feedlot size variable was not statistically significant. Our controls for the percent of Holsteins and ungraded cattle in the lot were both negative and statistically significant, as expected. The percent of cows in the lot was not statistically significant. Finally, our measure of feedlot size was positive and statistically significant, suggesting that larger feedlots may be able to obtain higher prices from packers. Our results suggest that instrumenting the HHI variable was appropriate. We used a measure of the proportion of total fed cattle traded by the packer using non-cash transaction methods as an instrument. Our results satisfied the essential specification tests for appropriate use of instruments: The endogeneity tests rejected the null hypothesis that the endogenous variable (HHI) can be treated as exogenous. Thus, it is appropriate to instrument the HHI variable. Our results rejected the null hypothesis of weak instruments (Sanderson-Windmeijer, Stock-Wright, and Anderson-Rubin tests). The F-statistic from the first stage of the regression (20.36) is highly significant and exceeded the critical Stock-Yogo value for the 10 percent maximal instrumental variable size (16.38). Thus, the instruments had sufficient explanatory power in the first-stage regression equation. See Table 1 for a more detailed description of our results. Limitations Our analysis had a number of limitations, as listed below. Only transactions for the market's four major packers were included in the data from P&SP. As a result, our HHI variable is a "large firm HHI." Whereas these four firms account for approximately 83 to 85 percent of total cattle sold, the remaining 15 to 17 percent of fed cattle sold in the United States were not included in the data from P&SP. In addition, we did not use some of the four large packers' plant-level data because the data were missing key variables, such as the purchase date. Therefore, our estimates of HHI in any location are likely to be overestimates, and in general, our HHI estimates for any location should be viewed only as relative to other locations in this analysis and should not be compared with measures in other studies or industries. The feedlot location may not be in the city listed for it.
In some cases, the feedlot city that is named in the data from P&SP as the location of the feedlot is not the exact feedlot location. The feedlot may be somewhat outside the city or at a headquarters location. Feedlot concentration differs across counties. The analysis reflects the fact that, on average, in any given area, feedlots are far more numerous and packing plants are relatively few in number. However, this is not generalizable to all areas. Although there are a relatively large number of feedlots in the United States in general, in some cases, it is possible that a relatively small number of feedlots account for a relatively large proportion of cattle sold to some packing plants. Our data could only identify a feedlot that sold cattle to a particular packing plant, so we could not identify which feedlots might have sold fed cattle to multiple plants. We controlled for this in the regression model in part by including an indicator variable for packing plants' transactions that were with a large feedlot (in the 95th percentile for that particular packing plant). HHI calculations must use a geographic definition. In our analysis, we included fixed effects for each packing plant as well as fixed effects for each county, which control for variations in market conditions in different areas that are constant over time. The calculation of the HHI takes into account transactions flowing from different counties to the same packing plants and from a single county to different packing plants, so the HHI calculations by necessity must use some geographic definition. However, our HHI calculation does not depend upon a county to define a market, but simply measures the market concentration conditions that the feedlots in that county face. The level of detail and scope in the data varied across the different packing plants in our data set. For example, a detailed breakdown of the type of cattle was not available on a consistent basis across all packers and packing plants. Therefore, we were unable to control for some variation in quality and type of cattle in our model. However, this may be mitigated by our use of fixed effects if certain transaction characteristics—for instance, the type or breed of cattle sold—are fairly constant over time in a given county or plant. As in any model, there is the possibility of misspecification or bias. We used various econometric tests for our instrumental variables estimation (two-stage least squares): endogeneity of the HHI measure, the J-statistic for identification, and weak instrument tests. However, in any instrumental model there is a possibility that the instruments are inappropriate or the estimators are biased, and that bias may be exacerbated in the presence of outliers. Sargan recommends a simple procedure for assessing the efficacy of two-stage least squares versus ordinary least squares. Our results using this criterion suggest our use of two-stage least squares is justified. Packing plants from the same company likely did not compete with one another. Our HHI measure was calculated treating each packing plant as a separate entity rather than at the packing company level, despite the fact that multiple plants are owned by each of the four major packing companies. Therefore, we assumed that packing plants "compete" to some extent regardless of whether they are owned by the same company. However, in the data we used for our model, there were no plants owned by the same packing company in the same city. There may be noise in the data.
The data were administrative data and may have random noise associated with issues such as different administrative procedures of a plant, affecting when and how the data are entered. We cleaned the data to remove observations that appeared unreasonable or not easily explained, but some variation in prices remains. Specifically, in the data used in our model, the median intra-day price variation was about 18 percent for the 1st to 99th percentile and about 11 percent for the 5th to the 95th percentile. Appendix IV: Recognized Experts That We Interviewed Appendix V: Comments from the U.S. Department of Agriculture Appendix VI: GAO Contacts and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to the contacts named above, Thomas Cook (Assistant Director), Michael Kendix (Assistant Director), Kevin Bray, Candace Carpenter, Tara Congdon, Jaci Evans, Dan Royer, Monica Savoy, Kiki Theodoropoulos, Richard Tsuhara, and Jarrod West made key contributions to this report. Related GAO Products U.S. Agriculture: Retail Food Prices Grew Faster Than the Prices Farmers Received for Agricultural Commodities, but Economic Research Has Not Established That Concentration Has Affected These Trends. GAO-09-746R. Washington, D.C.: June 30, 2009. Livestock Market Reporting: USDA Has Taken Some Steps to Ensure Quality, but Additional Efforts Are Needed. GAO-06-202. Washington, D.C.: December 9, 2005. Economic Models of Cattle Prices: How USDA Can Act to Improve Models to Explain Cattle Prices. GAO-02-246. Washington, D.C.: March 15, 2002. Packers and Stockyards Programs: Actions Needed to Improve Investigations of Competitive Practices. GAO/RCED-00-242. Washington, D.C.: September 21, 2000. Beef Industry: Packer Market Concentration and Cattle Prices. GAO/RCED-91-28. Washington, D.C.: December 6, 1990.
Why GAO Did This Study The U.S. cattle industry accounted for about $64 billion in receipts in 2016, according to USDA. The price of fed cattle has fluctuated widely from 2013 through 2016 and experienced a sharp downturn beginning in late 2015, raising concerns about the market and questions about USDA's oversight. GAO was asked to review issues related to the U.S. cattle market. This report (1) describes key factors that affected changes in fed cattle prices from 2013 through 2016; (2) describes what CFTC found about possible trading irregularities in the futures market for fed cattle in 2015 and any changes to the futures contract for fed cattle since 2015; and (3) examines factors that may affect USDA's routine monitoring to ensure against discriminatory or anticompetitive practices in the fed cattle market. GAO reviewed economic data and USDA and CFTC documentation; analyzed transaction data on beef packer purchases from 2013 through 2015; and interviewed recognized experts, cattle industry stakeholders such as feedlot operators and packers, and agency officials. What GAO Found Supply and demand factors, such as a drought that affected the price of cattle feed, affected changes in prices of fed cattle—those ready for slaughter—from 2013 through 2016. According to industry experts and GAO's analysis, a drought from late 2010 to early 2013 led the cattle inventory to fall and rise and, in turn, fed cattle prices to fluctuate (see figure). GAO's analysis of cattle market data from the U.S. Department of Agriculture (USDA) also indicated that competition levels among packers that slaughter and process fed cattle did not appear to affect the national price changes in the fed cattle market in 2015 but that areas of the country with less competition among packers had lower cattle prices. The Commodity Futures Trading Commission (CFTC)—an agency that regulates cattle futures markets where participants buy and sell standardized agreements for cattle at an agreed-upon price at a specified date in the future—did not find evidence of trading irregularities in the cattle futures market in 2015. However, to better align futures contracts with the actual fed cattle market, CFTC reviewed changes to contract terms and will continue to monitor those changes. The Packers & Stockyards Program (P&SP), which oversees the cattle industry within USDA's Agricultural Marketing Service (AMS), does not have routine access to daily data for transactions between feedlot operators, which produce fed cattle, and packers. Those data are collected by AMS's price reporting group, which does not routinely share them with P&SP because officials said it is prohibited by statute from doing so. The Livestock Mandatory Reporting Act of 1999 specifies that the Secretary of Agriculture may authorize the sharing of these data for enforcement purposes, which USDA interprets as an ongoing investigation, not market monitoring. In November 2017, USDA reorganized P&SP under AMS, and officials said it was too early in the reorganization to determine whether AMS would view routine sharing of these data any differently. Reviewing the extent to which these data can be shared with P&SP provides an opportunity to enhance P&SP's oversight of the fed cattle market. Determining whether it is advisable to request additional exceptions from information disclosure restrictions from Congress would help USDA strengthen its oversight.
What GAO Recommends GAO is making two recommendations, including that USDA review the extent to which, under statute, the price reporting group can share daily transaction data with P&SP and, if USDA determines that the statute does not permit such sharing and that such sharing would be advisable, submit to Congress a proposal to allow it. USDA agreed and subsequently determined that the act does not allow for such sharing and that such sharing would not be advisable, citing concerns about the public's trust in the program.
Background The National Strategy for Combating Wildlife Trafficking and Reducing Demand The 2014 National Strategy defines wildlife trafficking as including all aspects of the trade, from poaching and transit through consumer use. The National Strategy outlines the guiding principles and strategic priorities for U.S. efforts to stem illegal trade in wildlife, and one of the top three priorities identified is to “Reduce Demand for Illegally Traded Wildlife.” Specifically, the National Strategy states that, as a strategic priority, reducing demand for illegally traded wildlife calls for raising public awareness of the harms done by wildlife trafficking through outreach in the United States and public diplomacy abroad. The National Strategy also states that the Task Force will seek to enlist individual consumers in this fight through education and outreach to reduce demand for these products and change consumption patterns that drive wildlife trafficking. While the Implementation Plan outlines a unique set of activities to reduce demand, other activities under the plan’s objectives may indirectly affect demand. For example, one of the objectives under “Reduce Demand for Illegally Traded Wildlife” is to raise public awareness and recognition of wildlife trafficking and its negative impacts and associated risks of prosecution (emphasis added) as a means to change harmful consumption patterns. Implementing robust legal frameworks and effective enforcement increases the risk of prosecution, which may deter not only wildlife traffickers but also consumers, who may risk legal penalties. For the purposes of this report, we consider efforts to reduce consumption of wildlife and law enforcement efforts to prevent illegal use of wildlife as demand reduction-related activities. The Implementation Plan designates various U.S. agencies to lead or participate in achieving the strategic priority of reducing demand for illegally traded wildlife, which are outlined in table 1. In fiscal years 2014 through 2017, Congress directed that not less than certain specified amounts, totaling $271 million over the 4 fiscal years, be made available to combat wildlife trafficking (see fig. 1). Global Efforts to Reduce Wildlife Trafficking Since September 2016, U.S. agencies and global stakeholders have taken a range of actions to address CWT issues (see fig. 2). For example, in October 2016, Congress passed the Eliminate, Neutralize, and Disrupt (END) Wildlife Trafficking Act of 2016. Among other things, the act calls for the Secretary of State, in consultation with the Secretaries of the Interior and Commerce, to submit an annual report that lists each country determined by the Secretary of State to be a focus country and a country of concern. The act also directs the Task Force to submit an annual strategic assessment of its work and provide a briefing to Congress. Additionally, the 17th Meeting of the Conference of the Parties for the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) took place September 24, 2016, to October 5, 2016. In December 2016, China announced that it would close its domestic ivory market by the end of 2017, and in March 2017, China announced closure of 67 ivory carving entities and retail outlets across the country. Estimates of the Illegal Wildlife Trade As reported by the United Nations, the International Criminal Police Organization (INTERPOL), and other sources, wildlife trafficking networks span the globe. 
Although sources have attempted to measure trade flows, there is no precise estimate of illegally traded wildlife, and available estimates are subject to uncertainty. In 2014, the United Nations Environment Programme (UNEP) reported that various sources estimate the global scale of illegal wildlife trade is between $7 billion and $23 billion annually. In 2016, UNEP and INTERPOL estimated that the scale of wildlife crime may have increased, based on a rise in environmental crime. They estimate that environmental crime has increased by 26 percent since 2014 and continues to increase by 5 to 7 percent annually. Illegal, unreported, and unregulated (IUU) fishing often is not included in these estimates since discussion of wildlife trafficking, as it relates to marine species, focuses on those species protected under CITES and under statutes such as the Endangered Species Act and the Marine Mammal Protection Act, according to NOAA. While IUU fishing targets commercially harvested marine species, as the Implementation Plan outlines, the trafficking of fisheries products is a form of wildlife trafficking. In 2016, UNEP and INTERPOL estimated that the global scale of IUU fishing ranges from $11 billion to $24 billion annually. Demand for Illegal Wildlife and Wildlife Products Is Difficult to Measure, but Data and Reports Indicate Range of Species and Products Is Diverse The United States, China, and countries in Southeast Asia consume many types of legal and illegal wildlife for diverse purposes. It is difficult to measure demand for illegal wildlife and wildlife products due to the illicit nature of the trade, but various data sources and reports provide examples of the range of wildlife demand by illustrating types of wildlife that are seized by governments and purchased by consumers. Illegal Wildlife Demand and Trade Are Diverse in the United States U.S. trade in wildlife and wildlife products includes a variety of wildlife such as live reptiles, birds, mammals, and elephant ivory, according to law enforcement information, reports, and government and NGO officials. FWS and NOAA data on wildlife products seized at U.S. ports provide examples of the diversity of illegally traded wildlife in the United States. FWS and NOAA may seize wildlife products for a variety of reasons that include import, export, or sale of endangered or threatened species protected under U.S. laws and regulations. For example, FWS may seize a shipment due to invalid documentation needed to clear the shipment. From 2007 to 2016, the top 10 wildlife shipments—by species or species group—seized nationally by FWS were coral, crocodile, conch, deer, python, sea turtle, mollusks, ginseng, clam, and seahorse. These seized wildlife were in a variety of forms when confiscated, as shown in table 2. For example, more than half of seized seahorse shipments were dead whole animals, and a smaller percentage were medicinal parts or products. During the past 10 years, more than one-third of the wildlife shipments seized by FWS were confiscated while being imported from or exported to Mexico (13.6 percent), China (13 percent), or Canada (8.6 percent). Additional examples of wildlife seized by FWS are shown in figure 3 and in appendix II. Seizure data from NOAA Fisheries' Office of Law Enforcement show that it has also seized a variety of marine wildlife products. From 2007 to 2016, confiscated shipments have included whale teeth and meat, seal oil pills, shark fins, and seal fur products like mittens and boots, according to NOAA's seizure data.
Seizure data from FWS and NOAA provide a helpful illustration of wildlife that has been confiscated at U.S. ports but may not be fully reflective of the illegal wildlife trade and consumption. Seizure data show the types of wildlife confiscated at ports of entry in a country, but there are limits to what these data can tell us about the demand for products like illegally traded wildlife. Various factors influence the number of seizures at any given time or in any location, such as the level of illicit trade and the level and efficacy of enforcement efforts. For example, as part of their enforcement, both NOAA and FWS conduct inspections of shipments at U.S. ports. In some cases, they conduct targeted inspections that may be based on information they have about a particular species or market, which may influence detection and seizures of illegal products. NOAA and FWS officials indicated that they can increase their enforcement efforts by targeting investigations on specific species or products. This additional effort may result in the seizure of more shipments than would be made using routine inspection processes. In 2016, the NGO WildAid published a baseline survey conducted by KRC Research to inform a public awareness campaign effort with FWS. The survey reported that roughly 1 in 10 respondents in the United States indicated that they had purchased or knew someone who had purchased live animals such as iguanas, parrots, parakeets, or tortoises. A smaller proportion of respondents (roughly 1 in 20) reported that they had purchased or knew someone who had purchased ivory. Reporting by the International Fund for Animal Welfare in 2013 also identifies the United States as a key end market for reptiles such as crocodiles, pythons, and caimans and for wildlife products such as ivory. Based on reporting and discussions with U.S. government officials, there may be varying reasons for the demand for wildlife and wildlife products. Potential drivers of demand in the United States—in particular, demand for illegal wildlife from Latin America—may include a desire for rare and exotic plants and animals, according to reporting by Defenders of Wildlife. FWS officials in Miami told us that some of the wildlife products they confiscate—such as products from cruise passengers—were purchased by travelers who were unaware of the restrictions on the wildlife product. FWS and U.S. Customs and Border Protection (CBP) officials told us that consumers use wildlife for many different purposes, including as pets, trophies, jewelry, food, religious items, and for medicinal purposes. For example, FWS officials in Miami told us that coral is often smuggled as part of the pet trade for use in aquariums. At the Port of Miami, an FWS official told us that FWS has also seized queen conch meat, which is exported from the Caribbean as a delicacy, according to FWS. FWS has also found wildlife intended to be used as art or trophies. For example, during their investigations, FWS officials in Miami have found a rhino bust being sold for $80,000 and a giraffe bust being sold for $100,000. FWS has also seized scarlet macaw feathers being used in jewelry, elephant ivory carved as decorative pieces, and taxidermy big cats seized as hunting trophies.
Illegal Wildlife Demand and Trade Are Diverse in China and Countries in Southeast Asia Demand for illegally traded wildlife in China and countries in Southeast Asia includes many wildlife species and end uses, according to reports and government and NGO officials in the region. For example, iconic wildlife such as elephants and rhinos are often cited in reports and by officials in the field as examples of wildlife consumption in China and Southeast Asia, but other wildlife, such as pangolins, bears, sharks, and sea turtles are also named among the wildlife being consumed. China is a consumption country for illegal wildlife, while Hong Kong, Thailand, and Vietnam are consumption and transit locations, according to officials we spoke with from the U.S. government, foreign governments, international organizations, and NGOs in these locations. Thailand often serves as a transshipment point for illegal wildlife due to its land borders with China, Laos, and Cambodia, according to government of Thailand officials. Government officials in Vietnam stated a similar claim, explaining that the country is often a transshipment point due to its land borders with China and Laos. Table 3 displays examples of wildlife consumed and trafficked in China, Hong Kong, Thailand, and Vietnam, according to U.S. government, foreign government, and NGO officials in-location and at DOI Headquarters. The International Fund for Animal Welfare has reported that China is the world’s largest consumer of illegal wildlife products due to its demand for ivory, rhino horn, pangolin scales, bear bile, tiger bone, and shark fin soup. According to analysis by the United Nations Office on Drugs and Crime (UNODC) of seizure data from the World Wildlife Seizure database, China was the destination for about 40 percent of the ivory shipments that had reported destinations from 2006 to 2015. Reports also identify Thailand as a part of the illegal ivory trade. INTERPOL’s 2015 investigation, Operation Worthy II, resulted in seizures of several tons of elephant ivory in Thailand and Singapore. TRAFFIC, the wildlife trade monitoring network, visited retail outlets in Bangkok, Thailand, during certain periods in 2013 and 2014 and, through covert surveys of vendors, found bangles, rings, toothpicks, hairpins, chopsticks, sculptures, and other products made of ivory for sale. TRAFFIC reports that for seven consecutive months, from November 2013 to May 2014, their surveys found more than 10,000 ivory items openly on display for sale in Bangkok. An NGO official we spoke with in China told us that part of the NGO’s efforts includes targeting Chinese tourists traveling to Africa and Southeast Asia to prevent purchasing of ivory as well as rhino horn. UNODC has reported that more than two-thirds of rhino horns seized from 2006 to 2015 were destined for China or Vietnam. Government officials in Hong Kong told us that they have also seized a variety of wildlife products such as pangolin scales and turtles. Examples of wildlife products seized by the government of Hong Kong are shown in figure 4 and in appendix II. Hong Kong’s government has also seized elephant ivory, though as of March 2017, certain registered ivory can be legally traded in Hong Kong under a license. The Organisation for Economic Co-operation and Development reports that high economic growth may fuel consumer demand for status goods such as art from elephant ivory and traditional medicine using rhino horn. 
According to NGO officials we met with in Vietnam and China, consumers purchase illegal wildlife products as a status symbol or to demonstrate wealth. Wildlife is considered to be expensive and exotic in these countries, and there is conspicuous consumption in some areas, according to State officials in Vietnam and an FWS attaché. UNODC reports that a survey of 18 restaurants— identified as high end by UNODC— in Vietnam found that all of these restaurants sold pangolin meat. UNEP and INTERPOL describe a similar phenomenon of a culture of conspicuous consumption for wildlife products that indicate wealth. These organizations report that buyers place higher value on illegal wildlife products when they are considered rare and uncommon and thus drive up prices for illegal wildlife. Higher prices and the perception of luxury associated with products such as tiger pelts and shark fin soup may attract consumers who want to display their wealth, according to Global Financial Integrity. Another end use of illegally traded wildlife is in traditional medicine in China and Vietnam, according to State and NGO officials in these countries. They stated that there are beliefs that certain wildlife provide health benefits; for example, pangolin scales are believed to help lactating mothers produce milk. State and NGO officials noted that traditional Chinese medicine has a long history of using various wildlife products. For example, American ginseng root is often consumed as a medicinal ingredient in China, according to FWS. While export of American ginseng is permitted, there are restrictions based on factors such as the age of the root. FWS has seized American ginseng root being exported from the United States to China, and the Hong Kong government has seized American ginseng being smuggled into Hong Kong. For additional examples of how wildlife is consumed, see the side bar for results from surveys conducted by USAID’s Asia’s Regional Response to Endangered Species Trafficking (ARREST) program. Agencies Are Implementing Demand Reduction Efforts, but Opportunities Exist to Improve Collaboration in Southeast Asia FWS Raises Awareness and Enforces Laws in the United States and Builds Capacity Abroad State Contributes to CWT- Related Diplomacy, Training, and Outreach State has led diplomacy efforts and implemented training and outreach programs in Southeast Asia and China. Diplomacy: State’s diplomatic CWT efforts have included coordinating discussions between the U.S. and Chinese presidents in 2015 that, according to State, contributed to China and the United States jointly committing to further restrict ivory exports and imports. In June 2016, State and China’s State Forestry Administration also led the breakout session on wildlife trafficking during the eighth round of the U.S.-China Strategic and Economic Dialogue in Beijing. In December 2016, China announced that it would implement a domestic ivory ban, and in March 2017, China announced the closure of approximately one-third of the country’s licensed ivory stores and carvers. Training programs: State’s INL works to build law enforcement capacity abroad by supporting various trainings and workshops. For example, in 2015, Thailand INL funded training in wildlife trafficking and environmental crimes for 179 participants. In 2016, ILEA Bangkok sponsored two FWS-led CWT training courses and one environmental crimes course led by officials of the U.S. Environmental Protection Agency. 
During our field visit to Bangkok, we observed an ILEA course on combating wildlife trafficking for law enforcement officers, which is shown in figure 7. Through the United Nations Office on Drugs and Crime, INL funds Border Liaison Offices in Burma, Cambodia, Laos, Thailand, and Vietnam, intended to enhance interdiction and investigation capacity at land borders to prevent illicit trafficking. At these offices, INL has supported training for officials on wildlife trafficking detection and investigations. Outreach efforts: State has supported and implemented activities to raise awareness about wildlife trafficking in Southeast Asia and China. For example, State collaborated with USAID and the government of Vietnam to implement Operation Game Change, a 2015 awareness- raising effort designed to inform the Vietnamese public about wildlife trafficking issues such as the trade in rhino horn. In 2016, for World Wildlife Day, State’s Acting Assistant Secretary of State for Oceans and International Environmental and Scientific Affairs published an opinion editorial for the South China Post in Hong Kong to raise awareness about the illegal trade in elephant ivory. Four Major Initiatives Frame USAID CWT Efforts in Asia USAID conducts a range of CWT activities that are part of biodiversity, conservation, or other initiatives, but it has four major initiatives explicitly dedicated to CWT in Asia. Asia’s Regional Response to Endangered Species Trafficking: ARREST was a multiyear program completed in 2016. The program was designed to curb wildlife trafficking by reducing consumer demand, strengthening law enforcement capacity, and boosting regional learning networks. As part of ARREST’s demand reduction objective, the program implemented various awareness-raising efforts such as the iThink campaign, which developed and displayed public service announcements in airports and subways in China, Thailand, and Vietnam and on television stations in China and Vietnam. Through the initiative dubbed “Wildlife Friendly Skies,” the ARREST program raised awareness among airline and airport staff in transport hubs identified as hotspots for wildlife trafficking, which included Bangkok, Thailand; Guangzhou, China; Hanoi, Vietnam; Nairobi, Kenya; and Nanning, China. The ARREST program also held various courses aimed at strengthening capacity across Asia. For example, the program held 14 courses for 195 trainees who were from Cambodia, Indonesia, Lao People’s Democratic Republic, Thailand, and Vietnam to train participants on completing wildlife crime investigations. Saving Species: This USAID project began in 2016 and is a 5-year, $9.9 million effort to combat wildlife trafficking in Vietnam. The project specifically aims to reduce consumer demand for and consumption of illegal wildlife and wildlife products, strengthen wildlife law enforcement and prosecution, and improve and harmonize the legal framework for prosecuting wildlife crimes in Vietnam. Some of the project’s planned activities for the first year include market surveys focused on demand for wildlife such as elephant ivory, rhinos, pangolins, and tigers. The project plans to use the survey results to inform its awareness campaign messaging. The project also plans to conduct capacity assessments of enforcement agencies in Vietnam to inform development of targeted training curricula, modules, and materials. 
Wildlife Asia: This USAID activity, in collaboration with the Association of Southeast Asian Nations, aims to reduce the demand for wildlife products and to improve regional action to end wildlife crime in Southeast Asia and China. As of August 2016, USAID has issued one contract, with an estimated value of $22.9 million, to implement this activity. Protect Wildlife: This USAID project began in 2016 and is a 5-year, $24.5 million effort to reduce threats to biodiversity in the Philippines such as poaching and the illegal trade of wildlife and wildlife products as well as to sustain healthy ecosystems. USAID is working with public and private partners in the Philippines to strengthen conservation policies and improve habitat management and on-site and off-site enforcement systems. USAID also conducts biodiversity and conservation initiatives that have CWT-related objectives but are not dedicated solely to CWT. For example, USAID implemented the Ecosystems Improved for Sustainable Fisheries project in the Philippines, designed to conserve marine biodiversity, enhance ecosystem productivity, and improve law enforcement at fisheries to combat illegal, unreported, and unregulated fishing. Other Agencies Also Contribute to CWT DOJ, NOAA, and Homeland Security also support efforts to combat wildlife trafficking in the United States and Asia. DOJ prosecutes criminals and publicizes through press releases the results of criminal convictions to encourage public awareness of this issue. DOJ also has participated in capacity-building workshops in Burma, Laos, Thailand, and Vietnam and CWT events such as the 2016 Hanoi Conference on Illegal Wildlife Trade and the annual U.S.–China Joint Liaison Group on law enforcement, in which DOJ, State’s INL, and other agencies participate in the Anti- smuggling Working Group. According to DOJ officials, DOJ also regularly advocates the use of the United Nations Transnational Organized Crime Convention as a legal basis for international cooperation to combat wildlife trafficking. Domestically, NOAA inspects and seizes shipments at U.S. ports, investigates cases of wildlife trafficking, and raises awareness about wildlife crimes. NOAA has a liaison at Homeland Security’s CTAC and, according to NOAA officials, the CTAC has allowed NOAA to more proactively target shipments and improve coordination with FWS and CBP through daily interaction and more information sharing. As part of their efforts to raise awareness about wildlife trafficking, NOAA also works with DOJ, FWS, and State’s Bureau of Public Affairs to publicly report information on and raise awareness about law enforcement efforts such as seizures. Internationally, NOAA provides technical assistance, conducts capacity-building, and serves as a resource in international policy discussions. For example, in collaboration with USAID, an analysis unit from NOAA assisted the Philippines in developing an intelligence assessment of illegal trade and trafficking in marine species. In November 2015, NOAA Office of Law Enforcement officers participated in the Association of Southeast Asian Nations Trade and Environmental Dialogue in Malaysia, providing presentations on illegal, unreported, and unregulated (IUU) fishing and ways to combat the trade in IUU fish and fish products. DHS’s CBP supports and coordinates with FWS and NOAA to interdict illegal wildlife shipments at U.S. ports. 
ICE HSI investigates wildlife crime in the United States, and in Asia it supports foreign government CWT efforts through capacity building and information sharing. For example, in Vietnam, ICE HSI regularly shares information on wildlife seizures with the host government to support investigations. In 2015, in Thailand, ICE HSI conducted a 5-day workshop on advanced wildlife trafficking investigations for officials across the government. Disagreement on Roles and Responsibilities Hindered Some CWT Activities in Southeast Asia Although agencies have worked together to combat wildlife trafficking, disagreement on roles and responsibilities has hindered some CWT activities in Southeast Asia, according to some officials. In prior work, we defined collaboration broadly as any joint activity that is intended to produce more public value than could be produced when the organizations act alone. We also identified practices that can enhance and sustain collaborative efforts, including establishing mutually reinforcing or joint strategies, defining and articulating a common outcome, and agreeing on roles and responsibilities. We found that agencies applied some collaboration mechanisms but also have an opportunity to improve on agreeing on roles and responsibilities. For example, the White House established a joint strategy, the National Strategy for Combating Wildlife Trafficking, in 2014. The strategy lays out guiding principles and strategic priorities for U.S. efforts to stem illegal trade in wildlife. In Southeast Asia, the U.S. embassy in Malaysia’s Integrated Country Strategy articulates mission goals and objectives for a coordinated effort among all U.S. agencies and includes prevention of illegal wildlife trafficking as a key activity, according to officials. In addition, U.S. missions in Bangladesh, Cambodia, India, Laos, Nepal, Thailand, and Vietnam are developing CWT-specific country strategies, according to officials. Agencies also defined and articulated a common outcome, outlined in the National Strategy for Combating Wildlife Trafficking Implementation Plan (Implementation Plan). The Implementation Plan states that success relies on agencies working in concert to carry out the objectives, which include strengthening enforcement, reducing demand for illegally traded wildlife, and building international cooperation. Under three strategic priorities, the Implementation Plan identifies 24 objectives and ways to measure progress for each. In Southeast Asia, State and USAID officials told us that they work toward those shared outcomes. In particular, they stated that to achieve the shared outcome of reducing demand for wildlife products, they cooperated on raising public awareness. For example, State collaborated with USAID in Vietnam to implement Operation Game Change, a 2015 awareness-raising effort designed to inform the Vietnamese public about wildlife trafficking issues. In addition, to achieve the common outcome of strengthening law enforcement capacity, USAID is partnering with State, FWS, and DHS and other nongovernmental actors to implement the Reducing Opportunities for Unlawful Transport of Endangered Species program, which aims to increase enforcement capacity at ports of entry in Vietnam and other countries. The Implementation Plan designates various U.S. agencies to lead or participate in achieving CWT strategic priorities, so it provides high-level direction on agency roles. 
However, the Implementation Plan does not define specific roles and responsibilities at the working level for mission staff implementing programs and activities. Officials at some missions reported that agreement on roles, responsibilities, and priorities facilitated collaboration on CWT activities in some instances. For example, an FWS attaché in the region told us that there has been effective collaboration between FWS, State, and ICE HSI due to agreement on roles and a shared understanding of key law enforcement terms and responsibilities. In Thailand, FWS and ICE HSI officials told us that they share information on cases, and FWS and State officials indicated that they have jointly conducted a variety of capacity-building activities across the region. State officials at ILEA in Bangkok attributed their successful regional collaboration with FWS to a mutual understanding that CWT capacity building is a responsibility that should be prioritized. State officials in Cambodia indicated that their Embassy CWT Interagency Working Group has been a forum for discussion among agencies in Cambodia to collaborate on CWT roles and activities. The working group has a designated lead agency and provides a forum to prevent or resolve potential differences in points of view among the agencies. However, some officials also reported instances of disagreement on roles and responsibilities that they said led to bad outcomes. For example, at the mission in Bangkok, Thailand, which coordinates CWT activities across the Southeast Asia region, agencies’ disagreements on roles and responsibilities have resulted in the delivery of inappropriate training activities and interference with U.S. efforts to cooperate with a foreign government, according to some officials. Specifically, FWS, State, and ICE HSI have disagreed with USAID on the roles and responsibilities that USAID implementing partners play with regard to law enforcement activities. USAID officials stated that they entrust their implementing partners to conduct law enforcement training and believe they sufficiently involve their U.S. agency counterparts. However, FWS, State, and ICE HSI officials believe that due to their law enforcement responsibilities specifically related to strengthening host countries’ antiwildlife trafficking enforcement efforts, they should be consulted and involved to a greater degree on activities directly related to such efforts. In Thailand, a USAID implementing partner’s lack of collaboration with U.S. law enforcement entities resulted in inappropriate training activities, according to some officials. Officials from FWS, ICE HSI, and an NGO told us that a CWT course conducted by a USAID implementing partner in Thailand was inappropriate due to a focus on ambush and military tactics, which are not suitable for the park rangers that received the training. In addition, another training course conducted in Thailand was not tailored for that country’s landscape, according to a U.S. official, who explained that the Thai officers receiving the training would be unable to apply its lessons locally due to differences in terrain. FWS and ICE HSI officials stated that they were not sufficiently consulted prior to the training and, although they have provided feedback to USAID about these issues, they expressed concern that USAID had not fully considered the feedback. USAID officials indicated that training on ambush or military tactics would not have been allowed, and they have no evidence it occurred. 
USAID officials also stated that they were unaware of training that was not properly tailored and that host countries generally praised the training provided by USAID's implementing partner. FWS and State officials in Thailand also told us that agencies' and implementing partners' efforts to share information on wildlife crime with foreign governments have been fragmented due to disagreements about roles. For example, USAID's implementing partners and FWS separately approached foreign government entities to provide information or support during a recent law enforcement seizure of wildlife products. According to State and FWS officials in Thailand, while USAID's implementing partner has a role in providing information that can support CWT activities, U.S. agencies in-country are responsible for official engagement on law enforcement matters and, therefore, should take the lead in communicating with host governments, particularly in criminal investigations. According to USAID officials, USAID and its implementing partners share this responsibility and have a role to play. USAID officials told us that they were aware of the difference in views and acknowledged that there may have been instances in which an implementing partner overstepped. USAID officials further explained that they have made an effort to address this particular issue by changing the implementing partner as well as changing the CWT program structure from a cooperative agreement to a contract so that USAID has more oversight and control. The new implementing partner also brought in a law enforcement expert to help ensure that training and related activities will be appropriate, according to USAID officials. In addition, the new USAID program specifies that coordination with other agencies is required, and USAID conducted a regional workshop in March 2017 to serve as a mechanism for coordination. However, even after this conference, officials indicated that some agencies still had not agreed on the appropriate balance for how implementing partners should collaborate with U.S. law enforcement on criminal investigations. According to State and FWS officials, differences in agency views of their roles have hindered U.S. efforts to cooperate with a foreign government and confused foreign government officials who may not realize that an implementing partner is not a U.S. government agency and thus does not have the same authority. USAID officials indicated that they were unaware of instances where its implementing partner interfered with U.S. efforts to cooperate with a foreign government. Our work has shown that although collaborative mechanisms differ in complexity and scope, they all benefit from certain key features, including clarity of roles and responsibilities. For example, our work also notes that agreement on roles and responsibilities helped agencies determine who will lead a collaborative effort, clarify who will perform specific tasks, organize joint and individual efforts, and facilitate decision making. In addition, we have previously reported that key issues agencies should consider include whether participating agencies have clarified the roles and responsibilities of participants in collaborative efforts and whether participating agencies have agreed to a process for making and enforcing decisions. Some U.S.
missions in Southeast Asia are developing CWT-specific country strategies, which could provide a platform for the Task Force to give additional guidance on roles and responsibilities of mission staff engaged in CWT efforts in the region. Doing so would help clarify which agency will do what and facilitate maximum use of resources. FWS, State, and USAID Have Taken Steps to Assess CWT Activities FWS, State, and USAID Monitor CWT Activities FWS uses a range of measures to track the progress of its partners and grantees. For example, FWS has established standard indicators for CWT, which include the following: the number of arrests of large-scale wildlife traffickers resulting from a project's investigations, operations support, or both; and the number of arrested wildlife traffickers who are successfully prosecuted. Specifically for public relations efforts, the guidance calls for applicants to identify the desired behavior that the campaign is intended to encourage. In addition, FWS required 2017 CWT project proposals to identify all expected outputs or products of key project activities. This may include management plans, brochures, posters, training manuals, number of people trained, workshops held, hours of training provided, and equipment purchased. One FWS-funded program designed to counter pangolin trafficking to China by laying the foundations for reducing consumer demand provides an illustrative example of how it applied FWS monitoring guidance. Among other activities, the program proposed developing and piloting strategies to change behavior, with the goal of eliminating the market for illegally traded wildlife in key areas. The proposal identifies outputs, such as reports on consumer demand, and states that key components of developing a demand reduction strategy include identification of target audiences and the specific behaviors that the campaign aims to change. Quarterly reports as of April 2017 have described progress toward goals, outlining methodological details on how motivation and potential barriers for desired behavior will be measured. The program is scheduled to conclude in September 2017. The FWS Office of Law Enforcement Strategic Plan 2016–2020 identifies a set of CWT-related metrics, such as interdictions, penalties, fines, and value of illegal activities. FWS reports this information publicly. For example, Operation Crash, an ongoing nationwide criminal investigation led by FWS that focuses on the illegal trade in rhinoceros horn and elephant ivory, has resulted in 32 individuals sentenced and approximately 34 years of total prison sentences, $2 million in fines, and $6 million in forfeitures as of February 2017. Regarding U.S.-based partnerships, FWS monitored the U.S. Illegal Wildlife Demand Reduction Campaign by tracking the estimated number of people who see the ads (reach) and the number of times content is displayed (impressions). From launch through the middle of the second quarter of fiscal year 2017, FWS reported the following: Billboards: Monthly, about 5 million travelers are estimated to pass by the airport billboards at the international airports of Atlanta, Georgia; Chicago, Illinois; Los Angeles, California; and Miami, Florida. To date, an estimated total of about 45 million travelers have passed through these airports and may have seen the messages.
Social media: On September 7, 2016, FWS and its implementing partner, WildAid, launched the campaign with joint press conferences held at the Atlanta International Airport and at the Los Angeles International Airport. This resulted in more than 1 million impressions on Twitter, engagement of more than 236,000 friends on Facebook, and 5,000 new followers on Instagram. In addition, at the beginning of the campaign, WildAid completed a public survey to assess what percentage of the U.S. general public was aware of wildlife trafficking. At the conclusion of the 3-year campaign, WildAid intends to facilitate another public survey to evaluate the effectiveness of the campaign, with results expected in late 2018. State INL’s Guide to Developing a Performance Measurement Plan states that program teams are to monitor project activities and results in order to identify project successes and challenges, guide resource allocations, and facilitate improved performance. According to a State official, INL requires every CWT program implementer to provide quarterly progress and financial reports and final programmatic and financial reports. Quarterly reports must provide a quantitative and qualitative analysis of work performed and include, among other things, results achieved, challenges encountered, and action taken. At the end of a program, INL extracts best practices and lessons learned for future planning, according to a State official. We examined monitoring documentation related to three INL CWT programs as illustrative examples, described below. From February to March 2016, State’s ILEA in Thailand provided a Wildlife Trafficking Investigators course designed to cover a range of topics, including case management, corruption, and wildlife identification. The report covering the first quarter of calendar year 2016 for this program describes progress made toward objectives and identifies challenges and corrective action. For example, the report states that students participated in crime scene processing, surveillance, undercover operations, interviewing, raid planning, and case presentation exercises – all reflective of a specific performance measurement objective. The report also identifies challenges such as securing role players for exercises and proposes using FWS instructors and ILEA staff as a solution. State provided an approximately $2 million grant, running from September 2015 to September 2017, to the Wildlife Conservation Society (WCS) aimed at strengthening the capacity of government and law enforcement officials on wildlife trafficking across key countries in Latin America and Asia. The report covering the first quarter of calendar year 2017 for this program describes progress and activities related to objectives. For example, one activity is intended to strengthen legislative frameworks to combat wildlife trafficking, and the report states that in Vietnam, WCS has been providing inputs to articles of the penal code relevant to wildlife protection. State provided approximately $400,000 to UNODC and the University of Washington for a program running from September 2015 to September 2017 to facilitate forensic DNA analysis of ivory seizures in Africa and Asia. The most recent quarterly report for the program provides information on results associated with objectives. For example, one objective is to conduct DNA analysis on 175 African elephant reference samples, and the report indicates that over 100 samples had been analyzed from countries in Africa. 
USAID’s Evaluation Policy states that performance monitoring reveals whether implementation is on track and that project managers will ensure that implementing partners collect relevant monitoring data. To monitor ARREST, USAID’s implementing partner collected and self-reported data on activities and progress against main goals. For example, the implementing partner reported in 2016 that to strengthen law enforcement, ARREST trained approximately 2,300 people. To reduce consumption of endangered species, ARREST’s iThink campaign at its peak reached more than 40 million people per day, according to the partner’s report. In addition, a contractor analyzed ARREST’s iThink demand reduction campaign results. According to its report, 62 percent of the audience in China had received the message after 6 months. In Thailand, 63 percent of the audience had received the message, while in Vietnam, 75 percent of the audience had received the message. The report also provided suggestions for future work based on lessons learned, such as segmenting the market, incorporating social norms, and increasing the emphasis on social media. USAID designed monitoring elements into and developed plans for its recently initiated programs in Southeast Asia. For example, USAID’s request for proposal (RFP) for Saving Species Vietnam, issued in January 2016 prior to the contract award, identifies key results and illustrative indicators for the main tasks. Specifically, the RFP suggests metrics for reducing consumer demand, such as percentage of target audience that receives the intended message and percentage change in purchases of targeted illegal wildlife products. In addition, the RFP calls for quarterly reports that must include, among other things, performance indicator results against targets. USAID’s RFP for Wildlife Asia also designed monitoring into the program from the start by including similar elements. In May 2017, USAID produced an Activity Monitoring, Evaluation and Learning Plan for Saving Species (MEL Plan), which includes a Results Framework that identifies the purpose of the program and details associated tasks and key results. According to the MEL Plan, the Results Framework was developed based on a range of inputs, including USAID’s Measuring Efforts to Combating Wildlife Crime – A Toolkit for Improving Action and Accountability. The MEL Plan also provides a mix of output and outcome performance indicators with baselines and targets, to be used for communication and decision making. In addition, the MEL Plan calls for Pause and Reflect Sessions, Annual Strategic Reviews, work planning sessions, and other key learning events to reflect on progress and use that knowledge to adapt accordingly. In May 2017, USAID also produced a draft MEL Plan for Wildlife Asia, which provides performance indicators with baselines and targets. In addition, the April 2017 draft MEL Plan for the Philippines Protect Wildlife program contains similar information and, according to USAID officials, the MEL Plan used the action and accountability toolkit to inform the development of CWT metrics. One Evaluation of CWT Activities Has Been Conducted by USAID One USAID CWT program in Asia conducted a midterm evaluation, but State and FWS have not conducted any evaluations. State has not conducted any evaluations of INL CWT activities because none meet State’s criteria for completing an evaluation, including funding and duration thresholds, according to a State official. 
FWS has not conducted any evaluations of its CWT activities in Asia but has established a new CWT-focused branch, which is developing a strategic plan, a framework, and indicators to measure progress and success for CWT efforts. In March 2016, the Task Force released an annual progress report that describes U.S. government accomplishments; however, according to an official, the Task Force does not plan to issue a progress report in 2017 due to vacancies in leadership and because agencies are working on a similar report planned for completion sometime in 2017, in response to the Eliminate, Neutralize, and Disrupt Wildlife Trafficking Act of 2016. USAID’s Evaluation Policy states that for each project, consideration will be given during the design phase to the performance evaluation that will be undertaken. The ARREST program conducted a midterm evaluation, and we assessed it against key elements to determine the quality of the evaluation. We have previously reported that addressing or requiring certain elements provides the basis for a high-quality evaluation. For this analysis, we considered a range of criteria, including the following: Evaluation questions align with program goals. Target population and sampling method are appropriate, given the scope and nature of the evaluation questions. Data collection is appropriate for answering the evaluation questions. Data analysis is appropriate to answer the evaluation questions. We found that overall, the midterm evaluation was acceptable in quality, although it fell short of fully addressing all the key elements. For example, the evaluation generally met the first two elements above. However, the evaluation only partially met the element for data collection and data analysis. For example, the evaluation did not clearly specify how survey recipients had been selected and did not provide precise details about how qualitative data from in-person interviews had been analyzed. USAID did not conduct a final evaluation of ARREST because, according to officials, the timing of a late midterm evaluation was such that its findings were used in the development of the new Wildlife Asia program, and it would not have been cost-effective to conduct a final evaluation, among other reasons. The draft Wildlife Asia MEL Plan identifies plans to prepare for a midterm and final performance evaluation at the middle and end of the program time line, and USAID officials confirmed that they intend to conduct evaluations of the program. The Saving Species MEL Plan indicates that program officials will work in collaboration with USAID to conduct a midterm evaluation and that one objective will be to provide recommendations in order to improve effectiveness and evaluate factors that help or hinder the achievement of expected outcomes and objectives. The MEL Plan also calls for a third-party firm, identified by USAID through a competitive process, to conduct the evaluation in the third year of the program. The draft Philippines Protect Wildlife MEL Plan indicates that the program will conduct a midterm and final evaluation. FWS, State, and USAID Have Identified and Applied Some Lessons Learned FWS, State, and USAID guidance states that agencies should learn from monitoring and evaluation efforts so they can identify what works, what does not work, and why. 
For example, from monitoring the first year of implementation, FWS learned from its domestic campaign that most Americans consider themselves wildlife lovers, but most know little about wildlife trafficking, indicating the need for outreach and education efforts. State officials told us that they took stock of regional CWT activities in Asia to improve program planning. As a result, before launching the next set of CWT courses, INL is conducting a needs assessment to clarify skill gaps, impact potential, and alignment with other activities. In addition, INL is examining approaches to strengthen sustainability such as adding train-the-trainer courses. USAID and implementing partner officials told us that they learned lessons during the implementation of ARREST and applied or plan to apply them to new programs. For example, in response to ARREST’s midterm evaluation recommendation to focus demand reduction efforts increasingly on behavior change communication, officials stated that they adjusted the message of their campaign advertisements to target behavior change and worked to recruit a range of key opinion leaders to maximize reach and impact. USAID intends to carry this lesson over to its new regional program, according to 2016 plans that call for the use of behavior change communication methodologies, as opposed to one-off public relations campaigns, in demand reduction activities. Officials told us that in practice this means future campaigns will focus on specific species, such as pangolins, and target Chinese and Vietnamese consumers who believe pangolin scales can help with lactation. USAID’s implementing partner for Saving Species also identified possible ways to improve the impact and sustainability of CWT training. For example, instead of providing traditional, onetime classroom training, officials plan to establish mentoring and on-the-job training programs in which officials in similar roles teach one another. This facilitates learning and may help identify CWT champions, enhancing sustainability and effectiveness, according to program officials. Conclusions Wildlife trafficking, worth at least an estimated $7 billion annually, continues to push some protected and endangered animal species to the brink of extinction. Although agencies have worked together to combat wildlife trafficking, as outlined in the National Strategy for Combating Wildlife Trafficking Implementation Plan, disagreement on roles and responsibilities has hindered some CWT activities in Southeast Asia. We have previously reported that key issues agencies should consider include whether participating agencies have clarified the roles and responsibilities of participants in collaborative efforts and whether participating agencies have agreed to a process for making and enforcing decisions. Agencies have collaborated on a range of CWT activities, including building law enforcement capacity, raising awareness, and helping spur partner-nation action on CWT. While agencies have applied some practices that can enhance and sustain collaborative efforts, such as establishing joint strategies and defining a common outcome, some officials in Southeast Asia reported a level of disagreement on roles and responsibilities, resulting in the delivery of inappropriate training activities and in the hindering of U.S. efforts to cooperate with a foreign government. 
DOI, State, and USAID are members of the Presidential Task Force on Wildlife Trafficking, which is charged with coordinating agencies' efforts to combat wildlife trafficking. By ensuring that all relevant member agencies have agreed on and clearly defined roles and responsibilities, agencies will have more reasonable assurance that they can effectively marshal the contributions of all agencies to take full advantage of their expertise and resources in addressing CWT issues. Taking steps to clarify specific roles and responsibilities, for example by including them in a CWT country strategy or other document, could help improve coordination, help ensure activities are mutually reinforcing, reduce the risk of fragmented efforts, and maximize the impact of CWT activities in Southeast Asia. Recommendations for Executive Action GAO is making the following three recommendations: The Secretary of the Interior should work with the Task Force to clarify roles and responsibilities of mission staff engaged in collaborative efforts on combating wildlife trafficking in Southeast Asia. (Recommendation 1) The Secretary of State should work with the Task Force to clarify roles and responsibilities of mission staff engaged in collaborative efforts on combating wildlife trafficking in Southeast Asia. (Recommendation 2) The Administrator of the U.S. Agency for International Development should work with the Task Force to clarify roles and responsibilities of mission staff engaged in collaborative efforts on combating wildlife trafficking in Southeast Asia. (Recommendation 3) Agency Comments and Our Evaluation We provided a draft of this report for review and comment to the Departments of Commerce, Homeland Security, the Interior, Justice, State, and the Treasury, and USAID. The Departments of the Interior and State and USAID agreed with our recommendations, and their comments are reproduced in appendixes III, IV, and V, respectively. The Departments of Commerce, the Interior, Justice, and State and USAID provided us with technical comments, which we incorporated as appropriate. In its comments, USAID indicated that it objects to the phrase “bad outcomes,” the word “inappropriate,” and the description related to an implementing partner that may have “overstepped” as used in our discussion of agency collaboration. We attribute that language specifically to certain agency officials, acknowledge differences in agency views, and include perspectives from USAID officials for balance. In its comments, DOI notes that the content in the report that most directly substantiates the recommendations occasionally reads as disagreements involving a few specific activities among a small number of U.S. government personnel. Our findings focus on a limited set of people and activities but reflect a clear opportunity to clarify roles and responsibilities. Moreover, as we mention, the mission in Bangkok coordinates CWT activities across the Southeast Asia region, so efforts to improve collaboration potentially would have a broad effect and benefit. We are sending copies of this report to the appropriate congressional committees and to the Secretaries of Commerce, Homeland Security, the Interior, State, and the Treasury; the Attorney General of the United States; and the Administrator of USAID. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612 or gianopoulosk@gao.gov.
Contact points for our Offices of Congressional Relations and of Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology This report examines (1) what is known about the demand for illegal wildlife and wildlife products in the United States and in Asia, (2) actions agencies are taking to reduce demand for illegal wildlife products in the United States and Asia, and (3) the extent to which the U.S. Fish and Wildlife Service (FWS) within the Department of the Interior (DOI), the U.S. Department of State (State), and the U.S. Agency for International Development (USAID) are assessing the effectiveness of their combating wildlife trafficking (CWT) activities. We limited the scope of this review to the United States and Asia— identified as major markets for the illegal wildlife demand—to complement our 2016 report and to provide geographical diversity in our work. We selected these geographic areas based on our review of reports on demand for illegally traded wildlife and discussions with U.S. government agencies. To address our objectives, we analyzed agency documentation and met with officials from DOI, State, USAID, the Department of Justice, and the Department of Commerce’s National Oceanic and Atmospheric Administration, which have designated roles in the National Strategy for Combating Wildlife Trafficking Implementation Plan to lead or participate in efforts to reduce illegal wildlife demand; the Department of Homeland Security, which has a role in enforcement and capacity-building efforts; and nongovernmental organizations (NGO) that focus on combating wildlife trafficking. We conducted fieldwork in Miami, Florida; China; Hong Kong; Thailand; and Vietnam. We selected these locations using a combination of criteria: (1) Since fiscal year 2014, the location has received at least $1 million in U.S. government funding for efforts related to CWT; (2) CWT activities are under way in the location; and (3) the location has the presence of at least two U.S. government agencies conducting CWT work. This sample is not generalizable to all the locations in which the United States has CWT-related programs. While in each location in Asia, we interviewed officials who played a role in CWT activities, which included officials from State, USAID, and the Departments of Homeland Security and the Interior. We also interviewed officials from host governments responsible for the management of natural resources and parks and representatives from NGOs, some of which were involved in implementing U.S. government programs related to awareness raising, law enforcement, and other CWT objectives. To describe what is known about the demand for illegal wildlife and wildlife products in the United States and in Asia, we reviewed reports on wildlife trafficking produced by United Nations organizations, the Organisation for Economic Co-operation and Development, and NGOs about the demand for these products in our locations of interest. We also reviewed surveys conducted for programs partially or fully funded by U.S. agencies that asked questions about purchasing behaviors for these products in the United States, China, Vietnam, and Thailand. These reports were either recommended to us by officials we interviewed or had been identified during our prior work on the supply of wildlife products. 
We reviewed the methodologies described in the reports and surveys and determined they were sufficiently reasonable for providing examples of wildlife and wildlife products traded and consumed and drivers for consumption in China and countries in Southeast Asia. However, it was beyond the scope of this review to determine the reliability of the underlying data. Many of these reports depend heavily on seizure data, which have limitations. The amount and location of seizures depend on law enforcement efforts, efficacy of law enforcement efforts, presence of illicit trade, and other factors, which are difficult to isolate. Additionally, we analyzed national seizure data from the FWS’s Law Enforcement Management Information System to report on wildlife confiscated in the United States. To assess the reliability of these data, we interviewed agency officials, reviewed documentation about the data, and conducted basic logical tests. We reviewed the 42,100 seizure records that FWS provided for logical consistency and removed a few hundred records for which we found duplicative, unknown, or blank values. Overall, we determined the data are sufficiently reliable for the purposes of identifying wildlife products seized between fiscal years 2007 and 2016. Data on seizures may not be indicative of underlying trends in trade and consumption, as they are dependent upon factors such as enforcement and techniques used by those importing the goods. To gather perspectives on demand for illegally traded wildlife in China and Southeast Asia, during our field visits to China, Hong Kong, Thailand, and Vietnam, we interviewed officials from DOI, State, USAID, the Department of Homeland Security, and officials at foreign ministries, NGOs that are implementing partners for U.S. agencies or have cooperated with U.S. agencies on CWT activities, and one company. We interviewed the company for illustrative purposes. To examine actions agencies are taking to reduce demand for illegal wildlife products in the United States and Asia, we interviewed relevant officials and reviewed information, including agency and implementing partner documentation of CWT-related projects, programs, and grants. We also analyzed how agencies combating wildlife trafficking in Southeast Asia are applying selected practices that can enhance and sustain collaborative efforts. As we have previously reported, such practices include establishing mutually reinforcing or joint strategies, defining and articulating a common outcome, and agreeing on roles and responsibilities. In addition, we conducted fieldwork at the Port of Miami and interviewed U.S. government officials at this location to obtain insights on U.S. government activities. We selected the Port of Miami because it has been the site of large-scale CWT operations, and agency officials identified Miami as a hub for wildlife trade and an illustrative example of U.S. government CWT operations. We also conducted fieldwork in China and Vietnam, where we visited rescue centers and interviewed host government officials and NGO representatives. To examine the extent to which FWS, State, and USAID are assessing the effectiveness of their CWT activities, we selected programs to analyze, spoke with agency officials, and reviewed documentation from the programs selected. We included programs that had started, finished, or been ongoing from the beginning of fiscal year 2015 to the end of fiscal year 2016 and that are or were solely dedicated to CWT. 
Specifically for State, programs must have been identified by its Bureau of International Narcotics and Law Enforcement Affairs as a discrete activity that contributed to CWT and must have been at least 3 months into implementation. Specifically for USAID, programs must have (or have had) funding greater than $1 million. To assess agency monitoring practices, we analyzed agency guidance on monitoring and examined selected programs as illustrative examples of how agencies applied their own guidance. To assess evaluation practices, we assessed a USAID midterm evaluation against key elements to determine quality. Two social science analysts independently assessed this evaluation using the same criteria, methods, and procedures that we developed for GAO-17-316. The analysts met and reconciled any initial differences in their assessments. We conducted this performance audit from October 2016 to October 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Photographs Related to Combating Wildlife Trafficking in the United States and Asia The following photographs (see figs. 8-26) were taken by GAO staff during field visits to Miami, Florida; Beijing, China; Hong Kong; Bangkok, Thailand; and Hanoi, Vietnam. GAO observed and photographed the following: shipment inspections conducted by U.S. Fish and Wildlife Service inspectors at the Port of Miami; examples of wildlife that are traded in the United States; examples of wildlife and wildlife products that have been seized by the U.S. Fish and Wildlife Service; examples of seized wildlife and wildlife products displayed in antiwildlife trafficking awareness campaigns at the Hartsfield–Jackson Atlanta International Airport; Beijing Capital International Airport; Hong Kong International Airport; Suvarnabhumi Airport, Bangkok, Thailand; Chatuchak Market in Bangkok, Thailand; and a highway in Hanoi, Vietnam; wildlife at the Beijing Rescue and Rehabilitation Center; the Endangered Primate Rescue Center, Cuc Phuong National Park, Vietnam; and the Carnivore and Pangolin Rescue Center, Cuc Phuong National Park, Vietnam; and shops that sell ivory products in Hong Kong. To view these photographs online, please click on this hyperlink. Appendix III: Comments from the Department of the Interior Appendix IV: Comments from the Department of State Appendix V: Comments from the U.S. Agency for International Development Appendix VI: GAO Contact and Staff Acknowledgments GAO Contact Kimberly M. Gianopoulos, (202) 512-8612, or gianopoulosk@gao.gov. Staff Acknowledgments In addition to the individual named above, Judith Williams (Assistant Director), Marc Castellano (Analyst-in-Charge), David Dayton, Martin De Alteriis, Neil Doherty, Mark Dowling, Michael Hoffman, and Jasmine Senior made key contributions to this report.
Why GAO Did This Study Wildlife trafficking—illegal trade in wildlife—is estimated to be worth $7 billion to $23 billion annually and, according to State, continues to push some protected and endangered animal species to the brink of extinction. In 2013, President Obama issued an executive order that established an interagency Task Force charged with developing a strategy to guide U.S. efforts to combat wildlife trafficking. GAO was asked to review U.S. agencies' CWT efforts. GAO's September 2016 report on wildlife trafficking focused on supply side activities ( GAO-16-717 ). This report focuses on demand side activities and examines, among other things, (1) what is known about the demand for illegal wildlife and wildlife products in the United States and in Asia and (2) actions agencies are taking to reduce demand for illegal wildlife products in the United States and in Asia. GAO reviewed information from U.S. agencies and international and nongovernmental organizations and interviewed U.S. officials in Washington, D.C., and Miami, Florida, and U.S. and foreign government officials in China, Thailand, and Vietnam. What GAO Found In the United States, China, and countries in Southeast Asia, there is diverse demand for illegally traded wildlife, according to data, reports, and various officials. The Department of the Interior's (Interior) U.S. Fish and Wildlife Service (FWS) has seized a variety of wildlife at U.S. ports, such as coral for aquariums, conch meat for food, seahorses for medicinal purposes, and crocodile skin for fashion items. In China and Southeast Asian countries, reports and officials have identified seizures and consumption of illegally traded wildlife products such as rhino horn, elephant ivory, pangolins (shown below), turtles, and sharks, among others, used for purposes such as food, decoration, pets, or traditional medicine. U.S. agencies are taking actions designed to reduce demand for illegal wildlife, including building law enforcement capacity and raising awareness, but disagreement on roles and responsibilities has hindered some combating wildlife trafficking (CWT) activities in Southeast Asia. FWS inspects shipments in the United States and facilitates law enforcement capacity building with partner nations overseas. The Department of State (State) conducts diplomatic efforts, some of which contributed to a joint announcement by China and the United States to implement restrictions on both countries' domestic ivory trade. The U.S. Agency for International Development (USAID) works with local organizations abroad to support programs intended to reduce wildlife demand, strengthen regional cooperation, and increase law enforcement capacity. Several other agencies also contribute expertise or resources to support various demand reduction activities. Certain practices can enhance and sustain collaborative efforts, such as establishing joint strategies, defining a common outcome, and agreeing on roles and responsibilities. GAO found that agencies applied the first two practices but could improve with regard to agreement on roles and responsibilities in Southeast Asia. For example, although the National Strategy for Combating Wildlife Trafficking Implementation Plan designates various Task Force agencies to lead or participate in achieving CWT strategic priorities, it does not define specific roles and responsibilities at the working level. Agencies have different views on roles and responsibilities in Southeast Asia. 
According to some officials, this disagreement resulted in inappropriate training activities and hindered U.S. cooperation with a host nation. More clearly defining roles and responsibilities would enhance agency collaboration. What GAO Recommends GAO recommends that Interior, State, and USAID work to clarify roles and responsibilities for staff collaborating on combating wildlife trafficking efforts in Southeast Asia. Agencies agreed with GAO's recommendations.
Background Civil aviation in the United States can be generally divided into two broad categories—general aviation and commercial aviation. All civilian pilot students undergo their initial pilot training in the general aviation sector, which comprises all aviation activities other than military and commercial airlines. Once hired in the commercial aviation sector for businesses that carry passengers or cargo for hire or compensation, pilots may receive additional, employer-specific training. FAA is responsible for regulating the safety of civil aviation in the United States, including administering pilot certification (licensing) and conducting safety oversight of pilot training. Regulations for initial pilot training and certification are found in two parts of the Federal Aviation Regulations—pilot training requirements and requirements for obtaining a pilot school certificate. Pilot training requirements: These regulations prescribe the minimum training, knowledge, and experience requirements for acquiring a private, commercial, or airline transport pilot certificate, and for becoming a certificated flight instructor (CFI). Individual flight instructors can provide pilot training to individuals under these regulations, and the training is not subject to direct FAA oversight beyond the initial flight instructor certification and subsequent renewal. Requirements for obtaining a pilot school certificate: These regulations prescribe requirements pilot schools must meet to obtain an FAA certificate and the general operating rules applicable to a school's holding a certificate. FAA-certificated schools are required to meet prescribed standards with respect to training equipment, facilities, student records, personnel, and curriculum. Schools' pilot program curriculum can vary in content, but FAA provides core training guidelines that schools must follow to receive a certificate. To ensure safety, FAA requires its inspectors to conduct on-site inspections of each FAA-certificated school at least once a year, focusing on pilot school operations and the airworthiness of training aircraft. Schools that provide initial pilot training generally fall into three categories: (1) collegiate aviation schools, (2) non-collegiate vocational pilot schools, and (3) non-collegiate, instructor-based pilot schools. Collegiate aviation schools that provide initial pilot training typically offer a 2- or 4-year undergraduate degree in an aviation-based major along with the pilot certificates and ratings necessary to become a commercial pilot. All pilot schools must comply with FAA's pilot training requirements, but some may elect to become FAA-certificated as well. Instructor-based schools offer flexible training environments where the training sequence can be altered to meet specific students' needs and time commitments. Upon completion of the training, the students can obtain the pilot certificates for which they were trained, as long as they pass FAA's tests. FAA-certificated vocational schools do not allow flexible training environments, as the training sequence outlined in the curriculum cannot be altered. FAA requires annual inspections of these schools, unlike flight instructor-based schools. As we have previously reported, it takes years of training to meet FAA's certification and aeronautical experience qualifications to become an airline pilot. Once cleared by a medical examination, an individual may obtain a medical certificate and a student pilot certificate from FAA.
Pilot students may then begin training, acquiring the knowledge and flight training to obtain a private pilot certificate, instrument rating, commercial pilot certificate, and multi-engine rating (see fig. 1). To be eligible for hire as either a captain or first officer for an airline, individuals must also obtain an airline transport pilot (ATP) certificate in addition to the other certificates and ratings. In July 2013, FAA began requiring all first officers to have an ATP certificate, which requires 1,500 hours of flight experience. Pilots with fewer than 1,500 hours can obtain a “restricted-privileges” ATP certificate (R-ATP), under which specific academic training courses can count toward the required hours of total flight time. FAA made this change for airline first officers following the 2009 Colgan Air Inc. crash in New York, and subsequent legislation that required FAA to modify, among other things, first officer qualifications. In our 2014 report, FAA and industry stakeholders estimated that it could take an additional 1 to 2 years for pilots coming out of school to meet the 1,500 hour requirement. Consistent with airline representatives’ views from our prior report, regional airline association representatives have recently cited the revised first officer training requirements and several other factors as contributing to a tight pilot labor market. By increasing the minimum number of required flight hours for a first officer, entry into the airline pilot profession may take longer, which may decrease the pool of eligible pilots that mainline and regional airlines can hire as a first officer. In addition, as we previously reported, the civil aviation industry has been a historically volatile industry because demand for air travel is sensitive to economic conditions, as well as political, international, and even health-related events. After several years of industry contraction during the 2007-2009 economic recession, demand for air travel has increased since 2012, and FAA projects continued future growth. In addition, since 2014, pilot retirements have been increasing, further tightening the labor market, according to one study. That study forecasts between 2,000 and 3,000 annual mandatory age retirements from the mainline airlines between 2018 and 2021. According to the Bureau of Labor Statistics, most of the newly hired pilots in the next 10 years will be replacing retiring pilots. While Some Information on Collegiate Aviation Schools’ Pilot Degree Programs Is Available, Enrollment and Graduation Data are Limited Collegiate Aviation Schools Are Located across the Country and Offer Different Types of Pilot Degree Programs We identified 147 U.S. colleges and universities that offered at least one professional pilot degree program in academic year 2015-2016. These collegiate aviation schools are located throughout the country, as shown in figure 2. They may offer pilot programs within different academic departments, such as aviation or business. Within a department, pilot programs may be offered as a stand-alone program, as an integral part of a larger major, such as flight education or aviation management, or as a specialty or track within a major. Professional pilot degree programs at collegiate aviation schools may vary in several ways: School type: About three-quarters of collegiate aviation schools are public (110 out of 145), while the remainder are either private non- profits or private for-profits, according to Education’s data (see fig. 3). 
Program degree length: A majority of collegiate aviation schools offer 4-year degree programs, as shown in figure 3. Program degree length may affect how long it takes pilot students to meet FAA’s requirements and their career options once they complete training. For example, pilot students in 2-year degree programs may complete the program and acquire a commercial pilot certificate and ratings in less time than the 4-year degree program, which may save the students time and money. However, according to associations representing pilot training providers and pilots, mainline airlines prefer pilots with a 4-year degree. In addition, representatives from one mainline airline told us that the airline requires a 4-year degree for employment as a pilot. Regardless of which school and degree program a pilot student graduates from, all pilot students must pass the same knowledge and flight tests to obtain pilot certificates and are, by FAA’s standards, eligible for the same career opportunities. FAA Regulations and academic curriculum: Forty-six collegiate aviation schools we identified operate their pilot programs solely under FAA’s pilot training requirements. The remaining 101 collegiate aviation schools’ pilot programs are certificated by FAA under FAA’s pilot school requirements. As previously discussed, FAA-certificated schools must meet prescribed standards, have structured programs, and FAA must approve their pilot program’s curriculum. In addition, each pilot program’s academic curriculum may differ, though all must meet FAA’s pilot training requirements and, if the school is certificated, FAA’s pilot school requirements. R-ATP authorization: Only FAA-certificated collegiate aviation schools may apply to FAA for authority to certify eligible graduates for an R- ATP certificate with a reduced number of flight hours. Since FAA promulgated the new first officer qualification rule and established the R-ATP certificate in 2013, FAA has issued R-ATP authorizations to more schools each year. As of August 22, 2017, 86 collegiate aviation schools hold R-ATP authorizations. In addition, the number of R-ATP certificates FAA has issued to eligible graduates each year has steadily increased, from 37 in 2013 to 2,190 in 2016. The number of R-ATP certificates issued in 2016 represented about 18 percent of all ATP certificates. The reduced flight-hour eligibility may save students time and money on their path to becoming a professional pilot, depending on how they gain flight experience, which may motivate more students to consider attending collegiate aviation schools that are authorized for R-ATP certificates, compared to other training alternatives. Aviation Accreditation Board International accreditation: Schools’ professional pilot programs may choose to pursue program accreditation in addition to the school’s institutional accreditation. Thirty-two collegiate aviation schools we identified have pilot programs accredited by the Aviation Accreditation Board International and an additional 4 schools have pilot programs that are candidates for accreditation, as of December 27, 2017. The collegiate aviation schools we identified require that students complete training that includes both classroom (ground) and flight training. Ground school aims to provide students with the required aeronautical knowledge and cognitive skills necessary to perform the tasks required to become a pilot. Flight training focuses on teaching how to manipulate the controls of and safely operate an airplane. 
Most schools (89 of 147) conduct their own flight training using university-owned or – leased aircraft and university employed CFIs (in-house flight training). The number of CFIs employed by collegiate aviation schools varies and is one of the primary determinants of a school’s enrollment capacity. The remaining 58 schools contract out their flight training to one or more pilot schools or allow students to complete their flight training at a pilot school of the student’s choosing. Schools that provide in-house flight training operate at a relatively small number of all domestic airports, which vary greatly in size as measured by annual passenger enplanements (see fig. 4). Approximately 69 percent of these schools operate at non-primary airports—those with fewer than 10,000 passenger enplanements a year. Flight training may comprise a large proportion of an airport’s activity, particularly at smaller airports, according to representatives from seven schools and two airport authorities. The remaining 28 percent of the schools that provide in-house flight training operate at primary airports with over 10,000 passenger enplanements a year. There are advantages to operating at small and large airports. Representatives from three schools and five stakeholders representing flight training providers, airports, and pilots told us that operating out of smaller airports may be advantageous because they are less crowded, a condition that can save waiting time for take-offs and allows students to practice certain maneuvers that may be more difficult to perform at larger airports. Conversely, according to representatives from two schools, two pilot training provider associations, and one airport, operating at larger airports can be advantageous because students can learn to fly in the controlled environment that airline pilots will eventually fly in. Pilot Student Enrollment and Graduation Data Are Limited For several reasons, there are no comprehensive data on pilot student enrollment at collegiate aviation schools. First, because non-certificated schools are not subject to periodic FAA inspections, FAA does not collect any enrollment data for these schools. Second, enrollment data are available for only some FAA-certificated schools because reporting that data is optional for those schools during FAA’s certification and inspection process. In addition, FAA does not verify the data to determine their accuracy. As previously noted, FAA is responsible for regulating the safety of civil aviation in the United States. As such, according to FAA officials, FAA requires data collection when such a requirement serves a safety purpose, such as data required for pilot school certification and FAA oversight. FAA officials told us that other data on collegiate aviation schools, such as enrollment numbers, do not serve FAA’s primary safety purpose. The size of collegiate aviation schools appears to vary greatly. Although voluntary, almost all FAA-certificated collegiate aviation schools submitted enrollment data to FAA. According to FAA’s data provided to us on October 5, 2017, 92 FAA-certificated schools had reported average yearly enrollment data for their pilot programs. Reported enrollment at these FAA-certificated collegiate aviation schools varied greatly—from 5 professional pilot students to 850. Despite this wide range, most (66) of these schools reported that they enrolled 100 students or less in their pilot programs. 
A majority (15 of 18) of representatives from selected collegiate aviation schools noted an increase in enrollment over the past 5 years. Additionally, the data on graduations from professional pilot programs are not comprehensive. Education requires schools, including collegiate aviation schools, to report how many students they graduate annually. School officials classify and report completed degrees by program type to Education using the agency’s classification system. One of Education’s program codes—for “Airline/Commercial/Professional Pilot and Flight Crew”—appears to best capture graduates from professional pilot programs. Education’s data for professional pilot degrees awarded by collegiate aviation schools under this code totaled 1,356 in academic year 2015–2016. However, of the 147 collegiate aviation schools we identified for academic year 2015–2016, 72 reported pilot student graduates using the code. This might be because collegiate aviation schools may report their pilot student graduates under other program codes, such as “Aeronautics/Aviation/Aerospace Science and Technology, General” and “Aviation/Airway Management and Operations.” According to an Education official, while the agency expects schools to provide precise reporting of graduations from each degree program, he said it is possible that some school officials may not perceive their programs consistently with Education’s program classifications, despite specific definitions for each program category. Because pilot student graduates could be reported under a number of aviation-related program codes in Education’s system, the number of professional pilot graduates could be higher. According to Education’s data, the number of professional pilot degrees awarded by collegiate aviation schools under the Airline/Commercial/Professional Pilot and Flight Crew code fluctuated from year to year between academic year 2010–2011 and 2015–2016. Almost half of the representatives from our selected collegiate aviation schools (8 of 18) noted increased pilot student graduations over the past 5 years. The number of these graduations could continue to increase in the next few years since, according to representatives from seven schools, student enrollment generally responds to industry need and the perception of a more stable career pathway. According to one of these representatives, graduations increase with a lag relative to the increased industry demand and student enrollment, given the time it takes to complete the degree program. Given the observations from school representatives of increasing enrollment, graduations may continue to increase as well. Flight Instructor Turnover, Cost of Training, and Other Factors Affect Collegiate Aviation Schools’ Ability to Produce Pilots Retaining Flight Instructors Is a Key Challenge for Collegiate Aviation Schools’ Ability to Produce Pilots Selected school and other aviation industry representatives we spoke with generally agreed that retaining and recruiting flight instructors is one of the key challenges facing collegiate aviation schools. Representatives from nearly all (16 of 18) of the schools identified recruiting and retaining flight instructors as a great or moderate challenge and a majority stated that it was their greatest challenge affecting their ability to produce pilots (see app. I for a summary of the responses.). 
According to representatives from 3 aviation industry stakeholders, in the current environment some schools are unable to recruit and retain enough flight instructors to train all the pilots that they otherwise have the resources to accommodate in their pilot programs. To illustrate, representatives from 2 schools reported an inability to accept some qualified students because they did not have sufficient flight instructors. Meanwhile, representatives from 4 other schools said they have been able to hire enough new instructors to keep up with flight instructor attrition. In addition to presenting a management challenge, instructor turnover may hinder training effectiveness. For example, one pilot association representative told us that the quality of instruction tends to be lower when students are routinely subject to new instructors since there is little instructional continuity. Representatives of 6 of the collegiate aviation schools we interviewed said they recognize that instructor turnover is unavoidable because most pilots do not pursue flight instruction as a long-term career. Regardless, the rate of turnover in recent years has increased, according to selected school and other aviation industry representatives. As previously discussed, school representatives told us that most pilots use flight instruction as a stepping stone to accrue the required flight time to become an airline pilot, which commands a higher salary and greater prestige than flight instructor positions. Flight instructors generally seek employment with an airline as soon as they are eligible, according to most school representatives (15 of 18) and other stakeholders we spoke with. According to two aviation industry stakeholder representatives, the career progression of civilian-trained pilots from flight instructor to commercial airline pilot has typically worked in this way. However, stakeholders have stated that in recent years, airline industry growth, increasing pilot retirements, and other factors previously discussed have caused commercial airlines to accelerate pilot recruitment, ultimately causing pilots to move through the instructor ranks more quickly. Regional airlines now hire qualified pilots as soon as they accrue the minimum hours required by FAA, according to representatives from one airline pilots association. According to one study, in the mid-2000s most of the larger regional airlines set minimum flight-hour requirements for first officer applicants of 800 to 1,000 hours, which were well above the FAA requirements at the time. Furthermore, applicants needed an even higher number of hours to be competitive for those positions prior to that time—between 1,500 and 2,000 hours, according to representatives of a pilots’ association. Recruiting and retaining flight instructors with more advanced qualifications, such as instructors qualified to train other pilots to be flight instructors and chief instructors, can be a particular challenge for collegiate aviation schools: Flight instructors qualified to train flight instructors: FAA requires flight instructors to have a minimum 2 years of instructor experience before they may train other pilots to obtain their CFI certificate. Representatives from almost half (8 of 18) of collegiate aviation schools reported challenges with retaining flight instructors long enough for them to meet that requirement. 
According to some school representatives, flight instructors typically accrue the minimum hours required to qualify for their ATP or R-ATP within 2 years or soon afterward. The resulting attrition of experienced flight instructors can therefore hamper schools’ ability to train enough pilots to become flight instructors, an ability that is crucial for turning out the next generation of instructors and pilot students. Chief Instructors: FAA requires certificated schools to have a chief instructor who meets minimum regulatory qualifications, such as at least 2,000 hours of flight time as “pilot-in–command.” Representatives from two schools told us that because of high instructor turnover, few instructors meet these qualifications and the schools find it challenging to recruit qualified chief instructors. Four school representatives and two other aviation stakeholders we interviewed noted that the revised first officer requirements have helped collegiate aviation schools retain flight instructors. As previously discussed, these revised requirements increased the minimum number of flight hours a pilot must have to become a first officer, so instructors continue to instruct longer than they might have otherwise. The school representatives noted that while they are still experiencing high flight instructor turnover the situation would be more challenging without the new requirements. In addition, representatives from two large collegiate aviation schools stated that when there is a high demand for pilots, they would not be able to recruit and retain any flight instructors in the absence of FAA’s first officer requirements. As shown in table 1, several of the collegiate aviation schools we interviewed have taken some actions to address the challenge of recruiting and retaining flight instructors. At least 6 regional airlines offer cadet programs, which may provide additional incentives for graduates to remain at their alma mater as flight instructors until they meet FAA’s first officer qualification requirements, according to school representatives we spoke to. These programs may include incentives such as bonus pay for a number of flight hours, health benefits, or tuition reimbursement. Students who sign onto the cadet programs typically accept a provisional employment offer and are expected to work for the airline upon obtaining the number of hours necessary for the ATP certificate and completing an airline’s new hire training. Representatives from two schools said that few students participated in these programs, attributing lower participation to students who may not want to commit to one airline. In addition to actions that schools can take to retain flight instructors, school representatives suggested additional actions that would require cooperation from airlines. Representatives from one state university told us that the school negotiated an agreement with one airline to initially hire its graduates as part-time pilots, allowing the pilots to continue to work part-time as flight instructors. The school is attempting to go one step further by negotiating agreements whereby airlines will not hire its instructors until the school is ready to relinquish them. According to the school’s representatives, two regional airlines have recognized that keeping instructors at the school longer could be to their benefit, increasing the school’s capacity to produce more pilots that the airlines will then hire. 
Another school representative suggested that airlines might consider loaning out their pilots to instruct for schools, but a representative from an airline association said that airlines do not have extra personnel to spare. Representatives of a pilot school said they are working with airlines to change the seniority system so that pilots can get their seniority number while they are instructors, which could reduce the strong incentive to become an airline pilot as quickly as possible. School representatives and a stakeholder described additional actions that could be taken to address this issue, including encouraging students to obtain their CFI, encouraging retired airline pilots to instruct, and raising the profile of the flight instructor profession as a possible career path. Collegiate aviation schools may require their students to obtain a CFI to graduate. Those schools that do not require a CFI may produce fewer graduates who are qualified to instruct. A representative from one school told us that it is now encouraging students to obtain their CFI as a way to increase the number of potential flight instructors. Representatives from three industry associations said the FAA should consider changing its requirement for instructors to have 2 years of instructing experience before they may train other pilots to obtain their CFIs. In addition, in 2017 the FAA Aviation Rulemaking Advisory Committee issued a report recommending that FAA permit completion of an FAA-approved standardized course at FAA-certificated schools as an alternative to the 2-year experience requirement. According to FAA officials, the agency is drafting a proposed regulatory change to allow appropriately qualified flight instructors who have met proficiency requirements to train other pilots to obtain a CFI. The Cost of Flight Training Is a Challenge for Some Colleges in Recruiting and Retaining Students There was general agreement among the majority of school representatives we interviewed that in the last 5 years more students have shown interest in the pilot profession by applying for and enrolling in pilot programs at collegiate aviation schools. Representatives from eight schools and one aviation industry stakeholder noted that students may be interested in becoming pilots because there appear to be more pilot career opportunities and a greater likelihood of a secure and lucrative career path. Some airlines have created career path programs that document the requirements to move along the career path from pilot school to a particular regional airline and on to a particular mainline airline. According to an association representing pilots, they have done so to encourage more students to enter the pilot profession. Nonetheless, representatives from nearly all schools we interviewed identified the cost of a professional pilot degree program as a great (10 schools) or a moderate (6 schools) challenge to recruiting and retaining students. While high education costs are not unique to pilot programs, these programs can be particularly expensive and therefore unaffordable for many students. As previously reported, professional pilot students incur flight training “lab fees” in addition to general college tuition and fees, which together often exceed $100,000. Schools' tuition and fees can vary significantly. Factors affecting cost include whether the school is public, private non-profit, or private for-profit; whether the school offers a 2-year or 4-year program; and the student's resident status.
According to Education’s data, annual in-state tuition at public collegiate aviation schools we identified ranges from approximately $1,100 to $13,000. However, annual out-of-state tuition at a public 4-year program can cost as much as approximately $28,800. Private school tuition can cost more. For example, one 4-year private for-profit collegiate aviation school lists estimated annual undergraduate tuition of nearly $36,000, not including room and board or flight training costs. Flight training costs also vary considerably. According to the University Aviation Association’s 2016 directory of collegiate aviation schools, a majority of pilot programs (27 of 45) have total approximate flight training costs of more than $50,000, with an upper cost of about $81,000. Flight training costs may vary, depending on the school requirements, student interest, and aptitude. Pilot program curriculum may differ and some students may choose to take additional classes. Each additional certificate and rating adds to the total cost of the training. Also, the time required for students to complete their certificates and ratings varies. Compounding the issue of cost is that the maximum federal financial aid available to eligible students is well below the full cost of a collegiate flight education, a factor that is also not unique to collegiate aviation students. For academic years 2017–2018 and 2018–2019, the maximum federal Pell Grant award is currently $5,920, and annual federal loan limits range from $5,500 up to $12,500 depending on the student’s year in school, dependency status, and other factors. Most students need to either use family resources or take out private loans to pay for the total cost of a pilot program, according to representatives from four schools. Not all students have the means to do so, as private lenders may require a co-signer with good credit and a minimum income level. Also, representatives from two schools said that some students who initially secure private loans for flight training are unable or unwilling to secure loans needed later on to complete this training, causing them to leave the pilot program. This financing challenge may pose a significant barrier for lower income students to enter the pilot profession. There are lower cost alternatives to collegiate aviation schools, though they are not entirely equivalent. Students may obtain a flight education and achieve the same FAA certificates and ratings from a non-collegiate pilot school and incur flight training expenses without the added cost of college tuition. As previously discussed, a pilot with non-collegiate flight training could be eligible for the same employment opportunities with regional airlines, but according to five stakeholders, airlines prefer or have typically hired pilots with a 4-year degree. Military service is another lower cost alternative for flight training, as service members are compensated for their time while they are training. However, one school representative noted that service members may enlist in the military with the intention of pursuing flight training, but they are not guaranteed to receive a flight assignment. Representatives from two stakeholders told us it is not possible to significantly reduce the cost of flight training because it is inherently expensive, and four school representatives said that costs are increasing. 
One approach to controlling costs for students is to make it easier for them to transfer from public 2-year pilot programs to 4-year programs, since public 2-year programs are typically less expensive. A representative from a state university told us that he is developing a degree completion program for professional pilot students from U.S. 2-year colleges. This program would enable students to complete their bachelor's degree online with the university after they have obtained an associate's degree in flight. Similarly, a community college has transfer agreements with several 4-year universities, and most of its students aim to obtain a 4-year college degree. We previously found that when colleges provide their students with information on transfer agreements, they help students save on tuition costs by enabling students to predict which credits will transfer and reducing the likelihood that they will need to repeat coursework. Two schools have opened satellite campuses for their flight programs, and two other schools are considering that option, both to expand their capacity and to provide options for students to receive flight training while living closer to home, according to school representatives. Other actions schools have taken focus on ensuring that students are able to pay for the program and offering assistance with costs where possible. Representatives of three schools told us that they are raising money for departmental scholarships, and a representative of one school said the school raises awareness about outside scholarships that may be available to its students. A representative from a community college said that there are scholarships available for women and minorities. According to one industry representative, there are not enough women and minorities in aviation, which will negatively affect the supply of future pilots. One state university offers in-state tuition for flight students who are residents of nearby states, with the aim of both reducing some students' costs and increasing enrollment at the school. Representatives of four schools told us that they emphasize communication with potential students about costs before they enroll to improve pilot student retention. In addition, one school we spoke with requires students to pay their flight training fees for each certification upfront in one lump sum to ensure that students will be able to complete the training. Initiatives to assist students with funding and reduce costs of flight training have been in place for a long time with limited impact, according to one flight training provider association. Other aviation stakeholders noted that regional and mainline airlines could have a greater effect than previous efforts by working together. For example, airlines could provide scholarships and subsidize students' flight training while students are still in school. The airlines could also work together as an industry to provide scholarships to students. However, as one aviation association noted, airlines are reluctant to provide scholarships to students who are likely to fly for a competitor. Representatives from two stakeholders suggested that increases to limits on federal student loans could provide additional resources to help students pay for flight training costs. Even if additional actions are taken to help defray some of the educational costs, some students may still not be able to afford the cost of collegiate aviation schools.
A Variety of Other Factors May Present Challenges for Some Collegiate Aviation Schools Some selected school representatives also cited other challenges, though these challenges were cited by fewer representatives, and most of the representatives characterized these challenges as moderate or slight. Purchasing and maintaining aircraft. Representatives from 13 schools said that purchasing or maintaining aircraft, or obtaining the requisite purchase approvals can be challenging. New single-engine training aircraft could cost more than $300,000, while a new multi-engine aircraft can cost around $750,000. Purchasing older, used equipment is one possible way to defray aircraft costs, but older equipment requires more time offline for maintenance. Representatives from two schools stated that aircraft used for training requires extensive scheduled and unscheduled maintenance, which can interfere with their ability to train students. Airport infrastructure and airspace constraints. When asked about challenges related to airport infrastructure, representatives of six schools identified challenges related to space constraints. Issues included insufficient space to store and maintain aircraft, insufficient classroom and office space, and crowded airspace that cannot accommodate the desired flight operations to train the number of pilot students they could with their existing resources. Few representatives identified infrastructure availability at the airport as a great (1 school) or moderate (3 schools) challenge, while 6 representatives reported that infrastructure posed only a slight challenge and 7 said it was not a challenge at all. VA education benefit program administration—publication of specific training hours and costs. Representatives from eight schools and two stakeholders expressed some concern about new enforcement of VA education benefit rules from the Post 9/11 GI Bill, as amended by the Post-9/11 Veterans Educational Assistance Improvements Act of 2010. VA issued two policy advisories in 2015 to notify collegiate aviation schools about statutory education benefit policies and bring them into compliance. One policy advisory notified schools that they must publish the specific number of training hours, as well as the specific cost of training, for each flight course, effectively setting a maximum number of training hours and fixed fees for each course taken as part of a standard degree program. According to VA, before the agency issued the policy advisories there was great public and congressional outcry about individual pilot students receiving hundreds of thousands of dollars from VA for their education. VA issued the policy advisories to specify what pilot training activities are appropriate uses of VA money, and under what circumstances. VA funds cannot be used to pay for pilot training to proficiency because that would entail an unlimited amount of funds to be available for an individual’s flight training. Representatives from five selected schools reported that this rule made it difficult to provide efficient and effective flight training for all pilot students. Depending on the program structure, students who cannot finish the course in the set number of hours must either pay out of pocket for additional training or accept a failing grade and take the course again. VA education benefits pay for eligible beneficiaries to repeat the course if needed. 
In contrast, FAA imposes a minimum but not a maximum number of hours per certificate, because the training goal is to achieve a certain level of proficiency for each certificate. One school representative stated that the school allowed its VA education benefit eligibility to lapse because it allowed them the freedom to train students to proficiency without maximum training hours; however, veterans can no longer use their benefits to enroll in that program. VA education benefit eligibility for contracted flight instruction. Representatives of two out of the five schools we interviewed that contract out flight training and one stakeholder reported a challenge concerning a rule described in the second VA policy advisory; the rule places restrictions on collegiate aviation schools that contract out flight training to a non-collegiate school. Previously, veterans received benefits for flight training conducted at non-collegiate pilot schools through the institution of higher learning that contracted out the flight training. However, in its policy advisory VA stated that this practice was not consistent with the rules of the education benefit program because there are different rules for non-collegiate pilot schools; VA benefits cannot be used to pay for training toward private pilot certification at non-collegiate pilot schools. In addition, federal law states that the VA cannot approve the enrollment of an eligible veteran in a course if it involves contracted training that is either otherwise barred from being approved or has not obtained approval on its own. As a result, to remain eligible for VA education benefits, a collegiate aviation school cannot include private pilot certification training provided by a non-collegiate pilot school in its degree program since such training is statutorily barred from approval at the contracted non- collegiate pilot school. Therefore, all students enrolled in such programs must have already earned their private pilot certificate before matriculating in the program, whether they use veterans’ education benefits or not. According to VA, it issued its policy advisory to clarify the statutory limitations of education benefits under the GI Bill relating to private pilot certificate courses. Representatives from two schools said that they are currently not eligible for VA education benefits as a result of this rule, which representatives of one school said has affected the school’s enrollment of veterans. Furthermore, industry stakeholders have expressed concern about greater limits on VA education benefits for flight training based on possible future legislative action. Meanwhile, the U.S. Department of Transportation has announced a new “Forces to Flyers Initiative” with two objectives: (1) to assess the level of interest among veterans in becoming pilots and (2) to help veterans who are not former military pilots to receive the training they need to become commercial pilots. Though representatives from five schools identified this issue as a great challenge, overall its impact is limited because not all schools have students using veterans’ benefits for their pilot programs, and a small percentage of students overall use veterans’ benefits to pay for their education. Agency Comments We provided a draft of this product to the DOT, Education, and VA for comment. DOT provided technical comments, which we incorporated as appropriate. Education and VA declined to provide formal or technical comments. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Transportation, the Secretary of the Department of Veterans Affairs, the Secretary of the Department of Education, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-2834 or vonaha@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Appendix I: Objectives, Scope, and Methodology For our review, we addressed (1) what is known about collegiate aviation schools with professional pilot degree programs in terms of location, types of training programs available, and enrollment; and (2) challenges that affect collegiate aviation schools' ability to produce professional pilots and schools' responses to these challenges. To address both objectives, we reviewed a range of reports from GAO, the Federal Aviation Administration (FAA), the Congressional Research Service, and the Bureau of Labor Statistics: these reports provided general background on a range of topics related to pilot training, such as pilot certification and training in the United States; FAA regulatory training requirements for different levels of pilot certification; types and requirements of pilot training schools; the current supply and demand for commercial airline pilots, and related forecasts; and airport infrastructure financing. Furthermore, we reviewed the Federal Aviation Regulations related to training and certification for pilots under Part 61 and Part 141. We also reviewed provisions of the Airline Safety and Federal Aviation Administration Extension Act of 2010 (Pub. L. No. 111-216) related to "Flight Crewmember Screening and Qualifications" and "Airline Transport Pilot Certification." To determine what is known about collegiate aviation schools, we analyzed several sets of data and interviewed representatives from collegiate aviation schools and other aviation stakeholders. To identify colleges and universities with professional pilot degree programs for fixed-wing aircraft in academic year 2015–2016, we compared FAA's data on FAA-certificated pilot schools as of August 19, 2016; the Aircraft Owner and Pilot Association's list of colleges and universities with aviation programs as of September 19, 2016; and the University Aviation Association's 2016 directory of collegiate aviation schools. These data were the most applicable given the academic year reviewed. We verified schools on all three lists by checking school websites, typically the program's webpage or course catalog detailing degree program requirements. For schools that were included on only one or two of the lists, two staff members independently reviewed school information and categorized the school as inside or outside of our scope. Disagreements between coders were reviewed by a third staff member and resolved through discussion. In a few cases where website information was unclear, a staff member contacted school officials to verify that they offered a professional pilot degree program.
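To make the list-reconciliation step above concrete, the following sketch shows one way the three source lists could be cross-referenced programmatically. It is only an illustration: the school names, the set variables, and the list_count helper are hypothetical, and GAO's actual process relied on manual website checks and coder review rather than any particular script.

```python
# Illustrative sketch of reconciling three lists of schools (hypothetical data;
# not GAO's actual records or tooling). Schools appearing on all three lists are
# tentatively accepted pending website verification; schools on only one or two
# lists are flagged for independent review by two coders, as described above.

# Hypothetical, abbreviated source lists
faa_certificated = {"Univ A", "College B", "Institute C"}
aopa_directory = {"Univ A", "College B", "College D"}
uaa_directory = {"Univ A", "Institute C", "College D"}

all_schools = faa_certificated | aopa_directory | uaa_directory

def list_count(school: str) -> int:
    """Number of source lists (out of three) on which a school appears."""
    return sum(school in source for source in
               (faa_certificated, aopa_directory, uaa_directory))

verify_via_website = sorted(s for s in all_schools if list_count(s) == 3)
flag_for_two_coders = sorted(s for s in all_schools if list_count(s) < 3)

print("Verify via website:", verify_via_website)
print("Flag for independent coder review:", flag_for_two_coders)
```

In practice, school names would also need to be standardized across the three sources before any matching, which is part of why the report describes verification against school websites and adjudication by a third staff member rather than exact list matching.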
To determine the airport and airport types at which schools with professional pilot degree programs operated their flight training, we reviewed information from FAA's National Plan of Integrated Airport Systems, the Aircraft Owner and Pilot Association, and school websites. We also selected and interviewed representatives of six airports of varying types (e.g., medium-hub, small-hub, and non-hub) and in different geographic areas of the country, all of whom had collegiate aviation school tenants. Because we selected the airports as part of a nonprobability sample, our findings cannot be generalized to all airports with collegiate aviation school tenants. To determine what is known about the institution type, college-wide tuition and fees, and graduations at these schools, we analyzed data from Education's Integrated Postsecondary Education Data System (IPEDS). Of the 147 collegiate aviation schools with professional pilot degree programs that we identified, 146 have an IPEDS identification number. According to Education officials, schools with an IPEDS identification number are likely to participate in Title IV financial aid, be accredited, and consequently be monitored by Education through several mechanisms including IPEDS, federal student aid compliance, and accreditation. With respect to institution type, our analysis included degree-granting institutions of the following types: public, private non-profit, and private for-profit, offering either 4-year baccalaureate or 2-year associate's degrees. With respect to tuition and fees, we reviewed both in-state and out-of-state costs that schools reported to Education. Data were not available for academic year 2014–2015 for two collegiate aviation schools we identified. In a few instances, schools offered lower-cost tuition and fees to local students (in-district). For purposes of comparison, we did not include these costs in our report, since not all schools offer in-district discounts. With respect to graduations, we analyzed graduation data for academic year 2015–2016 in 10 aviation-related categories within Education's Classification of Instructional Programs (CIP) for schools we identified as having professional pilot degree programs. We determined that IPEDS data were sufficiently reliable for the purposes of our reporting objectives based on prior testing of the data from these systems and interviews with knowledgeable officials at Education's National Center for Education Statistics. To determine what is known about enrollment at collegiate aviation schools, we analyzed enrollment and flight instructor data voluntarily reported to FAA by some schools between October 2015 and October 2017. Through interviews with FAA officials, we determined that these data were the most complete sources available and, while limited, were sufficiently reliable for the purpose of illustrating the variety in the size of professional pilot degree program enrollment. We also obtained and analyzed FAA's pilot certificate and instrument rating data to identify, for a number of categories, the number of new pilot certificates FAA issued from 2012 through 2016 and the total number of pilot certificate holders for those years. One limitation associated with the database in which FAA stores certificate-holder information is that the agency does not have an active process in place to discover and deactivate deceased pilots. This limitation may lead to an overcount in the number of active pilot certificates.
However, airline transport pilot certificate holders must regularly renew their medical certificates to remain active. We found that the data were sufficiently reliable for the purposes of reporting the number of "restricted privileges" airline transport pilot certificates FAA has issued since 2013. To determine challenges that affect collegiate aviation schools' ability to produce professional pilots, we reviewed documents from, interviewed, and administered a standardized question set to a non-generalizable sample of 18 collegiate aviation schools about their pilot programs and key challenges that affect their ability to produce professional pilots. To select our non-generalizable sample of schools, we used information from FAA, the Aircraft Owner and Pilot Association, school websites, and initial interviews with aviation stakeholders. Based on the schools' geographic location, we selected schools in each of FAA's nine airport regions. To provide a variety of perspectives, we included schools of each institution type (public, private non-profit, and private for-profit) and each program type (2-year and 4-year), including some that were FAA-certificated and some that contracted out flight training. While the sample allowed us to learn about challenges that affect these schools' ability to produce professional pilots, it was designed to provide anecdotal information, not findings that would be representative of all collegiate aviation schools with professional pilot degree programs in the United States. Our initial selection included 20 schools, of which 19 responded to our request for an interview. Of these 19, 18 schools responded to our question set, and representatives of one additional school provided us with general information about their program. In our question set, we asked schools to rate 10 factors that we identified in preliminary interviews as potentially affecting schools' ability to recruit, retain, and train professional pilot students—thereby affecting their ability to produce pilots. Schools rated each factor as a great challenge, a moderate challenge, a slight challenge, or not a challenge to the ability to recruit, retain, and train professional pilot students. After our interviews with officials from the selected schools, we analyzed and aggregated responses to these questions and identified the two factors that schools most frequently cited as the most challenging to their ability to produce pilots; a simplified illustration of this tally appears below. In addition, 3 other factors were cited by multiple schools as a great or moderate challenge. Schools generally cited the remaining 5 factors as a slight or moderate challenge. To describe stakeholders' views of factors that affect collegiate aviation schools' ability to produce pilots and actions that have been or could be taken to address these factors, we reviewed and summarized schools' comments. We also reviewed documents and interviewed FAA officials, airport representatives, and industry organizations representing collegiate and non-collegiate pilot schools, airports, flight instructors, pilots, regional airlines, and mainline airlines, all selected to reflect a range of perspectives about initial pilot training. (See table 4.) In addition, we reviewed documents and interviewed Education and Department of Veterans Affairs officials about regulations and policies related to pilot programs' eligibility for federal student financial aid and the use of veterans' education benefits.
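As a rough illustration of the aggregation described above, the sketch below tallies ratings across responding schools and counts how many rated each factor a great or moderate challenge. The factor names echo those discussed in this report, but the responses list and the individual ratings are invented placeholders, not the selected schools' actual answers.

```python
# Hypothetical illustration of tallying challenge ratings from a standardized
# question set. The ratings below are invented placeholders, not actual responses.
from collections import Counter

# One dict per responding school: factor -> rating
responses = [
    {"flight instructor retention": "great", "cost of training": "great",
     "purchasing and maintaining aircraft": "moderate"},
    {"flight instructor retention": "moderate", "cost of training": "great",
     "purchasing and maintaining aircraft": "slight"},
    {"flight instructor retention": "great", "cost of training": "moderate",
     "purchasing and maintaining aircraft": "not a challenge"},
]

tallies = {}
for school in responses:
    for factor, rating in school.items():
        tallies.setdefault(factor, Counter())[rating] += 1

for factor, counts in tallies.items():
    great_or_moderate = counts["great"] + counts["moderate"]
    print(f"{factor}: {great_or_moderate} of {sum(counts.values())} "
          f"schools rated this a great or moderate challenge")
```

Counting great and moderate ratings together in this way mirrors how the report characterizes the most frequently cited challenges.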
We conducted this performance audit from September 2016 to May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Gerald Dillingham, Ph.D. (Director); Vashun Cole (Assistant Director); Jaclyn Mullen (Analyst-in-Charge); Amy Abramowitz; Danielle Ellingston; Dave Hooper; Delwen Jones; Serena Lo; John Mingus; Natasha Oliver; Malika Rice; Michelle St. Pierre; and Elizabeth Wood made key contributions to this report.
Why GAO Did This Study Collegiate aviation schools are one pathway for initial civilian pilot training in the United States and are a key source of airline pilots. Over the past 5 years, aviation stakeholders have voiced concerns that there is an insufficient supply of qualified airline pilots, citing increased airline pilot retirements, among other factors. The explanatory statement accompanying the Consolidated Appropriations Act of 2017 included a provision for GAO to review aspects of collegiate aviation schools' operations. This report examines: (1) what is known about schools with professional pilot degree programs and (2) challenges that affect schools' ability to produce professional pilots and schools' responses to these challenges. GAO reviewed relevant statutes, regulations, and documents from the FAA, Veterans Affairs, and Education; analyzed FAA's data on flight schools, airports, and pilots; and analyzed Education's degree completion data for the 2015–2016 academic year, the most recent data available. GAO also interviewed representatives from: 18 schools, selected based on factors including program type and location; 6 airports selected based on type and location; and 11 additional aviation stakeholders representing schools, airlines, pilots, airports, and flight instructors, selected to reflect a range of perspectives about initial pilot training. The results of the interviews are not generalizable to all aviation schools and stakeholders. GAO is not making recommendations in this report. On a draft of the report, DOT provided technical clarifications, which GAO incorporated as appropriate. What GAO Found GAO identified 147 collegiate aviation schools that offered professional pilot degree programs in academic year 2015–2016. All pilot students must pass the same knowledge and flight tests to obtain pilot certificates from the Federal Aviation Administration (FAA), but schools' programs vary. For example, 101 of these schools operated relatively more formalized, FAA-certificated degree programs. The other 46 schools operated under a model that provides flexibility and meets FAA requirements but that does not require FAA certification to conduct such training. Total annual pilot-student enrollment and graduation numbers are not known. According to FAA officials, FAA does not require schools to submit enrollment data and does not verify enrollment data that many certificated schools voluntarily submit. Regarding graduation data, schools must classify and report completed degrees by program type to the Department of Education (Education) using that agency's classification system. Education's data indicated a total of 1,356 professional pilot degrees in academic year 2015–2016. Because pilot-student graduates can be classified under a number of aviation-related programs in Education's system, the number of pilot-student graduates could be higher. Flight instructor retention, which has been influenced by the current high demand for airline pilots, and the high cost of pilot training are key challenges that affect schools' ability to produce pilots, according to aviation stakeholders GAO interviewed. Flight instructor retention: Nearly all (16 of 18) selected school representatives cited difficulty recruiting and retaining flight instructors as a great or moderate challenge for schools' ability to train pilots.
According to most school representatives (15) and other selected stakeholders, instructors who aspire to be airline pilots are rapidly accruing the flight hours necessary to qualify and are obtaining employment as soon as they are eligible. In addition, regional airlines have recently increased hiring, generating high turnover among flight instructors, who are traditionally their main source of new pilots. High cost of training: Nearly all (16) selected schools' representatives identified the cost of a professional pilot degree program as a great or moderate challenge to recruiting and retaining pilot students. High education costs are not unique to these programs. Nonetheless, in addition to tuition, flight training fees alone often exceed $50,000, well above the cap for federal financial aid available to eligible students. Schools and regional airlines have taken a range of actions to address these challenges. For example, eight selected school representatives reported increasing flight instructors' compensation and benefits. In addition, some regional airlines' cadet programs provide mentorship and incentives such as bonus pay or tuition reimbursement to select students while they are still in school. The Department of Transportation (DOT) has also launched an initiative to assess the level of interest among veterans in becoming pilots and to examine strategies for employing military veterans as pilots.
Background In February 2011, Boeing won the competition to develop the Air Force’s next generation aerial refueling tanker aircraft, the KC-46. The KC-46 will allow for two types of refueling to be employed in the same mission—a refueling boom that is integrated with a computer assisted control system and a permanent hose and drogue refueling system. The boom is a rigid, telescoping tube that an operator on the tanker aircraft extends and inserts into a receptacle on the aircraft being refueled. See figure 1 for an example of boom refueling. The hose and drogue system is comprised of a long, flexible refueling hose and a parachute-like metal basket that provides stability. Drogue refueling is available via the centerline drogue system in the middle of the aircraft, or via wing aerial refueling pods located on each wing. The pods are used for simultaneous refueling of two aircraft. To develop a KC-46 tanker, Boeing modified a commercial 767 aircraft in two phases. In the first phase, Boeing modified the design of the 767 with a cargo door and an advanced flight deck display borrowed from its 787 aircraft and is calling this modified version the 767-2C. The 767-2C is built on Boeing’s existing production line. In the second phase, the 767-2C was militarized and brought to a KC-46 configuration in a separate Boeing facility. See figure 2 for a depiction of the conversion of the 767 aircraft into the KC-46 tanker with the boom deployed and the flight certifications needed at each stage. The Federal Aviation Administration has previously certified the airworthiness of Boeing’s 767 commercial passenger airplane (referred to as a type certificate) and in December 2017, awarded the amended type certificate for the 767-2C aircraft to Boeing. It is also responsible for certifying the design of the KC-46 with a supplemental type certificate. The Air Force is then responsible for certifying the airworthiness of the KC-46 with a military certification, as well as certifying the KC-46 and various receiver aircraft, such as F-16 fighters and C-17 cargo planes, for refueling operations. Boeing must complete developmental testing to support these certifications as well as to demonstrate that contract specifications have been met. After the first 4 KC-46 aircraft are delivered, the Air Force will complete operational testing to determine the KC-46’s operational effectiveness and operational suitability for combat. Boeing was awarded a fixed-price-incentive (firm target) contract for KC- 46 development, which includes the design, manufacture, and delivery of four test aircraft. Barring any changes, the contract specifies a ceiling price of $4.9 billion for Boeing to develop the first 4 aircraft, at which point Boeing must assume responsibility for all additional costs. The contract includes options to manufacture the remaining 175 aircraft with firm-fixed- price contract options for the first 2 production lots, and options with not- to-exceed fixed prices for production lots 3 through 13. For purposes of this report, a production lot refers to a set number of aircraft that must be built and delivered in a given time frame and procured with a specific year of funding. For example, the first production lot includes 7 aircraft procured with fiscal year 2015 funding that are to be built and then delivered to the Air Force starting in 2018. The original contract also required Boeing to deliver 18 fully capable aircraft by August 2017. 
The Under Secretary for Acquisition, Technology and Logistics approved the KC-46 program to enter low-rate initial production in August 2016. Since then, the Air Force has exercised options for the first 3 production lots for 34 aircraft totaling about $4.9 billion. Previously we reported that in January 2017, Boeing and the program office updated the schedule to reflect a 14-month delivery delay due to problems Boeing experienced wiring the aircraft, design issues discovered with fuel system components, a fuel contamination event, and test delays (see figure 3). As we reported, instead of meeting the original August 2017 date, the updated schedule shows Boeing would deliver the first 18 aircraft with booms and centerline drogue systems between September 2017 and February 2018. Then, the 9 wing aerial refueling pod sets would be delivered separately by October 2018, at which point Boeing will have delivered 18 fully capable aircraft. Cost Estimates and Performance Capability Goals Remain Favorable, but a Critical Deficiency Has Not Yet Been Resolved The KC-46 program’s total acquisition cost estimate remained stable over the past year at $44.4 billion, which is about $7.3 billion less than the original estimate. In addition, the aircraft is projected to meet all performance capabilities. However, Boeing is currently trying to resolve a critical deficiency it discovered in testing, which could affect performance. Cost Estimates Remain Stable Similar to last year, the Air Force estimates that the total program acquisition cost for the KC-46, which includes development, procurement, and military construction costs will be $44.4 billion. This is about $7.3 billion, or about 14 percent, less than the original estimate of $51.7 billion. Average program acquisition unit costs have decreased by the same percent because quantities have remained the same. Table 1 provides a comparison of the initial and current quantity and cost estimates. The Air Force decreased its cost estimate primarily because it has not added or changed requirements and therefore there were fewer engineering changes than expected. Program officials said the initial cost estimate included a large amount of funding for possible requirements changes, based on the Air Force’s experience with prior major acquisition programs. Military construction cost estimates also decreased as the Air Force has decided, for example, to reuse existing facilities at its operating bases rather than build new ones. Boeing Has Achieved Some Performance Goals and Others Are Projected to Be Met, though Additional Testing Is Needed The program expects to meet all of its 21 performance goals. For example, the aircraft is expected to be ready for operational use when required at least 89 percent of the time and, once it is deployed for an aerial refueling mission, be able to complete that mission 92 percent of the time. In addition, the aircraft is now using less than 1,557 gallons of fuel per flight hour, its fuel usage rate target. The program also closely tracks the actual weight of the aircraft because weight has a direct effect on the amount of fuel that can be carried. As of January 2018, program officials told us that there are approximately 176 pounds of margin to the operational empty weight target of 204,000 pounds. When we met with them in December 2017, Boeing officials told us they do not expect the aircraft to exceed the target weight. Appendix I provides a description of each of the performance capabilities. 
In some cases, the program will be tracking progress towards achieving performance capabilities while the aircraft is in operation. For example, the program set a reliability growth goal of 2.83 flight hours between unscheduled maintenance events due to equipment failure by the time the aircraft reaches 50,000 flight hours. As of November 2017, the program had completed about 2,159 flight hours, achieving 1.8 hours at that time. Program officials believe that the reliability will improve as additional flight hours are completed and as unreliable parts are identified and replaced. The 2017 Annual Report by the Office of the Director of Operational Test and Evaluation included a recommendation that the Air Force re-test the KC-46 in an operationally representative condition to demonstrate that aerial refueling systems could perform their required missions following an electromagnetic pulse event. This type of testing is related to the aircraft’s survivability performance goal, meaning the aircraft should be capable of operating in a hostile environment, including after a nuclear incident that delivers an electromagnetic pulse. The report stated that the program powered down or removed critical mission systems during this testing and that therefore, the KC-46’s capability to deliver fuel during or immediately following an electromagnetic pulse was not fully tested. Program officials stated that this testing was adequate to meet the initial contract specifications. They also stated that the program is assessing whether additional tests are needed to meet the new, more stringent standards that were issued by the Department of Defense after the fixed- price contract was signed. A Critical Deficiency Has Not Been Resolved Boeing is currently working to resolve a high-priority deficiency related to the performance of the aerial refueling boom that it discovered during testing. According to the 2017 Annual Report by the Director of Operational Test and Evaluation, analysis of boom aerial refueling testing to date showed a significant number of instances where the boom nozzle contacted the receiver aircraft outside the refueling receptacle. In many of those instances, the aerial refueling operators were unaware that those contacts had occurred. Boom nozzle contact outside the receptacle can damage antennae or other nearby structures. It is especially problematic for low-observable receiver aircraft, such as the F-22 fighter, because it can damage radar-absorbing coatings. Program officials said that Boeing is currently developing a software fix for the remote vision system that would provide aerial refueling operators better visibility for refueling operations to help avoid unintended boom contacts with receiver aircraft. The officials also said that Boeing is responsible for the costs to develop and retrofit the fix onto existing aircraft. Boeing Is Likely to Experience Additional Delays in Delivering the First 18 Aircraft Although Boeing schedule documents indicate that the company remains committed to delivering 18 fully capable aircraft by October 2018, a program office risk assessment, as well as our own analysis, project that Boeing will not deliver the aircraft until around May 2019, if risks are not mitigated. The company is taking steps to address several risks associated with developmental testing, but challenges remain. Boeing, not the government, is responsible for the cost of development delays based on the terms of the fixed-price contract. 
Schedule Risk Assessment Projects Additional Delays A program office schedule risk assessment from June 2017 projects that Boeing will not deliver the first 18 fully capable aircraft until May 2019, 7 months after the updated schedule and about 21 months later than the original plan, if Boeing does not mitigate existing program risks. Boeing has already missed delivery milestones in the updated schedule shown earlier in figure 3, because it had not yet completed developmental testing. Boeing still plans to deliver 18 fully capable aircraft by October 2018, but in a compressed time period. A comparison of the original, updated, and schedule risk assessment delivery schedules is shown in figure 4. Boeing Is Taking Steps to Mitigate Schedule Risks Boeing has efforts underway to mitigate several risks that threaten its ability to deliver the first 18 fully capable aircraft by October 2018. These key risks and efforts to address them are discussed below. Test aircraft configuration: Boeing needs to update test aircraft to the correct configuration before it can complete different types of testing that remain. For example, according to program officials, Boeing needs to ensure that test aircraft have up-to-date and approved wiring, software versions, and aircraft parts prior to Federal Aviation Administration testing for the supplemental type certificate and Air Force testing for the required military certificate. At a more basic level, Boeing also needs to finalize the design of the wing aerial refueling pods to start developmental testing on that subsystem. According to Boeing officials, the company and its wing aerial refueling pod supplier had underestimated the level of design drawing details the Federal Aviation Administration needed to review to certify that the parts conformed to the approved design. Over the past 4 years, this supplier has been negotiating with several key sub-tier suppliers for the necessary documentation and has obtained most of it. Boeing has co-located some of its employees with the supplier to provide technical support to complete the remaining documentation for certification. Boeing and the program office disagree on how long it will take to reach that certification milestone. Boeing projects it will have conformed wing aerial refueling pods ready for testing in March 2018, and program officials said there is risk to that time frame. Flight test pace: Boeing plans to complete about 6,550 remaining developmental flight test points by the end of June 2018 at a pace that is nearly double its current average. For example, some test points involve a KC-46 and receiver aircraft maintaining a specific airspeed and altitude during refueling. On average, from February 2016 through January 2018, Boeing completed about 689 test points per month. It would need to almost double this pace to about 1,310 test points per month and sustain that pace for a 5-month period to complete testing by June. Based on the average number of test points that Boeing has completed per month, as shown in figure 5, we project Boeing would finish the remaining test points about 5 months later than expected in early November 2018. We also project that delivery of 18 fully capable aircraft would occur around May 2019, assuming the same 5.5-month delivery time frame included in the updated schedule; a simplified sketch of this projection appears below. Boeing recognizes that achieving its planned flight test pace is one of the most significant program risks and has taken several actions to address this risk, as described below.
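As a rough, back-of-the-envelope illustration of the projection discussed above, the sketch below divides the remaining test points by the historical monthly pace and then adds the 5.5-month lead time from test completion to delivery. The figures come from this report, but the February 2018 start date, the add_months helper, and the handling of partial months are simplifying assumptions; GAO's underlying schedule analysis was more detailed than this simple division.

```python
# Simplified sketch of the flight test pace projection (figures from the report;
# the start date and rounding are simplifying assumptions, not GAO's actual model).
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the first day of the month `months` after d's month."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)

REMAINING_POINTS = 6_550      # developmental flight test points still to be completed
HISTORICAL_PACE = 689         # average points completed per month, Feb 2016 to Jan 2018
PLANNED_PACE = 1_310          # roughly double the historical pace
DELIVERY_LAG_MONTHS = 5.5     # time from end of testing to delivery of 18 aircraft

start = date(2018, 2, 1)      # assume the projection starts after the January 2018 data

months_at_historical_pace = REMAINING_POINTS / HISTORICAL_PACE   # about 9.5 months
months_at_planned_pace = REMAINING_POINTS / PLANNED_PACE         # about 5.0 months

test_completion = add_months(start, int(months_at_historical_pace))                 # ~Nov 2018
delivery = add_months(start, int(months_at_historical_pace + DELIVERY_LAG_MONTHS))  # ~May 2019

print(f"Testing months remaining at the historical pace: {months_at_historical_pace:.1f}")
print(f"Testing months remaining at the planned, roughly doubled pace: {months_at_planned_pace:.1f}")
print(f"Projected test completion: {test_completion:%B %Y}")
print(f"Projected delivery of 18 fully capable aircraft: {delivery:%B %Y}")
```

Small changes in the assumed start date or in how partial months are rounded shift the projected dates by a few weeks, but the overall conclusion is the same: at the historical pace, testing runs into late 2018 and delivery of 18 fully capable aircraft slips to around May 2019.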
For example, last year, Boeing moved from a “test once” approach—where testing would begin once a series of tests was approved by the Federal Aviation Administration and Department of Defense—towards a more incremental testing approach where a smaller set of tests could be conducted as soon as they are approved by a single entity. Program officials pointed out that, where possible, Boeing is still using a single test point to satisfy more than one requirement from both regulators. As of January 2018, Boeing also identified about 440 test points that could be eliminated because, according to program officials, data collected in other tests may provide sufficient knowledge to cover the eliminated test points. Boeing has also consolidated a large percentage of qualification testing resources at a single location to improve efficiency. Test planning: According to program officials, Boeing’s test plans do not fully account for the time needed to complete receiver aircraft certification testing. Program officials, government test officials, and Boeing officials said that tests for certifying F-16 fighters, C-17 cargo planes, and other aircraft to receive fuel from a KC-46 will take between 3 and 5 weeks to complete for each aircraft. This is longer than the 1 week for each aircraft that is currently included in Boeing’s test plan, according to company officials. Boeing officials said the company intends to update the test schedule in Spring 2018 to reflect more time to complete receiver aircraft certifications. Boeing has not yet quantified how much time will be added to the test schedule for these certifications or determined whether it will affect the overall delivery schedule. According to program officials, Boeing is required to have 8 receiver aircraft certified by the first KC-46 delivery. These officials stated that to avoid the risk of further delivery delays, the Air Force is discussing the possibility of reducing the number of receiver aircraft certifications needed if some, but not all, receiver aircraft are certified prior to first KC-46 delivery. This would allow the warfighter to start using KC-46 aircraft sooner rather than wait for all 8 receiver aircraft to be certified. Air Force officials still maintain, however, that 8 receiver certifications are required prior to operational testing, which is slated to begin in October 2018 and last for about 7 months. Retrofitting already produced aircraft: Based on the updated schedule, Boeing will be producing 49 aircraft, or about 27 percent of the total aircraft the Air Force plans to buy, before developmental testing is complete. Originally, the Air Force planned to buy 19 aircraft or about 11 percent of the total number concurrent with developmental testing. In general, DOD tries to limit the amount of concurrency because testing can reveal design or performance problems that need to be fixed, which could lead to costly retrofits or schedule delays. For example, Boeing already needs to retrofit 18 aircraft it has produced with an updated wiring design and 6 aircraft with new flooring and tires. The Under Secretary for Acquisition, Technology and Logistics allowed 27 percent concurrency on this program to avoid a break in production. Cost risk to the government is low because the KC-46 development contract specifies that Boeing must correct any deficiencies and bring development and production aircraft to the final configuration at no additional cost to the government. 
However, there could be schedule delays if continued testing reveals problems that need to be corrected on aircraft already built. As of January 2018, Boeing estimates KC-46 development will cost about $5.9 billion or about $1 billion over the contract ceiling price. KC-46 Development Problems Have Resulted in Less Refueling Capacity Than Currently Anticipated KC-46 development problems have resulted in delivery delays and kept the Air Force from achieving a higher level of refueling capacity it expected to achieve by this time. These problems have not resulted in additional costs to the government. However, if delivery delays continue past October 2018, the Air Force will need to maintain legacy aircraft such as the KC-135 longer than planned. The Air Force expected to have 470 tankers in January 2018—a combination of KC-46, KC-135, and KC-10 aircraft—for refueling missions, but only had 455 of these aircraft at that time. Since no KC-46 aircraft have been delivered, the Air Force has had to use KC-135 and KC-10 aircraft at a higher rate than expected. Air Force officials negotiated non-monetary considerations from Boeing to offset the lost military tanker capacity associated with the delay, such as obtaining additional training at no cost to the government for KC-46 pilots and maintenance personnel and support for the aircrew training system. According to program officials, Boeing has already provided almost all of these considerations even though the contract modification that includes them has not yet been signed by Boeing. According to Air Mobility Command officials, if there are delivery delays past October 2018, the Air Force would need to keep some KC-135 aircraft operational longer than planned. The cost of maintaining those KC-135 aircraft is estimated to be about $10.3 million per year per aircraft. Additionally, about $12 million per aircraft may also be needed, according to Command officials, for depot maintenance activities that are scheduled every 5 years. Command officials stated that the number of depot events that are needed will depend on how quickly Boeing can deliver expected KC-46 aircraft. We are not making any recommendations in this report, but believe the Under Secretary of Defense for Acquisition, Technology and Logistics should implement a prior recommendation to closely monitor the cost, schedule, and performance outcomes of the KC-46 program to identify positive or negative lessons learned. As one of only a few major acquisition programs to award a fixed-price incentive (firm target) development contract in recent years, evaluating performance and identifying lessons learned will be illustrative, important for informing decision makers, and help guide and improve future defense acquisition programs. Agency Comments We provided a draft of this report to DOD for comment. DOD did not provide any written comments, but the KC-46 program office provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; and the Secretary of the Air Force. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. 
Appendix I: KC-46 Performance Capabilities The program office has 21 performance goals that are critical to the KC- 46 aircraft’s military capability and track progress in meeting contract specifications. These performance goals include nine key performance parameters, five key system attributes, and seven technical performance measures. Table 2 provides a description of each key performance parameter and key system attribute and table 3 provides a description and status of each technical performance measure. Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Cheryl Andrew, Assistant Director; Matt Crosby; Kurt Gurka; Stephanie Gustafson; Katheryn Hubbell; Zachary Sivo; Nate Vaught; and Robin Wilson made key contributions to this report. Related GAO Products KC-46 Tanker Modernization: Delivery of First Fully Capable Aircraft Has Been Delayed Over One Year and Additional Delays are Possible. GAO-17-370. Washington, D.C.: March 24, 2017. KC-46 Tanker Aircraft: Challenging Testing and Delivery Schedules Lie Ahead. GAO-16-346. Washington, D.C.: April 8, 2016. KC-46 Tanker Aircraft: Key Aerial Refueling Capabilities Should Be Demonstrated Prior to the Production Decision. GAO-15-308. Washington, D.C.: April 9, 2015. KC-46 Tanker Aircraft: Program Generally on Track, but Upcoming Schedule Remains Challenging. GAO-14-190. Washington, D.C.: April 10, 2014. KC-46 Tanker Aircraft: Program Generally Stable but Improvements in Managing Schedule Are Needed. GAO-13-258. Washington, D.C.: February 27, 2013. KC-46 Tanker Aircraft: Acquisition Plans Have Good Features but Contain Schedule Risk. GAO-12-366. Washington, D.C.: March 26, 2012.
Why GAO Did This Study The KC-46 tanker modernization program, valued at about $44 billion, is among the Air Force's highest acquisition priorities. Aerial refueling—the transfer of fuel from airborne tankers to combat and airlift forces—is critical to the U.S. military's ability to effectively operate globally. The Air Force initiated the KC-46 program to replace about a third of its aging KC-135 aerial refueling fleet. Boeing was awarded a fixed-price-incentive contract to develop the aircraft. Among other things, Boeing was contractually required to deliver 18 fully capable aircraft (KC-46 aircraft with 9 sets of wing aerial refueling pods that allow for simultaneous refueling of 2 aircraft) by August 2017. The program plans to eventually field 179 aircraft in total. GAO was asked to monitor the KC-46 program because of problems Boeing is experiencing developing the aircraft. This is GAO's 7th report on the KC-46 program. This report assesses program progress and challenges toward achieving its cost goals and delivery schedule. GAO analyzed cost, schedule, development, and test information contained in program documents; and discussed results with officials from the KC-46 program office, other defense offices, the Federal Aviation Administration (responsible for certifying the design of the KC-46), and Boeing. What GAO Found The total acquisition cost estimate for the KC-46 refueling tanker aircraft remained stable over the last year at $44.4 billion. As shown in the table below, the estimate has decreased about $7.3 billion, or 14 percent, since the initial estimate. This decrease is due in part to stable requirements. The program updated its delivery schedule in 2017 to allow Boeing to delay delivery of the first 18 fully capable aircraft from August 2017 to October 2018— 14 months. A schedule risk assessment, as well as GAO's analysis, however projects that deliveries could slip to May 2019, 21 months from the original schedule, if risks are not mitigated. See figure. Boeing faces the following risks and challenges and is trying to address them: updating test aircraft to the correct configuration to complete remaining tests; completing flight tests at a pace that is almost double its monthly average; updating test plans to reflect a more realistic schedule for certifying aircraft, such as F-16 fighters and C-17 cargo planes, to be refueled by a KC-46; retrofitting production aircraft to their final configuration for delivery; and fixing a critical deficiency to keep the boom from contacting receiver aircraft outside the refueling receptacle. Because of the terms of the contract, Boeing, not the government, is responsible for nearly $1 billion in additional development costs already incurred. Boeing is also providing additional training for KC-46 pilots, among other things, to compensate the Air Force for delivery delays. Meanwhile, the Air Force is continuing to use KC-135 and KC-10 tankers for refueling missions. What GAO Recommends GAO believes the Department of Defense should implement a prior recommendation to document lessons learned given the program's challenges.
Background Overview of Personnel Security Clearance Process ODNI estimates that, as of October 1, 2015, approximately 4.2 million government and contractor employees were eligible to hold a security clearance. Personnel security clearances are required for access to certain national security information. National security information may be classified at one of three levels: confidential, secret, or top secret. The level of classification denotes the degree of protection required for information and the amount of damage that unauthorized disclosure could reasonably be expected to cause to national security. Specifically, unauthorized disclosure could reasonably be expected to cause (1) “damage,” in the case of confidential information; (2) “serious damage,” in the case of secret information; and (3) “exceptionally grave damage,” in the case of top secret information. According to the Office of Personnel Management (OPM) Federal Investigations Notice 16-02, tier 3 investigations are required for eligibility for access to secret and confidential information, or for noncritical sensitive positions, or “L” access. OPM Federal Investigations Notice 16-07 indicates that tier 5 investigations are required for eligibility for access to top secret or sensitive compartmented information, or for critical sensitive or special sensitive positions, or “Q” access. Once an executive branch agency determines that a position requires a certain level of access to classified information, the employee in that position completes a questionnaire for national security positions, which the requesting agency sends to an investigative service provider. NBIB— the bureau within OPM with responsibility for conducting personnel background investigations—conducts background investigations for most of the federal government; however, some agencies have authority delegated to them to conduct their own investigations. The investigative service provider then conducts a background investigation and submits an investigative report to the requesting agency. Adjudicators from the requesting agency use the information from the investigative report to determine whether to grant or deny the employee eligibility for a security clearance by considering guidelines in 13 specific areas that address (1) conduct that could raise security concerns and (2) factors that could allay those security concerns and permit granting a clearance. Individuals granted security clearances are investigated periodically—for as long as they remain in a position requiring access to classified information—to ensure their continued eligibility. The 2012 Federal Investigative Standards changed the frequency of periodic reinvestigations for certain clearance holders. Continuous Evaluation Is Intended to Supplement the Personnel Security Clearance Process According to Executive Order 13467, as amended, continuous evaluation is a vetting process to review the background of an individual who has been determined to be eligible for access to classified information or to hold a sensitive position at any time during the period of eligibility. Continuous evaluation is intended to fill the gap that exists between periodic reinvestigations in which issues relevant to an individual’s continued eligibility for a security clearance may go unreported or unknown. 
For example, while the Federal Investigative Standards have allowed for periodic reinvestigations to be conducted at any time following the completion of the previous investigation or reinvestigation, agencies have not been required to conduct them more frequently than every 5 years, at most, depending on the clearance level and investigative standards in effect. Like periodic reinvestigations, the purpose of continuous evaluation is to assist agencies in evaluating an individual’s continued eligibility for access to classified information. Continuous evaluation involves automated record checks conducted on a more frequent basis, whereas periodic reinvestigations are conducted less frequently and may include, among other things, subject and reference interviews. The types of records checked as part of continuous evaluation are the same as those checked for other personnel security purposes. Security-relevant information discovered in the course of continuous evaluation is to be investigated and adjudicated under the existing standards. According to ODNI, implementation of continuous evaluation will not alter clearance holders’ existing rights or responsibilities and it will incorporate protections for privacy and civil liberties. Continuous Evaluation Is a Key Initiative of the Personnel Security Clearance Reform Effort The enactment of the Intelligence Reform and Terrorism Prevention Act of 2004 initiated a reform effort including goals and requirements for improving the personnel security clearance process government-wide. In June 2008, Executive Order 13467 established the PAC as the government-wide governance structure responsible for driving the implementation of and overseeing security and suitability reform efforts. The PAC presently has four principal members: the Deputy Director for Management of OMB; the Director of National Intelligence, who is the Security Executive Agent; the Director of OPM, who is the Suitability Executive Agent; and the Under Secretary of Defense for Intelligence. The Executive Order also designated the Deputy Director for Management of OMB as the chair of the PAC. Among other things, the PAC is to work with agencies to implement continuous performance improvement programs, policies, and procedures; establish annual goals and progress metrics; and prepare annual reports on results. It is also to develop and continuously reevaluate and revise outcome-based metrics that measure the quality, efficiency, and effectiveness of the vetting enterprise. In April 2014, the PAC established the Program Management Office to implement security clearance reforms. This office includes subject-matter experts with knowledge of personnel security clearances and suitability determinations from OMB, ODNI, OPM, DOD, the Department of Homeland Security, the Department of Justice, the Department of the Treasury, and the Federal Bureau of Investigation. In March 2014, OMB established Insider Threat and Security Clearance Reform as a government-wide, cross-agency priority goal to improve interagency coordination and implementation within the area of personnel security clearances. Through this goal, the PAC and executive-branch agencies are to work to improve oversight to ensure that investigations and adjudications meet government-wide quality standards. 
Included among the goal’s key milestones are implementing a continuous evaluation policy for the executive branch that regularly assesses trusted insiders who have been granted, or are eligible for, access to classified national security information, and overseeing the establishment of continuous evaluation capabilities. ODNI is identified as the lead agency for achieving both of these milestones. In addition, continuous evaluation is identified as a key initiative in the PAC’s strategic framework for fiscal years 2017 through 2021 as part of an effort to modernize the vetting process. While the PAC is responsible for driving the implementation of and overseeing the overall government-wide reform effort, individual agencies are responsible for various aspects of the effort. For example, as the Security Executive Agent, ODNI is responsible for developing and issuing uniform and consistent policies and procedures to ensure the effective, efficient, timely, and secure completion of investigations, polygraphs, and adjudications relating to determinations of eligibility for access to classified information or eligibility to hold a sensitive position. In addition, Executive Order 12968, as amended, indicates that ODNI is responsible for setting the standards for continuous evaluation of those individuals who have access to classified information. According to ODNI, under these Executive Orders, it has responsibility for and oversight of continuous evaluation, as it is an investigative activity that supports eligibility determinations. As such, ODNI established a program office within the National Counterintelligence and Security Center to, among other things, establish policy, guidance, and standards for the implementation of continuous evaluation across the executive branch. DOD Has Conducted Research on Continuous Evaluation for More Than a Decade DOD has been piloting aspects of continuous evaluation for more than a decade—with pilot tests of automated record checks conducted as early as 2002. Specifically, PERSEREC has conducted several studies dating back to 2001 that have informed and evaluated DOD’s continuous evaluation pilots, including the utility of and costs associated with various data sources. These studies have focused on the technical capability to conduct automated record checks from over 40 government and commercial databases, the value and utility of automated record checks in tier 5 investigations, and investigative alternatives to the traditional periodic reinvestigation, among other things. The studies have also included recommendations to further improve DOD’s continuous evaluation program, as well as areas for future research. PERSEREC noted that it undertook these studies to identify ways to make the personnel security system more efficient, fair, and effective. According to PERSEREC, starting in 2004 with the formation of the government-wide security clearance reform effort, it began to plan for a broader application of its research beyond the department. Using this body of knowledge, DOD has incrementally improved its automated record check capabilities and therefore its ability to implement a continuous evaluation program, which it did in 2014 at the recommendation of the Secretary of Defense. 
Specifically, following the September 2013 shooting at the Washington Navy Yard, the Secretary of Defense directed concurrent internal and independent reviews to identify and recommend actions to address any gaps or deficiencies in DOD programs, policies, and procedures regarding, among other things, the granting and renewing of security clearances for department and contractor personnel. In March 2014, the Secretary of Defense identified four key recommendations based on the findings and recommendations from those reviews. One of those recommendations was to implement continuous evaluation to provide automated record checks of personnel with access to DOD facilities or classified information. In addition, DOD Instruction 5200.02, which was also issued in March 2014, states that all personnel in national security positions shall be subject to continuous evaluation. Consistent with the recommendation and the DOD Instruction, the department implemented a continuous evaluation pilot in October 2014, the details of which are discussed later in the report. ODNI Has Taken an Initial Step to Implement Continuous Evaluation across the Executive Branch but Has Not Determined Key Program Aspects or How it Will Monitor and Measure Performance In October 2016, ODNI took an initial step to implement continuous evaluation across the executive branch in a phased approach, but as of May 2017, it had not yet formalized the program in policy. The seven agencies we spoke with have been limited in their abilities to plan for the implementation of continuous evaluation, including developing estimated costs, in accordance with ODNI’s phased approach. This is due, in part, to the fact that ODNI has not yet determined key aspects of the program, such as when the future phases of implementation will occur or what they will entail, and none of the agencies has completed implementation plans. Further, ODNI lacks plans for monitoring and measuring the performance of continuous evaluation across the executive branch. ODNI Has Taken an Initial Step to Implement Continuous Evaluation across Executive Branch Agencies, but Has Not Yet Formalized the Program in Policy ODNI has taken an initial step to implement continuous evaluation across all executive branch agencies in a phased approach, but it has not yet formalized the program in policy. Specifically, in October 2016, ODNI initiated the first phase of continuous evaluation and outlined requirements for this phase in interim guidance distributed to implementing agencies in December 2016. For the first phase of implementation, executive branch agencies are to conduct certain continuous evaluation record checks against a portion of their national security population. Specific details of the requirements were omitted from this report because the information is sensitive. According to OPM Federal Investigations Notice 17-03, the first phase of continuous evaluation is to be implemented by the end of fiscal year 2017. These checks are conducted in addition to any initial investigations or periodic reinvestigations occurring in fiscal year 2017. ODNI provided agencies with prioritization guidance to help them select individuals for continuous evaluation. Nearly 80 executive branch agencies are subject to the requirements for this first phase of implementation. ODNI has taken steps to establish the executive branch-wide continuous evaluation program in coordination with key stakeholders. 
For example, in June 2013, ODNI established a Continuous Evaluation Working Group— consisting of 12 core voting member agencies—as a mechanism to effectively coordinate continuous evaluation implementation among executive branch departments and agencies. According to the group’s charter, it meets on at least a quarterly basis and is responsible for coordinating the development of continuous evaluation standards, policies, and procedures, among other things. Since January 2015, ODNI has also issued interim guidance to executive branch agencies that are subject to its continuous evaluation requirements informing them about the purpose of continuous evaluation and providing them with some details of the program. Further, to inform the establishment of the executive branch-wide program, ODNI itself began a 1-year continuous evaluation pilot in September 2016, according to ODNI officials. Specific details of ODNI’s pilot were omitted from this report because the information is sensitive. In addition to developing standards for continuous evaluation and its oversight role, ODNI is also developing a system that agencies can use to conduct continuous evaluation. According to ODNI, its system is under development and will be available to all executive branch agencies with a full suite of continuous evaluation data sources. Agencies may opt to: (1) use ODNI’s system, (2) develop their own technical solution, (3) partner with another agency to fulfill their continuous evaluation requirements, or (4) some combination of the above options. ODNI asked agencies in December 2016 to provide a preliminary determination as to how they will satisfy future automated records checks requirements to allow ODNI’s continuous evaluation program to adequately plan for system enrollee volume and data usage. Specific details regarding the response of executive branch agencies to this request were omitted from this report because the information is sensitive. Some executive branch agencies stated the following: Department of Justice and State officials stated that they plan to use ODNI’s system once its development is complete; DOD officials stated that they plan to use their own internal system that they are developing to conduct continuous evaluation, but that they may use ODNI’s system to conduct certain checks; and Department of Homeland Security officials noted that they plan to use a combination of existing internal agency systems and ODNI’s system. Standards for Internal Control in the Federal Government states that management should externally communicate the necessary information to achieve an entity’s objectives. Effective information and communication are vital for enabling an entity to achieve its objectives, which can be accomplished through written guidance. While ODNI has provided some details of the program to implementing executive branch agencies through interim guidance, it has not yet formalized the continuous evaluation program through a Security Executive Agent Directive. Specifically, in May 2017, ODNI officials stated that ODNI had not yet issued a Security Executive Agent Directive for continuous evaluation, but that a draft directive was undergoing interagency coordination. ODNI officials stated that the directive will contain a definition of continuous evaluation that is consistent with, but expands upon, the definition contained in the relevant Executive Order. 
These officials stated that the expanded definition will help to clarify continuous evaluation and ensure that agencies have a common understanding of the program. In addition, ODNI officials stated that they have developed draft implementation guidelines, which they plan to issue after the directive is finalized. ODNI officials stated that the interim guidance will remain in effect until the Security Executive Agent Directive or follow-on interim guidance is issued. DOD’s continuous evaluation program—which it began in October 2014, in advance of implementation of continuous evaluation executive branch- wide by ODNI—identified, in a requirements document for its continuous evaluation IT system, that the most important gap in the development of the department’s program was the lack of a national or DOD-level policy. Specifically, the requirements document notes the lack of a policy that fully describes the continuous evaluation process or purpose, or the end uses of data. The requirements document further notes that there are multiple definitions of continuous evaluation and, due to the lack of policy, there is not a common lexicon of terms used in the continuous evaluation program, thereby creating an additional gap. While ODNI reports that the policy is under review, it has not prioritized the implementation of continuous evaluation and, as a result, has missed numerous milestones in issuing the policy since 2014. Specifically, the original Insider Threat and Security Clearance Reform cross-agency priority goal milestone for ODNI to issue a continuous evaluation policy was July 2014. This milestone was not attained, and it was adjusted to September 2016, a milestone that was also missed. The current milestone for issuing the policy is October 2017. Additionally, ODNI has missed other milestones for implementing a continuous evaluation program, as discussed later in the report. Furthermore, ODNI has initiated the first phase of continuous evaluation without a government-wide issued policy or an expanded definition of continuous evaluation. As a result, agencies may develop inconsistent approaches to implementing continuous evaluation. For example, DOD officials stated that DOD has developed its own path for continuous evaluation from ODNI’s limited guidance and that in the absence of a government-wide policy, DOD is developing its own internal guidance. As a result, the approach to continuous evaluation taken by DOD—the executive branch agency with the majority of security clearance holders— may differ from that of other executive branch agencies once fully implemented. Ultimately, such inconsistent approaches to continuous evaluation could affect reciprocity among agencies—another key objective of government-wide security clearance reform efforts. Without issuing a Security Executive Agent Directive in advance of the next phase of implementation—the timeframe for which ODNI has not yet determined—that includes, among other things, an expanded definition of continuous evaluation, agencies may develop inconsistent approaches to continuous evaluation, resulting in an uneven and perhaps ineffective implementation across the federal government. 
ODNI Has Not Yet Determined Key Aspects of the Continuous Evaluation Program, and Executive Branch Agencies Have Been Limited in Their Ability to Plan for Implementation

ODNI has not yet determined key aspects of its continuous evaluation program, which has limited the ability of executive branch agencies to plan for implementation in accordance with ODNI's phased approach. For example, while ODNI has initiated the first phase of continuous evaluation in coordination with implementing executive branch agencies, it has not yet determined what the future phases of implementation will entail, or when they will occur. Specifically, ODNI officials stated that they have not set any further timeframes for implementing continuous evaluation or determined agency requirements for future phases. Moreover, the timeframes for the implementation of continuous evaluation across the executive branch have been extended over time. For example, the original milestone set by the government-wide reform effort for implementing continuous evaluation was the fourth quarter of fiscal year 2010, and it was not attained. The PAC subsequently set an Insider Threat and Security Clearance Reform cross-agency priority goal milestone for developing an initial continuous evaluation capability for the most sensitive top secret clearance holders by September 2014—which was extended to December 2014—and a milestone for implementing the capability for additional clearance holders by December 2016. These milestones were also missed. As of May 2017, continuous evaluation had not yet been fully implemented, and ODNI had not set a milestone for when it would occur. Although ODNI is one of the goal leaders for the Insider Threat and Security Clearance Reform cross-agency priority goal, a senior ODNI official stated that the milestones were arbitrarily set, and that implementing continuous evaluation has proven to be challenging as a result of several technical and legal issues that need to be resolved. Further, ODNI officials highlighted the complexities associated with developing a whole-of-government continuous evaluation program and noted that a number of challenges have come to light as they have been developing the program, which have contributed to missed milestones. However, ODNI has not prioritized setting internal milestones that it considers reasonable for the future phases of implementation. ODNI officials stated that because continuous evaluation is a new initiative, no realistic timeline for full implementation will be set until the initial results of implementation are analyzed and technical capabilities have matured. Further, they stated that although they are unable to develop a timeline for full implementation at this time, they are actively working to implement the program. In addition, as previously discussed, ODNI's milestone for issuing a continuous evaluation policy has also been adjusted over time. Figure 1 shows the adjusted executive branch milestones for issuing a continuous evaluation policy and implementing a continuous evaluation program, including developing a technical capability. The uncertainty regarding the requirements and timeframes for the future phases of the program has affected the ability of executive branch agencies to plan to implement continuous evaluation and estimate the associated costs.
First, although OPM Federal Investigations Notice 17- 03 notes that the first phase of continuous evaluation is to be implemented by the end of fiscal year 2017, none of the seven executive branch agencies we spoke with has completed an agency-specific implementation plan. While some agencies, such as DOD and State— both of which have established continuous evaluation programs in advance of implementation across the executive branch—have developed concepts of operations or standard operating procedures for continuous evaluation, all seven agencies we spoke with stated that they are waiting for additional information from ODNI before completing their implementation plans. Department of Homeland Security officials stated that they are waiting for ODNI to define and schedule the future phases of implementation and to finish developing its continuous evaluation IT system, because there could be unknown policy implications that would affect the Department’s planning efforts. In August 2017, ODNI officials described plans to distribute information to executive branch agencies regarding continuous evaluation requirements for fiscal year 2018. Specific details of these plans were omitted from this report because the information is sensitive. Second, six of the seven agencies we spoke with noted challenges associated with estimating the costs of implementation. For example, while the Federal Bureau of Investigation has developed some cost estimates for implementing continuous evaluation, officials noted that it is challenging to estimate the full costs of the program until they receive additional information from ODNI, such as the requirements for future phases of implementation, as well as information about record check, technology, and personnel requirements. DOD officials stated that the number of individuals enrolled in continuous evaluation directly relates to the amount of agency resources required, for example, to validate, respond to, and adjudicate alerts. Two agencies we spoke with stated that they had not yet taken any steps to estimate costs because they are waiting for additional information from ODNI. In August 2017, ODNI officials stated that they plan to leverage an upcoming OMB budget data request, administered through the PAC, to obtain agency funding estimates for expenses related to conducting continuous evaluation from fiscal years 2017 through 2019. We have previously identified weaknesses associated with estimating the costs of personnel security clearance reform. Specifically, in April 2015 we found, among other things, that long-term costs of implementing the 2012 Federal Investigative Standards—including the implementation of continuous evaluation—were not addressed in personnel security clearance background investigation reform planning documentation. Further, we found that OMB did not have current and detailed cost- estimate information from executive-branch agencies, because it did not begin to solicit the information from the agencies until almost 2 years after the updated standards were approved. 
As such, we recommended in April 2015, among other things, that the Deputy Director for Management of OMB, in the capacity as Chair of the PAC, develop long-term funding estimates for changes to the federal government’s investigation practices resulting from the implementation of the standards, including but not limited to costs related to: (1) information technology adjustments to enable government-wide data sharing; (2) implementation of continuous evaluation of clearance holders; and (3) additional personnel resources for twice-as-frequent reinvestigations. OMB concurred with the recommendation. However, as of October 2017, this recommendation remained open. We continue to believe that this recommendation is valid and should be implemented. In addition, the seven executive branch agencies we spoke with identified other areas related to agency expectations for which they need information from ODNI. For example, officials from the Department of Justice; the Bureau of Alcohol, Tobacco, Firearms, and Explosives; and the Federal Bureau of Investigation stated that while they would like to use ODNI’s IT system to conduct all or at least some of the record checks that will be required, they will need to develop an interface with ODNI’s system to do so. However, these officials stated that they were unaware of ODNI’s technical requirements for that interface. These officials further stated that without information related to the technical requirements, they are unable to sufficiently plan or budget for continuous evaluation. ODNI officials stated that although ODNI’s IT system remains under development, information on technical interface requirements is available to all stakeholders and that they meet with agencies to discuss agency- specific IT requirements. According to ODNI, several executive branch agencies have expressed an interest in using ODNI’s IT system to conduct at least some, if not all, of the checks that will be required once continuous evaluation is fully implemented. ODNI officials acknowledged that agencies will need to develop an interface to use the system, and that agencies will be responsible for the associated costs. The Project Management Institute’s Guide to the Project Management Body of Knowledge (PMBOK® Guide) provides guidelines for managing individual projects, including developing a project management plan—in advance of executing the project—that describes how the project will be executed, monitored, and controlled. The plan should include, among other things, project schedules and stakeholder roles and responsibilities. The guide notes that updates may be made to the project management plan as changes may occur as the project progresses. ODNI officials managing the continuous evaluation program stated that they have not developed a project management plan for the implementation of continuous evaluation, to include an implementation schedule, because they are still in the planning stage. However, ODNI has already started to implement the program. Without a plan that, among other things, identifies reasonable milestones for the future phases of implementation, ODNI does not have a schedule against which it can track its progress or to which it is accountable. Further, without a plan for implementing continuous evaluation executive branch-wide that includes a schedule and agency requirements for future implementation phases, full implementation—which has been delayed for almost 7 years—may be further delayed. 
While a phased approach to implementation provides agencies time to adapt their personnel security clearance programs to changing requirements, without an implementation plan outlining ODNI's expectations of agencies' roles and responsibilities, agencies are unable to sufficiently plan for the implementation of continuous evaluation, including identifying required resources and estimating potential costs. Further, without clearly defining expectations for agencies—including information such as the planned requirements for future phases of implementation—continuous evaluation may not be fully implemented across the executive branch. Incomplete implementation could potentially prevent the federal government from identifying security-relevant information in a timely manner, thereby exposing it to further national security risks, such as unauthorized disclosures of classified information. Limited planning, both by ODNI and at the agency level, ultimately puts the success of the continuous evaluation program—a key aspect of the security clearance reform effort—at risk.

ODNI Lacks Plans for Monitoring and Measuring Continuous Evaluation Program Performance

ODNI lacks a plan to monitor and measure the performance of continuous evaluation across executive branch agencies. Specifically, ODNI officials stated that ODNI has not developed a plan to monitor or assess the performance of continuous evaluation across the executive branch, including for the first phase of implementation, which is underway. ODNI officials stated that, ideally, agencies will report that they have met the fiscal year 2017 requirements for the first phase of implementation, and that ODNI will follow up with agencies that do not report. The officials added that, in the long term, ODNI would like to incorporate continuous evaluation into its Security Executive Agent National Assessment Program, through which it conducts oversight of the security clearance process at executive branch agencies, but that continuous evaluation is not currently included in the oversight program. As previously discussed, according to Executive Order 13467, ODNI, as the Security Executive Agent, is to direct the oversight of investigations, reinvestigations, adjudications, and, as applicable, polygraphs for individuals' eligibility for access to classified information, or eligibility to hold a sensitive position made by any agency. Similarly, Executive Order 12968, as amended, indicates that ODNI is responsible for determining standards for continuous evaluation. According to ODNI, its authorities under the Executive Orders include responsibility for and oversight of continuous evaluation as it is an investigative activity that supports eligibility determinations. Standards for Internal Control in the Federal Government emphasizes the importance of assessing performance over time, noting that ongoing monitoring should be built into operations, performed continually, and responsive to change. The PMBOK® Guide also states that project management includes monitoring and controlling work to meet performance objectives. Without developing a plan to monitor continuous evaluation—including assessing continuous evaluation at various phases of implementation—ODNI cannot ensure that continuous evaluation is being conducted consistently across the executive branch, and it may experience challenges in identifying any needed modifications to the program.
Further, ODNI cannot ensure that continuous evaluation is effectively meeting its critical purpose of filling the information gap between investigative cycles to identify risks to national security. Additionally, we reported in 2012 that federal agencies engaging in large projects can use performance measures to determine how well they are achieving their goals and to identify any areas for improvement. Reporting on these measures can help key decision makers within agencies, as well as stakeholders, to obtain feedback for improving both policy and operational effectiveness. Moreover, performance measures need to provide managers and other stakeholders with timely, action- oriented information in a format that helps them make decisions that improve program performance. Throughout our body of work on leading performance management practices we have identified several attributes of successful performance measures, which include, among other things, measures that are clear, quantifiable, and objective, and that are linked to measurable goals. However, ODNI has not developed and distributed to executive branch agencies performance measures to assess the effectiveness of continuous evaluation once it is implemented executive branch-wide. ODNI officials stated that they would like to collect metrics in order to determine the potential effects of continuous evaluation, in particular on agency resources. Although these officials stated that they have had some discussions with DOD about the types of metrics it might want to collect, such as the number of false positives and the resources required to address the workload, ODNI has not prioritized the development of performance measures. In February 2017, ODNI officials stated that they had not developed—or distributed to DOD or other agencies conducting continuous evaluation—any performance measures for continuous evaluation. These officials stated that once continuous evaluation has matured, ODNI plans to identify appropriate measures and determine a mechanism to collect and analyze them. In August 2017, ODNI officials stated that they had developed a draft list of metrics for fiscal year 2017. Once the metrics are finalized, these officials stated that they would issue guidance to agencies requesting them to report these metrics to ODNI. However, since ODNI initiated the first phase of continuous evaluation in October 2016, without developing and distributing performance measures to executive branch agencies, it is unclear whether agencies are positioned to collect and report the information to ODNI for fiscal year 2017. Developing performance measures before the program fully matures could help it to identify potential program modifications needed prior to the next phase of implementation, as well as prior to full implementation. Further, without developing clear, quantifiable, and objective performance measures that are linked to measurable goals for agencies to track, and without determining a process and schedule for agencies to regularly report those measures, ODNI cannot ensure that the first phase of the program it has already initiated is effective or achieving similar results at all agencies, which could ultimately affect reciprocity. 
DOD and State Have Designed, Piloted, and Evaluated Continuous Evaluation to Varying Extents

DOD and State have designed, piloted, and evaluated continuous evaluation, although their respective approaches have varied in scope, size, and duration—with DOD's pilot involving the most record checks, the largest population, and the longest duration. As previously discussed, DOD's efforts to design, pilot, and evaluate continuous evaluation have been ongoing for more than a decade, and they pre-date efforts at ODNI to develop and implement an executive branch-wide continuous evaluation program. According to ODNI officials, as of February 2017, DOD and State were the only agencies, other than ODNI, that had piloted continuous evaluation. ODNI officials stated that DOD and State's pilots were conducted at the discretion of those agencies, and that while ODNI did not oversee them, the results of the pilots have helped inform ODNI's development of an executive branch-wide program. These pilots were ongoing prior to ODNI's December 2016 interim guidance outlining the fiscal year 2017 continuous evaluation requirements for executive branch agencies, and as a result, both DOD and State have taken different approaches to developing their programs.

DOD's Continuous Evaluation Pilot

In October 2014, consistent with the Secretary of Defense's March 2014 recommendation to implement continuous evaluation and DOD Instruction 5200.02, DOD initiated a continuous evaluation pilot that included approximately 100,000 military, civilian, and contractor clearance holders, using a limited set of trusted commercial and government data sources. DOD has conducted this pilot in a phased approach, increasing the number of cleared individuals enrolled over time, in accordance with enrollment milestones set as part of the Insider Threat and Security Clearance Reform cross-agency priority goal. Specifically, the department expanded enrollment to 225,000 DOD clearance holders in December 2015 and 500,000 in December 2016, and it plans to increase the enrolled population to 1 million by the end of calendar year 2017. The department has also set an internal goal to enroll all clearance holders department-wide by the end of fiscal year 2021. DOD has developed its own continuous evaluation IT system—which is called Mirador, and is separate from the IT system that ODNI is developing—to conduct automated record checks of commercial and government data sources on the enrolled population, with the goal of near real-time identification of adverse information to be considered in the evaluation of an individual's continued eligibility for access to classified information. DOD officials developing the system stated that while they are currently using Mirador to conduct automated record checks for continuous evaluation, the system remains under development, and they are integrating additional data sources and user requirements as those are identified. As of February 2017, the department had implemented seven data sources in Mirador, which provide information about suspicious financial and criminal activity, among other things. Another nine sources were undergoing testing or were otherwise in progress. The department expects Mirador to reach initial operating capacity in fiscal year 2018. DOD officials stated that aspects of Mirador are still manual, such as enrolling individuals, but that they plan to take steps to automate them.
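To make the automated record check concept described above more concrete, the sketch below models a simplified continuous evaluation cycle: an enrolled population is checked against a set of data sources, and any adverse information returned is surfaced as an alert for review. This is an illustrative sketch only, not DOD's Mirador implementation; the data source names, record contents, and function names are assumptions introduced for the example.

```python
# Simplified, hypothetical illustration of automated record checks for a
# continuous evaluation program. This is not DOD's Mirador system; data
# source names, record formats, and alert handling are assumptions.

from dataclasses import dataclass

@dataclass
class Alert:
    person_id: str
    source: str
    detail: str

def run_record_checks(enrolled_ids, data_sources):
    """Query each data source for each enrolled individual and collect alerts."""
    alerts = []
    for person_id in enrolled_ids:
        for source_name, query in data_sources.items():
            for finding in query(person_id):  # adverse records, if any
                alerts.append(Alert(person_id, source_name, finding))
    return alerts

# Hypothetical data sources that return adverse records for an individual.
def criminal_records(person_id):
    return ["misdemeanor arrest"] if person_id == "P-002" else []

def financial_records(person_id):
    return ["account 90+ days delinquent"] if person_id == "P-003" else []

data_sources = {"criminal": criminal_records, "financial": financial_records}
enrolled = ["P-001", "P-002", "P-003"]

for alert in run_record_checks(enrolled, data_sources):
    # In practice, an alert would go to a validation cell for review before
    # any report is sent to the individual's security manager.
    print(f"{alert.person_id}: {alert.source} alert: {alert.detail}")
```

In DOD's program, as described below, alerts of this kind are routed to a validation cell and, where warranted, to adjudicators rather than acted on automatically.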
DOD officials stated that, depending on the data source, they run record checks on enrolled individuals daily, monthly, quarterly, or annually. According to DOD officials, if a record check results in an alert, such as for criminal activity, Mirador forwards the alert to DOD's continuous evaluation validation cell—within the Defense Security Service, which manages the department's continuous evaluation program—to ensure that: (1) the alert applies to the correct individual; (2) the issue was not previously known; and (3) the issue is adjudicatively relevant. DOD officials stated that if an analyst determines that an alert is valid—meaning that all three of the above statements are believed to be true—then the analyst generates a report and forwards it to the individual's designated security manager. Alerts are prioritized for analyst review according to business rules designed around the severity of the alert, and according to DOD officials, all alerts are reviewed by a supervisor following an analyst's initial determination. The officials stated that currently, if additional investigative work is required based on the alert, the results of that investigation are forwarded to an adjudicator to make a determination as to whether the alert affects the individual's continued eligibility for a security clearance. The officials added that the due process safeguards in place for periodic reinvestigations are also in place for continuous evaluation. Figure 2 provides an overview of DOD's continuous evaluation process. DOD has collected and analyzed metrics on the results of its current pilot. For example, according to DOD data, as of February 2017, continuous evaluation had identified 12,400 alerts. Of those alerts, 2,064—pertaining to 1,816 individuals—were determined to be valid, meaning that they were adjudicatively relevant and not previously known. According to DOD, action has been completed on 1,307 of those cases. Specifically, 859 cases were closed with a favorable decision, but context was added to the individuals' records; in 375 cases the subject separated and/or no longer needed access; and 62 cases involved a clearance revocation, condition, or warning. For DOD's secret-eligible population, continuous evaluation helped to identify risk, on average, 6 years 7 months sooner than the traditional 10-year periodic reinvestigation model, and 1 year 5 months earlier for the top secret-eligible population, which is to be reinvestigated every 5 years. DOD officials stated that these metrics are presently tracked manually by the Consolidated Adjudications Facility, and they identified a need to automate the process, going forward. In addition, DOD officials stated that they have shared the results of the pilot and lessons learned with ODNI through the Continuous Evaluation Working Group. For example, DOD identified lessons learned related to identifying the right data sources, eliminating duplicate alerts, the frequency of record checks, methods for achieving identity resolution, and the need for operational access to reporting data. Most recently, DOD issued Department of Defense Manual 5200.02 in April 2017, which includes continuous evaluation among the responsibilities and procedures of the DOD Personnel Security Program.

State's Continuous Evaluation Pilot

State began its continuous evaluation pilot in January 2015 to evaluate the coverage and reliability of public records information, using a public records service provider.
Specifically, it compared information received from public record checks, such as criminal and financial activity, against information contained in personnel security files for approximately 8,600 personnel. State found, among other things, that while public records can provide coverage beyond the traditional scope of investigations, the quality of the information varies, and not all jurisdictions participate. State continued its pilot in 2016 and expanded the enrolled population to include its entire tier 5 population. Additionally, the focus of the pilot shifted from evaluating the usefulness of public records information to evaluating the alerts received. State officials stated that the results of the public record checks are reviewed by the department's continuous evaluation team, which determines whether the information is new, accurate, and relevant, and if so, whether it needs further review and investigation. These officials stated that because State has authority to conduct its own investigations, it is easy to conduct investigative follow-up. According to officials, minor issues, such as traffic violations, are added to personnel files for consideration during the individual's next periodic reinvestigation. According to State officials, as of March 2017, they had not revoked any clearances as a result of the identification of derogatory information through continuous evaluation. As of April 2017, State had invested approximately $2.4 million in its continuous evaluation pilot for contract costs and personnel to administer the program, and, according to State officials, ODNI provided approximately one-third of that funding. State officials stated that because ODNI provided funding, State has voluntarily shared some lessons learned with ODNI, although it was not tasked to do so. Some details of State's pilot were omitted because the information is sensitive.

Number of Agencies Meeting Periodic Reinvestigation Timeliness Goals Decreased from Fiscal Years 2012-2016, and Potential Continuous Evaluation Effects Are Unknown

The number of executive branch agencies meeting established timeliness goals for completing periodic reinvestigations decreased from fiscal years 2012 through 2016. Additionally, while executive branch agencies have already initiated the first phase of continuous evaluation, the potential effects of continuous evaluation on periodic reinvestigations and agency resources are unknown, as they have not been assessed.

Executive Branch Agencies Meeting Established Timeliness Goals for Completing Periodic Reinvestigations Decreased from Fiscal Years 2012 through 2016

Our analysis of timeliness data for specific executive branch agencies showed that the percent of agencies meeting timeliness goals decreased from fiscal year 2012 through 2016. As part of the Insider Threat and Security Clearance Reform cross-agency priority goal, since the second quarter of fiscal year 2014, the PAC has reported quarterly on agency timeliness. Among other things, the PAC reports on the average number of days taken, for the executive branch as a whole, to complete the end-to-end process for periodic reinvestigations, as compared with the following goals for the fastest 90 percent of periodic reinvestigations: 15 days to initiate a case, 150 days to conduct the investigation, and 30 days to adjudicate—totaling 195 days to complete the end-to-end processing of the periodic reinvestigation.
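Because the timeliness discussion that follows repeatedly refers to the average number of days for the fastest 90 percent of periodic reinvestigations, a brief sketch of one way to compute that measure may be helpful. It uses hypothetical case durations and is only an illustration of the metric as described above, not the PAC's actual reporting methodology.

```python
# Illustrative sketch of the timeliness measure described above: the average
# end-to-end processing time for the fastest 90 percent of periodic
# reinvestigations, compared with the phase goals. Case durations here are
# hypothetical; this is not the PAC's reporting tool.

PHASE_GOALS_DAYS = {"initiate": 15, "investigate": 150, "adjudicate": 30}
END_TO_END_GOAL_DAYS = sum(PHASE_GOALS_DAYS.values())  # 15 + 150 + 30 = 195

def fastest_90_percent_average(durations_in_days):
    """Average end-to-end days for the fastest 90 percent of cases."""
    ordered = sorted(durations_in_days)
    cutoff = max(1, int(len(ordered) * 0.9))  # drop the slowest 10 percent
    fastest = ordered[:cutoff]
    return sum(fastest) / len(fastest)

# Hypothetical end-to-end durations (in days) for a set of reinvestigations.
sample_cases = [120, 150, 170, 180, 190, 200, 210, 225, 240, 400]

average = fastest_90_percent_average(sample_cases)
print(f"Average for fastest 90 percent: {average:.0f} days")
print(f"Meets {END_TO_END_GOAL_DAYS}-day end-to-end goal: {average <= END_TO_END_GOAL_DAYS}")
```

Under this reading, the 195-day end-to-end goal is simply the sum of the phase goals, and an agency can miss it in a given quarter even when many individual cases finish quickly, because the measure is an average across the fastest 90 percent of cases rather than a per-case test.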
For fiscal year 2016, the PAC reported that the executive branch as a whole:

• did not meet the goal of conducting the investigative portion of periodic reinvestigations within 150 days for the fastest 90 percent of cases for any quarter. The average number of days ranged from 175 days to 192 days.

• did not meet the goal of completing periodic reinvestigations—the end-to-end goal—within 195 days for any quarter of fiscal year 2016. The average ranged from 209 days to 227 days.

Our analysis of timeliness data for specific executive branch agencies showed that the percent of agencies that reported meeting timeliness goals decreased from fiscal year 2012 through 2016. Specifically,

• while 84 percent of the executive branch agencies met the 150-day investigative goal for at least three of four quarters for the fastest 90 percent of periodic reinvestigations in fiscal year 2012, only 18 percent of the agencies met the investigative goal in fiscal year 2016.

• while 84 percent of the executive branch agencies met the end-to-end processing goal of 195 days for at least three of four quarters for the fastest 90 percent of periodic reinvestigations in fiscal year 2012, only 22 percent of the agencies completed their fastest 90 percent of periodic reinvestigations within 195 days for at least three of four quarters in fiscal year 2016.

Of the agencies we reviewed, we found that agencies which use NBIB as their investigative service provider and agencies with delegated authority to conduct their own investigations both experienced challenges in meeting established timeliness goals for periodic reinvestigations in fiscal years 2015 and 2016. For example, 50 percent of the agencies with delegated authority completed investigations for at least three of four quarters for the fastest 90 percent of periodic reinvestigations within 150 days in fiscal year 2015, and 44 percent of agencies with delegated authority met the timeliness goal in fiscal year 2016. Of the executive branch agencies for which we obtained timeliness data from ODNI and which use NBIB as their investigative service provider, NBIB completed the investigative portion within 150 days for 0 percent of the agencies in fiscal year 2015, and completed it within that timeframe for 6 percent of the agencies in fiscal year 2016 for at least three of four quarters for the fastest 90 percent of reinvestigations. Of the executive branch agencies we reviewed, 67 percent met the adjudication timeliness goal of 30 days in fiscal year 2016 for at least three of four quarters for the fastest 90 percent of reinvestigations. Specific details on the timeliness of individual executive branch agencies' periodic reinvestigations were omitted from this report because the information is sensitive. According to NBIB officials, as of June 2017, NBIB's investigation backlog totaled approximately 673,000 cases—about 183,000 of which were periodic reinvestigations for both tier 3 and tier 5 clearances. NBIB cited the September 2014 decision to not exercise the option of one of its investigative fieldwork contracts—which led to a loss in capacity and an increase in the program's contract costs—and difficulties attracting and retaining investigative resources as two main challenges to timeliness. NBIB officials stated that they are taking steps to address the backlog for background investigations, including periodic reinvestigations.
These steps include hiring additional federal and contract investigators, implementing a number of workload management initiatives, and conducting a business process reengineering review to identify potential process efficiencies. Additionally, executive branch agencies noted the increased requirements stemming from the 2012 Federal Investigative Standards, such as continuous evaluation and more frequent periodic reinvestigations for certain clearance holders, as additional challenges to meeting timeliness goals. In 2008, the Joint Security and Suitability Reform Team issued Security and Suitability Process Reform, a report to the President that, among other things, includes OMB-issued interim government-wide processing goals for security clearances for calendar year 2008. The calendar year 2008 government-wide goal for the fastest 90 percent of periodic reinvestigations is the same as the goal currently in place: 195 days to complete the end-to-end processing of the periodic reinvestigation. The report states that OMB issued the interim goal to assist agencies in projecting workload and resource requirements. However, the timeliness goals on which the PAC currently reports for periodic reinvestigations are the same as those identified by OMB as interim goals for calendar year 2008. Unlike initial investigations, for which timeliness objectives are established by statute, the 195-day goal for the end-to-end timeliness of periodic reinvestigations was an interim goal set by OMB for calendar year 2008. The 2008 report to the President does not detail how the goals were developed or what data, if any, were used to establish them. ODNI officials initially stated that they did not know how the 195-day goal was developed or where it was documented, and did not know whether subsequent, finalized goals were ever established, but they later provided a copy of the 2008 report. A senior NBIB official stated that OMB’s interim calendar year 2008 timeliness goals were developed based on the average timeliness of the fastest 90 percent of periodic reinvestigations at that time. Since the establishment of OMB’s interim goals, the executive branch has measured periodic reinvestigation timeliness against those goals, and it has not conducted an evidence-based review to ensure that 195 days— and the associated goals of the different phases of periodic reinvestigations—are realistic goals for periodic reinvestigations. Standards for Internal Control in the Federal Government states that management evaluates and, if necessary, revises defined objectives so that they are consistent with requirements and expectations. Without conducting an evidence-based review of the goals, the executive branch will continue to compare the timeliness of its periodic reinvestigations against goals that it established almost a decade ago and that may no longer be appropriate. Further, without ensuring that 195 days, along with the associated goals of the different phases of periodic reinvestigations, are appropriate goals, agencies may not be adequately planning for the amount of time and resources actually required to conduct periodic reinvestigations, and, as a result, they may experience further timeliness delays. Moreover, if an agency does not plan for sufficient time to conduct periodic reinvestigations, it may allow individuals to retain access to sensitive documents when it has not yet confirmed those individuals’ continued eligibility, which could have potential repercussions for national security. 
Potential Effects of Continuous Evaluation on Periodic Reinvestigations Are Unknown

The potential effects of continuous evaluation on periodic reinvestigations, such as possible changes to their frequency or scope, remain unknown. In addition, the executive branch's plans for replacing periodic reinvestigations with continuous evaluation have evolved over time. For example, the 2008 Security and Suitability Process Reform report to the President outlined plans to replace the periodic reinvestigation model with continuous evaluation, conducting continuous evaluation annually or at least once every 5 years, depending on an individual's security clearance level. The report identified a June 2009 milestone to develop an implementation plan to transition from periodic reinvestigations to continuous evaluation, and as previously discussed, an estimated operational date of the fourth quarter of fiscal year 2010 (see figure 1). The purpose of the change was to reveal security-relevant information earlier and to provide increased scrutiny of populations that could potentially represent risk to the government because they already have access to classified information. However, ODNI documentation states that continuous evaluation supplements and enhances, but does not replace, established personnel security processes. Executive branch agencies have expressed varying views about potential changes to the periodic reinvestigation model. For example, DOD officials stated that, given workload and funding issues, they see no alternative but to replace periodic reinvestigations for certain clearance holders with continuous evaluation, as the record checks conducted are the same for both processes. In addition, DOD officials stated that they believe continuous evaluation will not only result in the more timely identification of security-relevant information, but will also help to change individuals' behaviors—for example, that individuals will be more likely to self-report such information once they are enrolled in the program. DOD officials also noted that if changes are not made to the periodic reinvestigation process, the investigation backlog will persist, because continuous evaluation alerts will continue to add to the investigative workload. In addition, in September 2016, PERSEREC issued a report on a study it conducted on the effectiveness, timeliness, and cost of various automated record checks-based investigative strategies as compared with traditional periodic reinvestigations. The analysis found that some of the automated record checks strategies were effective, improved the timeliness of issue detection, and lowered costs. However, DOD officials noted that because ODNI is the Security Executive Agent, it must approve the change to the investigative process. These officials stated that they hope to influence this change by demonstrating the effectiveness of continuous evaluation at DOD. Additionally, NBIB officials stated that continuous evaluation will increase their workload and costs, since it is an additional layer to the personnel security clearance process. Accordingly, they hope that ODNI will identify efficiencies that can be made to the process. Further, PAC Program Management Office officials stated that there may be changes to the periodic reinvestigation model in the future, but that any changes to the model will be determined by data and will be made under the authority of ODNI and OPM as the Security Executive Agent and the Suitability Executive Agent, respectively.
Other agencies, such as State, do not share DOD’s view. For example, State officials stated that although a reduction in costs would result from replacing periodic reinvestigations with continuous evaluation, they have concerns that relevant information, such as state and local law enforcement records that are not yet automated, would be missed if they did not conduct periodic reinvestigations. Similarly, officials from the Department of Justice and the Department of Homeland Security stated that they do not intend to replace periodic reinvestigations, and that continuous evaluation is to be a supplement to the personnel security clearance process. However, officials from all three of these agencies stated that it may be possible to change the frequency or scope of periodic reinvestigations at some point in the future. ODNI officials stated that, at this time, they have no intention of replacing periodic reinvestigations with continuous evaluation, and that the Security Executive Agent Directive for continuous evaluation, once issued, will clarify that continuous evaluation is intended to supplement and not replace periodic reinvestigations. In May 2017, ODNI officials stated that ODNI is not opposed to further improving the security clearance process, and that once continuous evaluation is operational, it plans to determine the efficiencies and mitigation of risks associated with the approach. Specifically, these officials stated that once continuous evaluation is further implemented and ODNI has gathered sufficient data—which they estimated would take about a year from May 2017—they can perform analysis and research to determine whether any changes are needed to the periodic reinvestigation model. While executive branch agencies have different views about potential changes to the periodic reinvestigation process, officials from five of the seven executive branch agencies we spoke with identified the potential expenditure of increased resources, such as workload and costs, as a risk associated with the implementation of continuous evaluation. Specifically, all five agencies stated that continuous evaluation will increase their workloads—and therefore costs—if no other changes are made to the personnel security process. For example, DOD officials noted that adjudicator workloads will increase as new investigative leads—identified through continuous evaluation—require adjudication. Senior DOD officials stated that DOD cannot afford to conduct both continuous evaluation and periodic reinvestigations. Specifically, DOD estimates that implementing the 2012 Federal Investigative Standards requirement to conduct more frequent periodic reinvestigations for certain clearance holders will cost approximately $1.8 billion for fiscal years 2018 through 2022. In addition, State officials stated that they anticipate that continuous evaluation will increase their personnel security workload because alerts will have to be validated, and potentially investigated, and then adjudicated. Standards for Internal Control in the Federal Government states that management should identify, analyze, and respond to risks related to achieving defined objectives. Risk assessment is the identification and analysis of risks related to achieving defined objectives to form a basis for designing risk responses. In addition, the PMBOK® Guide states that entities should perform a quantitative risk analysis to numerically analyze the effect of identified risks on overall project objectives. 
The key benefit of this process is that it produces quantitative risk information to support decision-making in order to reduce project uncertainty. Although executive branch agencies have identified increased resources as a risk associated with implementing continuous evaluation, and ODNI has acknowledged that risk, ODNI, in coordination with the PAC, has not assessed the potential effects of continuous evaluation on an agency's resources. Further, ODNI has not developed a plan, in consultation with implementing agencies, to address such effects, to include modifying the scope or frequency of periodic reinvestigations or replacing periodic reinvestigations for certain clearance holders. While ODNI is implementing continuous evaluation in a phased approach, having a plan in place to address the increased workload once continuous evaluation is fully implemented is critical to ensuring the sustainability and effectiveness of executive branch agencies' personnel security programs. Further, without assessing the potential impacts on agency resources and developing a plan to address them—once ODNI has further defined the program—implementing continuous evaluation could further increase the periodic reinvestigation backlog and agency costs. With delays in determining continued eligibility, executive branch agencies are assuming greater risk, which runs counter to the purpose of continuous evaluation.

Conclusions

Continuous evaluation has been a key and long-standing initiative of security clearance reform efforts, intended to assist agencies in the timely identification of security-relevant information that may affect an individual's continued eligibility for access to classified information. However, ODNI has not demonstrated the leadership necessary to make continuous evaluation a priority. Accordingly, the program's implementation has been delayed for almost 7 years. Although ODNI has taken an initial step to implement it in a phased approach, it has not yet formalized the program in policy or provided an expanded definition of continuous evaluation to implementing agencies. In addition, ODNI has not yet determined key aspects of the program, including future phases of implementation and agency requirements. Key executive branch agencies have deemed information about the future phases necessary to plan for the implementation of continuous evaluation and to estimate potential costs. The absence of this information has limited their ability to prepare for the next phases of implementation. This could further delay the full implementation of continuous evaluation executive branch-wide and result in inconsistencies among agencies' approaches. Specifically, in the absence of ODNI policy and comprehensive guidance, DOD and State continue to develop their current continuous evaluation programs. The ultimate effects of such inconsistencies could negatively affect reciprocity—another key government-wide security clearance reform effort. Although ODNI is to have oversight of continuous evaluation, it has not incorporated it into its oversight program or developed a plan to ensure that agencies implement it.
Without a Security Executive Agent Directive for continuous evaluation that provides an expanded definition of continuous evaluation and relevant terms to help ensure consistent use; a plan for implementing continuous evaluation across the executive branch that includes future phases of implementation and expectations for agencies; and a plan for monitoring program performance throughout the implementation process, as well as performance measures by which to track and report progress, ODNI is not well-positioned to ensure the success and effectiveness of the continuous evaluation initiative. Further, ODNI does not know whether it is meeting the critical purpose of filling the information gap between investigative cycles to identify risks to national security. Executive branch timeliness in completing periodic reinvestigations has declined over the past five years. Further, the executive branch does not know whether the timeliness goals—set nearly a decade ago—are still relevant and appropriate, given changes to the personnel security clearance process. Without conducting an evidence-based review to ensure that goals for the timely completion of periodic reinvestigations are appropriate, executive branch agencies may not be planning sufficient time and resources to complete periodic reinvestigations and therefore may be challenged to ensure the continued eligibility of the entire national security workforce. Finally, executive branch agencies have identified increased resources, such as workload and costs, as a challenge to implementing continuous evaluation. However, the executive branch has not determined the potential effects of continuous evaluation on periodic reinvestigations, and agencies have varying views about what, if any, additional changes should be made to the personnel security clearance process. Without an assessment of the potential effects of continuous evaluation and a plan to address those effects—once ODNI has further defined the program—agencies may not be able to effectively integrate continuous evaluation into their personnel security clearance programs, which in turn could lead to further delays in the clearance process.

Recommendations for Executive Action

We are making the following six recommendations to ODNI:

The Director of National Intelligence should issue a Security Executive Agent Directive for continuous evaluation to formalize the program, which includes, among other things, an expanded definition of continuous evaluation in advance of the next phase of implementation. (Recommendation 1)

The Director of National Intelligence should, in coordination with the Continuous Evaluation Working Group, develop an implementation plan for continuous evaluation across the executive branch that includes a schedule with timeframes and expectations for agencies, such as the requirements (e.g., the size of the enrolled population in continuous evaluation) for future phases of implementation. (Recommendation 2)

The Director of National Intelligence should develop a plan for monitoring continuous evaluation performance, to include assessing continuous evaluation at various phases of implementation. (Recommendation 3)

The Director of National Intelligence should develop performance measures for continuous evaluation that agencies must track and determine a process and schedule for agencies to regularly report those measures to ODNI. At minimum, these performance measures should be clear, quantifiable, objective, and linked to measurable goals.
(Recommendation 4)

The Director of National Intelligence should, in coordination with the Deputy Director for Management of the Office of Management and Budget in the capacity as Chair of the Security, Suitability, and Credentialing Performance Accountability Council, conduct an evidence-based review of the timeliness goal of 195 days for completing the fastest 90 percent of periodic reinvestigations and the associated goals for the different phases of periodic reinvestigations, and adjust the goal if appropriate, taking into consideration available resources, the additional workload of continuous evaluation, and the risks associated with individuals retaining access to classified information without determining their continued eligibility. (Recommendation 5)

The Director of National Intelligence should, once ODNI has further defined the continuous evaluation program, to include issuing a Security Executive Agent Directive and developing an implementation plan, in coordination with the Deputy Director for Management of the Office of Management and Budget in the capacity as Chair of the Security, Suitability, and Credentialing Performance Accountability Council, assess the potential effects of continuous evaluation on agency resources and develop a plan, in consultation with implementing agencies, to address those effects, such as modifying the scope of periodic reinvestigations, changing the frequency of periodic reinvestigations, or replacing periodic reinvestigations for certain clearance holders. (Recommendation 6)

Agency Comments and Our Evaluation

We provided a draft of this report to ODNI, DOD, OMB, State, NBIB, the Department of Justice, and the Department of Homeland Security for review and comment. Written comments from ODNI are reprinted in their entirety in appendix I. DOD, OMB, NBIB, and the Department of Homeland Security did not provide comments. ODNI, State, and the Department of Justice provided technical comments, which we incorporated in the report as appropriate.

In its written comments, ODNI stated that it generally concurred, with comments, with our six recommendations. However, ODNI stated that it did not concur with aspects of our overall conclusions and provided observations in four specific areas. We continue to believe that our conclusions are valid, as discussed below.

First, ODNI disagreed with our conclusion that it has not demonstrated the leadership necessary to make continuous evaluation a priority. ODNI noted that it has taken recent actions to better prioritize the implementation of continuous evaluation. While these recent steps are positive and may help position ODNI for success, historically ODNI has not demonstrated the leadership necessary to make the implementation of a continuous evaluation program a priority. Specifically, while ODNI refers to continuous evaluation as a new initiative, the original milestone for implementing the program was the fourth quarter of fiscal year 2010, which was not attained. Since then, as discussed in the report, a number of revised milestones for implementing the program have been missed. For example, the PAC, of which ODNI is a principal member, subsequently set a milestone for developing an initial continuous evaluation capability for other clearance holders by September 2014—which was extended to December 2014—and a milestone for implementing the capability for other clearance holders by December 2016. These milestones were also missed.
As of August 2017, continuous evaluation has not yet been fully implemented, and ODNI has not set a milestone for when full implementation would occur. As such, we recommended specific actions that are needed to better position ODNI for success, including issuing a Security Executive Agent Directive for continuous evaluation, developing plans for implementing the program and monitoring its performance, and developing performance measures. Second, ODNI disagreed with our conclusion that it has not yet determined key aspects of the continuous evaluation program, including future phases of implementation and agency requirements. ODNI stated that the Security Executive Agent Directive for continuous evaluation is undergoing interagency coordination and that it has provided executive branch agencies with interim guidance until that process is completed, which we acknowledge in the report. While ODNI has provided interim guidance for continuous evaluation, it only details the requirements for fiscal year 2017 and not for the future phases of implementation. In August 2017, after receiving a draft of our report, ODNI officials stated that they planned to provide additional guidance to agencies clarifying that the requirements for fiscal year 2018 will be the same as those for fiscal year 2017. While this correspondence, once issued, will help agencies with their immediate program planning, ODNI officials stated that they have not yet determined the requirements for fiscal year 2019 or beyond, which limits agencies’ abilities to plan beyond the next fiscal year for the future phases of implementation. Additionally, ODNI stated that the technical development milestones of the Continuous Evaluation System it is developing are well-established, tracked, and shared with stakeholders. As discussed in the report, according to ODNI officials, they have established technical milestones for the development of ODNI’s Continuous Evaluation System. While this is an important step in implementing the program, ODNI has not developed similar programmatic milestones for the overall implementation of the program, such as when future phases of implementation will occur, to include full implementation. As discussed in the report, this has limited the ability of executive branch agencies to plan for implementation in accordance with ODNI’s phased approach. As a result, full implementation—which has been delayed for almost 7 years—may be further delayed. Third, ODNI did not agree with our conclusion that although it is to have oversight of continuous evaluation, it has not incorporated it into its oversight program or developed a plan to ensure agencies implement it. In its response, ODNI identified its intention to take certain actions and future mechanisms that could position it to monitor continuous evaluation. Specifically, ODNI stated that continuous evaluation metrics will be collected and analyzed when the initial phase of continuous evaluation implementation ends on September 30, 2017. Additionally, ODNI stated that it will leverage a pending OMB budget data request and that its Security Executive Agent National Assessments Program will be responsible for analysis and oversight of agency implementation and operation of continuous evaluation. However, as we note in the report, ODNI has not developed and distributed plans to monitor or assess the performance of continuous evaluation across the executive branch, including for the first phase of implementation. 
As we note in our report, ODNI officials stated that ODNI did not oversee the pilots that were conducted by DOD and State, as they were performed at the discretion of those agencies. State officials noted that while they have shared lessons learned on their continuous evaluation pilot, they were not tasked to do so. While ODNI stated in its written comments that it has specific expertise in researching, measuring, analyzing, and monitoring personnel security performance across the executive branch, it has not yet demonstrated these actions with regard to continuous evaluation. For example, DOD—the executive branch agency with the majority of security clearance holders—has conducted research on continuous evaluation since 2001, piloted its program since October 2014, and plans to increase the number of personnel enrolled in the program to 1 million by the end of calendar year 2017. However, ODNI, in the capacity as the Security Executive Agent, has not overseen DOD’s pilot. Moreover, as discussed in the report, as of August 2017—10 months into fiscal year 2017—ODNI has not yet developed and distributed to executive branch agencies continuous evaluation performance measures. At the end of our review, in August 2017, ODNI officials stated that they have developed a draft list of metrics for continuous evaluation for fiscal year 2017 and that once the metrics are finalized, they will issue guidance to executive branch agencies requesting them to report these metrics to ODNI. While metrics can help to establish a baseline and inform aspects of a program’s status—and ODNI’s development of draft metrics is a positive step—performance measures are linked to a goal and inform how well an agency is doing against that goal. As ODNI has not developed and distributed performance measures that are clear, quantifiable, and objective, and that are linked to measurable goals prior to initiating, or earlier in the first phase of implementation, executive branch agencies may not be positioned to collect and report these metrics at the end of the fiscal year. Additionally, as discussed in the report, according to ODNI officials, while they would like to incorporate continuous evaluation into their Security Executive Agent National Assessments Program, it is not currently part of the program. While ODNI has identified steps that could position it to monitor continuous evaluation in the future, it has not yet implemented mechanisms to monitor and measure program performance. Fourth, ODNI did not agree with our conclusions that it is not well- positioned to ensure the success and effectiveness of the continuous evaluation initiative, and that it does not know if it is meeting the critical purpose of filling the information gap between investigative cycles to identify risks to national security. However, in its written comments, ODNI stated that successful implementation of continuous evaluation across the executive branch requires formal Security Executive Agent policy guidance, implementation and technical guidance and milestones, performance measures, and a monitoring program, which we recommended in the report. ODNI states that it is well-postured to achieve these goals, and refers to its intention to apply Security Executive Agent National Assessments Program best practices as a mechanism to use to monitor and ensure compliance. 
Although this action could be a step in better positioning ODNI as continuous evaluation implementation further proceeds, as noted above and in our report, ODNI has not yet finalized, distributed, and implemented these and other actions to ensure that it is currently positioned for success, even while it has initiated the first phase of continuous evaluation implementation. As noted in our report, although ODNI has taken steps to implement continuous evaluation in a phased approach, executive branch efforts to implement continuous evaluation have been a long-standing component of overall security clearance reform. The actions ODNI intends to take as it further implements continuous evaluation, as well as the mechanisms it identified, may better position it and the implementing agencies for success. However, given the challenges that the executive branch has faced in implementing continuous evaluation thus far and the continued delays, without a fully defined program in place, we believe that our conclusions remain valid. Finally, in its written comments, ODNI suggested a revision to our sixth recommendation. Specifically, ODNI suggested adding an explicit timeframe for completing the action. We believe that ODNI is best positioned to set an appropriate timeframe for completion based on its familiarity with the progress of the program and, as such, did not incorporate this change in our report. We agree with ODNI that establishing such a timeframe is a positive step. We are sending copies of this report to the appropriate congressional committees, the Director of National Intelligence, the Secretary of Defense, the Director of OMB, the Secretary of State, the Secretary of Homeland Security, the Director of OPM, the Director of NBIB, the Attorney General, the Director of the Federal Bureau of Investigation, and the Director of the Bureau of Alcohol, Tobacco, Firearms, and Explosives. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or members of your staff have any questions regarding this report, please contact me at (202) 512-3604 or farrellb@gao.gov. GAO staff who made significant contributions to this report are listed in appendix II. Appendix I: Comments from the Office of the Director of National Intelligence Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Kimberly C. Seay (Assistant Director), Chris Businsky, Molly Callaghan, Jenny Chanley, Dawn Godfrey, Saida Hussain, James Krustapentus, Michael Shaughnessy, Rachel R. Stoiko, John Van Schaik, Cheryl Weissman, and Jina Yu made significant contributions to this report. Related GAO Products High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017. Personnel Security Clearances: Funding Estimates and Government-Wide Metrics Are Needed to Implement Long-Standing Reform Efforts. GAO-15-179SU. Washington, D.C.: April 23, 2015. Personnel Security Clearances: Additional Guidance and Oversight Needed at DHS and DOD to Ensure Consistent Application of Revocation Process. GAO-14-640. Washington, D.C.: September 8, 2014. Personnel Security Clearances: Actions Needed to Ensure Quality of Background Investigations and Resulting Decisions. GAO-14-138T. Washington, D.C.: February 11, 2014. Personnel Security Clearances: Opportunities Exist to Improve Quality Throughout the Process. GAO-14-186T.
Washington, D.C.: November 13, 2013. Personnel Security Clearances: Full Development and Implementation of Metrics Needed to Measure Quality of Process. GAO-14-157T. Washington, D.C.: October 31, 2013. Personnel Security Clearances: Further Actions Needed to Improve the Process and Realize Efficiencies. GAO-13-728T. Washington, D.C.: June 20, 2013. Managing for Results: Agencies Should More Fully Develop Priority Goals under the GPRA Modernization Act. GAO-13-174. Washington, D.C.: April 19, 2013. Security Clearances: Agencies Need Clearly Defined Policy for Determining Civilian Position Requirements. GAO-12-800. Washington, D.C.: July 12, 2012. Personnel Security Clearances: Continuing Leadership and Attention Can Enhance Momentum Gained from Reform Effort. GAO-12-815T. Washington, D.C.: June 21, 2012. 2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012. Background Investigations: Office of Personnel Management Needs to Improve Transparency of Its Pricing and Seek Cost Savings. GAO-12-197. Washington, D.C.: February 28, 2012. GAO’s 2011 High-Risk Series: An Update. GAO-11-394T. Washington, D.C.: February 17, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011. Personnel Security Clearances: Overall Progress Has Been Made to Reform the Governmentwide Security Clearance Process. GAO-11-232T. Washington, D.C.: December 1, 2010. Personnel Security Clearances: Progress Has Been Made to Improve Timeliness but Continued Oversight Is Needed to Sustain Momentum. GAO-11-65. Washington, D.C.: November 19, 2010. DOD Personnel Clearances: Preliminary Observations on DOD’s Progress on Addressing Timeliness and Quality Issues. GAO-11-185T. Washington, D.C.: November 16, 2010. Personnel Security Clearances: An Outcome-Focused Strategy and Comprehensive Reporting of Timeliness and Quality Would Provide Greater Visibility over the Clearance Process. GAO-10-117T. Washington, D.C.: October 1, 2009. Personnel Security Clearances: Progress Has Been Made to Reduce Delays but Further Actions Are Needed to Enhance Quality and Sustain Reform Efforts. GAO-09-684T. Washington, D.C.: September 15, 2009. Personnel Security Clearances: An Outcome-Focused Strategy Is Needed to Guide Implementation of the Reformed Clearance Process. GAO-09-488. Washington, D.C.: May 19, 2009. DOD Personnel Clearances: Comprehensive Timeliness Reporting, Complete Clearance Documentation, and Quality Measures Are Needed to Further Improve the Clearance Process. GAO-09-400. Washington, D.C.: May 19, 2009. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009. Personnel Security Clearances: Preliminary Observations on Joint Reform Efforts to Improve the Governmentwide Clearance Eligibility Process. GAO-08-1050T. Washington, D.C.: July 30, 2008. Personnel Clearances: Key Factors for Reforming the Security Clearance Process. GAO-08-776T. Washington, D.C.: May 22, 2008. Employee Security: Implementation of Identification Cards and DOD’s Personnel Security Clearance Program Need Improvement. GAO-08-551T. Washington, D.C.: April 9, 2008. Personnel Clearances: Key Factors to Consider in Efforts to Reform Security Clearance Processes. GAO-08-352T. Washington, D.C.: February 27, 2008. DOD Personnel Clearances: DOD Faces Multiple Challenges in Its Efforts to Improve Clearance Processes for Industry Personnel. GAO-08-470T. Washington, D.C.: February 13, 2008. 
DOD Personnel Clearances: Improved Annual Reporting Would Enable More Informed Congressional Oversight. GAO-08-350. Washington, D.C.: February 13, 2008. DOD Personnel Clearances: Delays and Inadequate Documentation Found for Industry Personnel. GAO-07-842T. Washington, D.C.: May 17, 2007. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. DOD Personnel Clearances: Additional OMB Actions Are Needed to Improve the Security Clearance Process. GAO-06-1070. Washington, D.C.: September 28, 2006. DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO’s High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005.
Why GAO Did This Study Continuous evaluation is a key executive branch initiative to more frequently identify and assess security-relevant information, such as criminal activity. Implementing a continuous evaluation program has been a long-standing goal, with implementation milestones as early as 2010 and DOD pilots dating back to the early 2000s. GAO was asked to review efforts to implement continuous evaluation. This report assesses the extent to which (1) ODNI has implemented an executive branch-wide program and developed plans to monitor and measure its performance; (2) DOD and other agencies have designed, piloted, and evaluated continuous evaluation; and (3) agencies completed timely periodic reinvestigations for fiscal years 2012 through 2016, and the potential effects of continuous evaluation on reinvestigations. GAO reviewed documentation, analyzed timeliness data, and interviewed officials from ODNI and other agencies. This is a public version of a sensitive report that is being issued concurrently. Information that ODNI and State deemed sensitive has been omitted. What GAO Found In October 2016, the Office of the Director of National Intelligence (ODNI) took an initial step to implement continuous evaluation—a process to review the background of clearance holders and individuals in sensitive positions at any time during the eligibility period—across the executive branch, but it has not yet determined key aspects of the program, and it lacks plans for implementing, monitoring, and measuring program performance. For the first phase, agencies are to conduct certain continuous evaluation record checks against a portion of their national security population by the end of fiscal year 2017. However, ODNI has not formalized its policy on what continuous evaluation encompasses, determined what the future phases will entail or when they will occur, or developed an implementation plan. According to all seven agencies GAO interviewed, this uncertainty has affected their ability to plan for the program and estimate its costs. Without a continuous evaluation policy and a fully developed plan, full implementation—which has been delayed since 2010—may be further delayed. Moreover, ODNI lacks a plan to monitor and measure program performance, including for the first phase, which is underway. Without developing such a plan, ODNI cannot ensure that the program is being implemented consistently across the executive branch or that it is effectively identifying risks to national security. The Department of Defense (DOD) and the Department of State (State) have designed, piloted, and evaluated continuous evaluation. Their approaches have varied in scope, size, and duration, as they pre-date ODNI's efforts to implement continuous evaluation executive branch-wide. DOD's pilot involves the most record checks and the largest population. DOD had 500,000 employees enrolled in December 2016, and it plans to enroll 1 million by the end of calendar year 2017 and all clearance holders by the end of fiscal year 2021. The percentage of executive branch agencies meeting established timeliness goals for completing periodic reinvestigations decreased from fiscal years 2012 through 2016, and the potential effects of continuous evaluation, including on reinvestigations and resources, are unknown. While 84 percent of the executive branch agencies reviewed by GAO reported meeting the executive branch's 195-day timeliness goal for at least three of four quarters in fiscal year 2012, only 22 percent did so in fiscal year 2016.
Also, a 2008 report outlined a plan to replace reinvestigations with continuous evaluation, but ODNI documentation indicates that this is no longer the intent. While agencies expressed varying views about changes to reinvestigations—such as modifying their scope—officials from five agencies stated that the continuous evaluation program will increase their workloads and costs if no other changes are made to the requirements. DOD officials said they cannot afford to conduct both continuous evaluation and reinvestigations, as DOD estimates that more frequent reinvestigations for certain clearance holders will cost $1.8 billion for fiscal years 2018 through 2022. Although agencies have identified increased resources as a risk of the program, ODNI has not assessed the program's potential effects on agency resources. Without assessing the potential effects once ODNI has further defined the program, implementing continuous evaluation could lead to further delays and backlogs in reinvestigations, and could increase agency costs. What GAO Recommends GAO is making six recommendations, including that ODNI formalize its policy on continuous evaluation, develop an implementation plan as well as a plan to monitor and measure program performance, and assess the potential effects of continuous evaluation on agency resources. ODNI concurred with the recommendations, but disagreed with aspects of GAO's conclusions. GAO continues to believe the conclusions are valid, as discussed in the report.
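To make the timeliness measure cited above more concrete, the following sketch shows one way to check whether the fastest 90 percent of a set of periodic reinvestigations were completed within the 195-day goal. The function name and duration values are hypothetical illustrations, not GAO's or any agency's actual code or data.

```python
def fastest_90_percent_met_goal(durations_in_days, goal_days=195, fraction=0.90):
    """Return True if the fastest `fraction` of reinvestigations finished within `goal_days`.

    Hypothetical illustration: sorts completion times, keeps the fastest 90 percent,
    and compares the slowest case in that subset against the goal.
    """
    if not durations_in_days:
        raise ValueError("no reinvestigation durations provided")
    ordered = sorted(durations_in_days)
    subset_size = max(1, int(len(ordered) * fraction))  # size of the "fastest 90 percent" subset
    return ordered[:subset_size][-1] <= goal_days

# Hypothetical durations, in days, for ten completed periodic reinvestigations.
sample_durations = [120, 150, 160, 175, 180, 185, 190, 200, 230, 400]
print(fastest_90_percent_met_goal(sample_durations))  # False: the ninth-fastest case took 230 days
```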
Background Mental health disorders affect millions of adults and children in the United States and can range in severity. In 2016, an estimated 4.2 percent of the adult population—more than 10.4 million individuals—were considered to have a serious mental illness based on federal survey data. Individuals with mental illness may reside and receive care in a variety of settings, including inpatient institutional settings, such as public or private hospitals, other residential treatment facilities, or community-based settings. When originally established under the PAIMI Act, state PAIMI programs were required to investigate reports of potential abuse and neglect of individuals with significant mental illness residing in institutional facilities and to protect and advocate the rights of these individuals. Examples of institutional facilities covered under the PAIMI Act include hospitals, nursing homes, and correctional facilities. In 2000, the PAIMI Act was amended to allow certain PAIMI programs to also assist eligible individuals who live in community settings, including their own homes, although programs must still prioritize services for eligible individuals residing in institutional settings. For example, state PAIMI programs assist individuals with abuse, neglect, and rights violation cases in school settings. Key State PAIMI Program Requirements and Activities State PAIMI programs are administered by either state agencies or non-profit organizations that have been designated by the governor of each state to operate a protection and advocacy system. The state PAIMI programs are allotted federal grants through a formula that is based equally on (1) the population in each state, and (2) the population in each state weighted by its relative per capita income. In 2016, state PAIMI program grants ranged from $229,300 to $3,133,536. (See appendix I for allotment by program.) To receive a PAIMI grant, each protection and advocacy organization must submit an annual application, and the PAIMI programs they operate must meet applicable statutory and regulatory requirements. (See table 1.) Approved state PAIMI programs use their grants to protect and advocate for individual clients, such as by investigating specific complaints. They may also conduct broader system-level protection and advocacy activities, such as facility monitoring, intended to benefit larger groups of individuals with significant mental illness. These systemic activities, as we refer to them in this report, include efforts to drive changes in policies and practices of the state's mental health agency, treatment facilities, and other systems, such as school systems, that impact people with significant mental illness. Each state PAIMI program, with input from the advisory council and governing authority, sets priority goals and short-term, measurable objectives and targets annually as performance benchmarks for the work it plans to conduct. Programs can also revise these benchmarks during the year to align with changing needs. For example, the types of individual cases programs accept and work on may depend on the types of complaints that are received, which may vary over time. SAMHSA Oversight of State PAIMI Programs SAMHSA administers the PAIMI grants and is responsible for oversight and monitoring of the state PAIMI programs.
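As a rough illustration of the two-part allotment formula described above, which gives equal weight to each state's population and to its population weighted by relative per capita income, the sketch below splits a hypothetical appropriation across three states. The populations, income weights, and the way a weight is derived from relative per capita income are assumptions made for illustration; the statutory formula's actual terms, such as minimum allotments, are not reproduced here.

```python
def allot_paimi_grants(total_funds, states):
    """Split total_funds with a simplified two-part formula: half in proportion to state
    population and half in proportion to population multiplied by an income weight.

    Illustration only; `states` maps name -> (population, income_weight), and the real
    statutory formula (including minimum allotments and how relative per capita income
    becomes a weight) is not reproduced here.
    """
    population_total = sum(pop for pop, _ in states.values())
    weighted_total = sum(pop * weight for pop, weight in states.values())
    allotments = {}
    for name, (pop, weight) in states.items():
        share = 0.5 * (pop / population_total) + 0.5 * (pop * weight / weighted_total)
        allotments[name] = round(total_funds * share, 2)
    return allotments

# Hypothetical appropriation, populations, and income weights, chosen only for illustration.
print(allot_paimi_grants(
    1_000_000,
    {"State A": (2_000_000, 1.2), "State B": (1_000_000, 0.8), "State C": (500_000, 1.0)},
))
```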
To oversee the state PAIMI programs, SAMHSA conducts both ongoing reviews of the annual application and performance information submitted by the programs, and periodic, in-depth reviews: Ongoing monitoring activities. PAIMI grant applications are effective for 4-year periods, but programs submit additional grant applications annually to update certain information, such as the program budget and goals. SAMHSA awards PAIMI grants based on criteria such as whether the grantee submitted a statement of annual program priorities, including quantifiable targets and measurable outcomes. In addition to the application, programs must submit key data annually in a program performance report. The performance report must describe a program’s individual and systemic activities, accomplishments, and expenditures during the most recent fiscal year and must include a section prepared by the advisory council. The performance report requires programs to report on both standard measures required of all programs and on progress towards the program-specific priority goals, objectives, and targets. SAMHSA reviews information submitted by the programs annually through grants applications and performance reports, including completing a review checklist and following up with programs with questions. Periodic monitoring. SAMHSA conducts four to five onsite monitoring reviews of state PAIMI programs each year, which officials told us means a given program would be reviewed approximately every 10 years. Programs are reviewed on a rotating basis, but some may be reviewed more frequently if concerns have been identified, according to officials. The onsite monitoring process, which includes an onsite visit and review of program documentation, is intended to monitor program compliance and provide guidance on improving program effectiveness. SAMHSA has procedures for the scope and time frame of the reviews. Selected State PAIMI Programs Reported Achievements in Ending and Preventing Abuse, Neglect, and Rights Violations of Those with Significant Mental Illness The eight selected state PAIMI programs reported favorably resolving a majority of individuals’ cases related to alleged abuse, neglect, or rights violations. In addition, these selected programs reported concluding a variety of systemic activities, with a significant focus on monitoring and addressing issues of abuse or neglect at facilities. Through their work with individuals and completion of systemic activities, the selected programs reported meeting a majority of their priority goals and objectives. Outcomes of Individual Cases Selected programs reported favorably resolving about 74 percent of individual cases related to alleged abuse, neglect, or rights violations in fiscal year 2016, on average (see table 3). The remaining 26 percent of cases were reported as withdrawn by the client, closed due to lack of merit, or were not resolved in the individual’s favor. Across the programs there was variation in the percentage of cases resolved favorably, with two of the selected programs reporting half, or less than half, of their cases resolved favorably, and one program reporting nearly 100 percent of cases closed favorably. SAMHSA officials and NDRN staff cited a number of factors that could contribute to the variation, including complexity of the complaint, variation in the programs’ criteria for accepting cases, program resources, or characteristics of the court or state mental health system. 
For example, SAMHSA officials told us that possible explanations for variation could include a program accepting particularly challenging cases, or a program obtaining additional funding from other nonfederal grants that could provide greater legal staff support in addressing complaints. All eight selected programs reported closing cases in each of the three categories of complaints: abuse, neglect, and rights violations during fiscal year 2016. Five of the eight programs reported that a majority of their cases were related to complaints about rights violations, which occurred in both facility- and community-based settings (see fig. 1). These complaints included denials of legal assistance or privacy rights, employment discrimination, or—the most frequently reported case complaint—failure to provide special education consistent with state requirements. Issues of abuse and neglect of individuals with mental illness were also common. The most frequent complaint reported by the eight selected programs related to neglect was a lack of discharge planning for release from a facility, and for alleged abuse, it was failure to provide appropriate mental health treatment. Program staff reported examples of how state PAIMI programs resolved cases related to abuse, neglect, and rights violations for individuals in institutions and the community: Program staff in California described a rights violation case of a young girl with a mental health disability who was eligible for special education services, but the district placed her in a restricted, segregated school setting where she was restrained multiple times. The program staff negotiated her move to a general education campus with classroom behavior support. The PAIMI program monitored her transition, including ensuring her inclusion in school activities, academic remediation, and social skill development. Program staff in Georgia reported that they were contacted by a woman in a hospital who was overmedicated such that they could not initially understand what she was saying. The staff worked with her hospital treatment team to adjust her medication and the woman became more articulate. In working to address her overmedication, the staff further discovered there were not appropriate discharge plans for her and so they worked to ensure that she was discharged into an appropriate facility. To address individual cases, selected programs reported using a variety of strategies, ranging from administrative actions to legal remedies. Programs reported that the most frequently utilized strategy (used 62 percent of the time in fiscal year 2016) was “short-term assistance”— time-limited advice or counseling, such as assisting a client with preparing a letter or making a phone call to resolve an issue. Selected programs reported using legal remedies about 5 percent of the time in fiscal year 2016. Outcomes of Systemic Activities The eight selected programs conducted a range of systemic activities, and reported successfully concluding a total of 367 of these activities in fiscal year 2016 (see figure 2). Facility monitoring was reported as the most frequent systemic activity in fiscal year 2016, comprising about 71 percent of the total systemic activities concluded by the selected programs. The selected programs described a range of activities involving facility monitoring. For example, California reported that the program had an effort focused on monitoring the conditions at selected county jail systems and juvenile halls. 
As part of that work, the program reported that it released five public reports and worked with counties on policy improvements, such as reducing the use of pepper spray on youth. Another program, Louisiana, reported that staff used to conduct regular monitoring visits to the state's psychiatric hospital and addressed patient complaints that they heard during these visits. However, with limited resources and other emerging urgent issues at other facilities, the program decided to cease the regular monitoring and now conducts as-needed visits to the hospital in response to specific complaints from the patients or staff. In addition to facility monitoring activities, other systemic activities conducted varied across the selected programs, reflecting differences in their resources and priorities. Some systemic activities—such as class action litigation—take significant time and resources to undertake, and program staff may consider various factors before beginning one. For example, program staff from Indiana told us the program filed a lawsuit alleging restrictive housing of prisoners with significant mental illness, which involved 4 years of negotiations. In addition, program staff from Vermont told us that after engaging in successful litigation against hospitals that helped reduce unnecessary force, isolation, and coercion tactics, the program re-prioritized and focused on other issues, such as helping individuals integrate into the community from facilities. However, the program recently noticed an increase in force, isolation, and coercion tactics and predicted another shift in focus to once more address those issues. Performance on Program Priority Goals Through their efforts to resolve individual cases and carry out systemic activities, selected programs reported largely meeting the performance benchmarks—priority goals, objectives, and targets—they determine for themselves. For example, the Georgia program reported that to meet its fiscal year 2016 priority goal of protecting individuals with psychiatric disabilities in Georgia from abuse and neglect, its objective was to investigate and advocate to address allegations of abuse and neglect, including suspicious or unexplained deaths and inappropriate treatment or medication issues for people with psychiatric disabilities. The measurable target for this objective was to conduct 50 such investigations. In its performance report for the fiscal year, the program reported that it had completed 51 investigations of allegations of extensive abuse and neglect during the performance year. Overall, the selected programs reported meeting more than 95 percent of their priority goals in fiscal year 2016. While selected programs varied in their priority goals, all had a goal that focused on protecting individuals from abuse, neglect, and rights violations. (See Appendix II for more information about the types of priority goals set by the selected programs.) When objectives were not met, the programs reported, for instance, focusing on other priorities or that an activity was still ongoing and could not be included as part of their performance for the year. Although the eight selected PAIMI programs reported that they largely met their goals, they also reported several overarching challenges to their efforts to do so, such as limited resources, lack of access authority, or delays in access (e.g., to documents, records, or institutions).
For instance, the selected programs collectively reported that 617 PAIMI-eligible clients were not served within 30 days due to insufficient funding in fiscal year 2016. Additionally, five selected programs reported delays in access to records. For example, Vermont program staff reported delays in receiving records related to the status of prisoner grievances or medical records, and Texas program staff reported delays and use of significant attorney resources to address facilities that challenge their ability to access records or premises. SAMHSA Has Controls in Place to Oversee Program Compliance with PAIMI Requirements, but Oversight of Program Effectiveness Is More Limited SAMHSA Has Controls in Place to Monitor Compliance with Program Requirements SAMHSA has controls in place for monitoring the PAIMI programs' compliance with statutory and regulatory requirements through its ongoing and periodic in-depth monitoring activities. We found evidence that SAMHSA had identified and resolved a variety of compliance issues through these activities. Ongoing Monitoring On an annual basis, SAMHSA monitors compliance with statutory and regulatory program requirements by reviewing information reported by the programs through the application and program performance report. (See table 4.) SAMHSA's project officers review and approve the applications and performance reports submitted by the state PAIMI programs using a checklist developed by the agency that prompts them to record specific information, such as whether there are vacant advisory council seats. Not all areas of compliance are covered by the checklist; however, SAMHSA officials told us that the entire application and performance report are reviewed, and that a project officer's approval signature on a checklist indicates that potential issues observed during a review have been resolved satisfactorily. In our review of fiscal year 2015 and 2016 documentation, we found evidence that the application and performance report review process helped identify and resolve a range of potential compliance issues. For example, SAMHSA followed up with one program in which the advisory council had failed to meet the threshold of 60 percent of its membership being individuals who have received or are receiving mental health services, or are family members of such individuals. Failing to meet this threshold could raise concerns about whether a program is sufficiently engaging individuals and family members affected by mental illness as required by regulation. In this instance, SAMHSA requested a plan of action to recruit and maintain members to meet the threshold, which the program provided along with updated information that it had successfully recruited an additional member, putting the council makeup over the threshold. In another example, SAMHSA followed up with one program that had reported not meeting 3 of 6 objectives and requested a plan of action for reducing the number of unmet objectives. The program subsequently provided information that it had incorrectly categorized some objectives it had met as "not met." (See table 5.) In addition to the annual application and performance report reviews, SAMHSA officials told us that they use monthly conversations with other federal agencies, referred to as federal partners, to help them identify potential compliance issues. These federal partners oversee federal grants for other populations of people with disabilities made to the protection and advocacy systems that administer the PAIMI program.
SAMHSA officials told us that coordination with these federal partners helped identify risks in at least two of our selected programs, Puerto Rico and Oklahoma. For example, one of the federal partners conducted an onsite monitoring visit to Puerto Rico and found several issues with its protection and advocacy system, such as inadequately trained staff and conflicts of interest arising from a lack of independence from the governor’s office. Puerto Rico’s protection and advocacy system failed to develop an adequate corrective action plan to address the federal partner’s findings, leading the federal partner to place the system in restricted—that is, high-risk—status. According to SAMHSA officials, these actions led them to more closely monitor Puerto Rico’s PAIMI program, resulting in the identification of the protection and advocacy system’s failure to comply sufficiently with PAIMI program requirements. For example, SAMHSA found that Puerto Rico’s PAIMI program did not have the capacity to protect and advocate for individuals with mental illness, as required by statute, because they had an insufficient number of attorneys. Furthermore, the federal partner that originally placed Puerto Rico’s protection and advocacy system in restricted status requested that SAMHSA do so as well. As a result, SAMHSA also placed the Puerto Rico PAIMI program in restricted status. Periodic Onsite Monitoring Reviews In addition to its ongoing monitoring, SAMHSA has procedures to oversee state PAIMI program compliance during its periodic onsite monitoring reviews. When SAMHSA conducts an onsite monitoring review, its procedures specify that officials are to interview program staff, governing board members, and advisory council members; as well as review a sample of case record files and other documentation of program activities. The state PAIMI program is also to submit a detailed set of documentation to support the program’s compliance with statutory and regulatory requirements. Agency officials are to review this information and report back to the programs on any compliance issues or recommendations to improve program processes. In our review of fiscal year 2015 and 2016 documentation for the nine onsite monitoring reviews SAMHSA conducted, we found evidence that this process helped identify and resolve a range of potential compliance issues. For example, SAMHSA found that one program’s bylaws could be misinterpreted to permit lobbying for legislation for PAIMI-eligible individuals using PAIMI funding, when federal law prohibits grants programs from using federal funds to engage in such activity. As a result, the program’s governing board reviewed and modified the bylaws to clearly indicate that PAIMI funds are not to be used for lobbying. As another example, SAMHSA found that one program did not have sufficient documentation to support that the advisory council chair was an individual who had received or was receiving mental health services, or a family member of such an individual, as required by regulations. As a result, the program revised its practice to include having the advisory council chair verify in writing that he or she meets the criteria for serving in the position. (See table 6.) SAMHSA Has Not Consistently Examined Changes to Program Benchmarks or Completed and Provided Onsite Review Findings in a Timely Manner We identified two weaknesses that could be limiting SAMHSA’s oversight of program effectiveness. 
First, SAMHSA’s PAIMI program monitoring did not consistently record changes to program priority goals, objectives, and targets—collectively, “benchmarks”—made during a performance year, and the agency did not have procedures for examining such changes over time. Second, the agency did not provide timely information to programs on identified deficiencies from onsite monitoring. As of March 2018, SAMHSA was in the process of implementing new processes for its oversight of state PAIMI programs that officials believe will streamline the agency’s monitoring activities. However, these changes may not fully address the weaknesses we identified. Inconsistent Recording of Changes to Performance Benchmarks and Lack of Procedures for Examining Changes across Years We found that SAMHSA did not always record changes programs made to their performance benchmarks and did not have procedures for examining benchmark changes over time. According to federal internal control standards, an agency should evaluate the results of its monitoring—in this case, the information collected regarding benchmark modifications—to determine program performance. In our review of SAMHSA’s oversight of 10 programs for fiscal years 2015 and 2016, we found that SAMHSA did not consistently record program modifications to performance benchmarks. Specifically, we found that four programs appeared to have modified their performance benchmarks during the year—in some cases upward when results exceeded original targets, and in other cases downward when results were lower than original targets. However, these changes were not recorded by SAMHSA reviewers in the review checklists. For instance, one program revised 17 of its 21 targets to closely match the program’s actual results, but these changes were not recorded in the area of the review checklist that prompts the project officer to note if such changes were made. According to SAMHSA officials, in fiscal year 2017, SAMHSA transitioned from paper forms to a web-based system for submission and review of applications and performance reports. Officials told us that under the new system, programs will be required to consult with SAMHSA officials about and submit modifications to performance benchmarks through the system. The system will record and display both the original priority goals, objectives, and targets as approved at the time of the application, as well as any modifications a program submits throughout the year. The system will also record that information over time, providing the ability to review and track program modifications to benchmarks over multiple years. SAMHSA’s new system should improve recording of benchmark changes, however, SAMHSA lacks procedures for examining such changes across years to assess whether the changes could indicate larger performance issues. SAMHSA officials acknowledged that they did not have specific procedures in place directing project officers to examine changes to performance benchmarks across multiple years, but said that other relevant procedures were in place. For example, officials noted that programs are not able to modify benchmarks without approval by SAMHSA project officers. 
However, without implementing procedures aimed specifically at examining trends in benchmark modifications across years, SAMHSA lacks assurances that its project officers will consistently examine whether a particular program is regularly making changes to benchmarks that may be indicative of a potential performance problem, such as revising its targets downwards over multiple years. Failure to Provide Timely Information on Identified Deficiencies We found that SAMHSA generally failed to meet its timelines for producing and providing onsite monitoring review reports to the state PAIMI programs under review during fiscal years 2015 and 2016. This inability to produce and provide onsite monitoring reports to PAIMI programs in a timely manner is inconsistent with SAMHSA’s internal requirements and with federal internal control standards regarding evaluating issues and remediating deficiencies on a timely basis. Specifically, for onsite monitoring reviews, SAMHSA’s procedures specify the agency is to provide an initial report to the reviewed program within 150 days of the onsite visit. However, for eight of the nine monitoring review reports we reviewed for fiscal years 2015 and 2016, SAMHSA provided the report more than a year after the visit. One program that had just received its report at the time of our review told us that it was difficult to plan the necessary changes to its work without an official report with findings and recommendations to help guide them in restructuring their operations. Program staff said they had moved ahead and made some changes but were uncertain whether those changes would be deemed sufficient because of the lack of feedback from the agency. SAMHSA officials told us that they may have missed some deadlines as a result of competing priorities and restricted resources—for example, recently only two of four PAIMI project officer positions have been occupied. Officials reported that the agency was taking steps to streamline the process to make it more efficient and to bring on more staff resources. The officials said that in 2018 SAMHSA planned to shift responsibility for the project officers’ portion of the onsite reviews to a dedicated onsite monitor, which they hoped would expedite the review process. In addition, the agency had taken steps to streamline its onsite monitoring review process, such as by revising and standardizing its reporting template. There are uncertainties with regard to how effective these changes will be in increasing timeliness. For example, the planned efficiencies target some, but not all, of the key components of the reviews. In particular, SAMHSA officials told us that these review process changes do not pertain to the portion of the onsite review that focuses on state PAIMI program compliance with applicable fiscal requirements. Officials noted that the SAMHSA office that conducts the fiscal portion of the review has had staff shortages for the past 16 months and is not able to operate within normal time frames for completing this portion of the report. Without meeting its deadlines for completing its review and providing timely, detailed information and feedback to PAIMI programs, SAMHSA cannot ensure that identified issues are resolved in a timely manner, thus potentially endangering the effectiveness of the programs. Conclusions Individuals with mental illness can face abuse, neglect, and rights violations in both institutional and community treatment settings, including their own homes. 
The protection and advocacy services provided by state PAIMI programs play an important role in reducing these serious issues for this vulnerable population. Therefore, it is important to monitor how effective the programs are in addressing such issues. SAMHSA has a number of procedures in place to monitor program compliance with statutory and regulatory requirements, which enable the agency to identify and resolve potential issues with program compliance, and it is taking steps to streamline and improve its compliance oversight. At the same time, the agency’s processes for oversight of program effectiveness could be improved, such as by examining trends in mid-performance changes programs make to their priority goals, objectives, and targets across multiple years. Without such monitoring, SAMHSA may not recognize a pattern of changes that signal larger concerns about that program’s effectiveness. Finally, SAMHSA has not been timely in completing its onsite monitoring reviews or providing the results of these reviews to the programs. Although SAMHSA has plans to make reviews more efficient and to add resources, it is unclear to what extent these steps will resolve the lack of timeliness. Recommendations for Executive Action We are making the following two recommendations to SAMHSA: The Assistant Secretary for Mental Health and Substance Use should establish procedures to better ensure that mid-performance changes to program priority goals, objectives, and targets are examined across multiple years. (Recommendation 1) The Assistant Secretary for Mental Health and Substance Use should take steps, including the steps it has planned, to ensure onsite reviews are completed and findings are provided to programs on a timely basis. (Recommendation 2) Agency Comments We provided a draft of this report to HHS for comment. In its written comments, HHS concurred with both of our recommendations and indicated that it will examine ways to implement them. HHS’s comments are reprinted in appendix III. HHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at iritanik@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Protection and Advocacy for Individuals with Mental Illness (PAIMI) Grants by Program, Fiscal Year 2016 Appendix II: Selected State PAIMI Program Priority Goal Categories in Fiscal Year 2016 State Protection and Advocacy for Individuals with Mental Illness (PAIMI) programs determine their priority goals each fiscal year to prioritize the work they hope to accomplish. Our analysis of the priority goals reported in the annual program performance reports by eight selected state PAIMI programs found that all programs had at least one priority goal focused on Protection and Civil Rights in fiscal year 2016 (see fig. 3). Access/Discrimination was the next most frequently set priority goal category—with seven of the eight programs establishing these goals. 
We also reviewed program goal categories from fiscal year 2015 and identified few significant differences between 2015 and 2016. Eight priority goal categories emerged from our analysis: Access/Discrimination: This category refers to issues broadly related to access to services or benefits, and reduction of discrimination, e.g., advocating for access to legal services or elimination of barriers to housing, employment, and education services. Community Integration: This category refers to issues of integrating the individual into community facilities or ensuring they can be independent outside of a facility. Education: This category refers to specific issues related to access or equality in education services. Employment: This category refers to specific issues related to access to employment. Health Care Services: This category refers to specific issues related to access to health care services within the community or state. Housing: This category refers to specific issues related to access to housing. Information/Outreach: This category refers to activities related to distributing publications or performing outreach to individuals. Protection and Civil Rights: This category refers to issues broadly related to rights violations and protection from restraint, seclusion, or other abuse or neglect. Appendix III: Agency Comments from the Department of Health and Human Services Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Susan Barnidge, Assistant Director; Hannah Marston Minter, Analyst-in-Charge; Joanna Wu Gerhardt; and Emily Beller Holland made key contributions to this report. Also contributing were Jennie Apter, Muriel Brown, and Emily Wilson.
Why GAO Did This Study PAIMI grant awards, established by Congress in 1986 and totaling $36 million in 2016, are administered by SAMHSA to support state protection and advocacy programs. PAIMI programs protect and advocate for the rights of individuals with significant mental illness by investigating reports of incidents of abuse and neglect of such individuals in facilities such as hospitals, and in the community, among other activities. The 21st Century Cures Act included a provision for GAO to review the PAIMI programs and their compliance with federal statutory and regulatory requirements. This report examines (1) the outcomes reported by PAIMI programs in selected states, and (2) SAMHSA's oversight of state PAIMI programs, including their compliance with federal requirements. GAO reviewed FY 2015 and 2016 PAIMI program documentation for eight of 57 programs selected for variation in funding amount, geographic location, and other factors. GAO also reviewed relevant SAMHSA policies and procedures and assessed them against federal standards for internal control. What GAO Found The eight selected state Protection and Advocacy for Individuals with Mental Illness (PAIMI) programs GAO reviewed reported a range of positive outcomes from their work on behalf of individuals with mental illness. For example, in fiscal year (FY) 2016, the selected programs reported resolving in the individual's favor 1,772 out of 2,390 cases (74 percent) related to complaints of alleged abuse, neglect, and rights violations. The remaining cases were reported as withdrawn by the client, closed due to lack of merit, or not resolved in the individual's favor. These programs also reported concluding a variety of broader, system-level activities—referred to as systemic activities—intended to benefit groups of individuals with mental illness. These systemic activities resulted in, for example, changes to procedures in mental health institutions and correctional facilities. The Substance Abuse and Mental Health Services Administration (SAMHSA), which oversees the state PAIMI programs, has a variety of procedures in place to monitor performance and compliance. However, two areas warrant additional attention, as follows: SAMHSA has not consistently examined changes to performance benchmarks—the goals, objectives, and targets that PAIMI programs set annually for their planned work. Programs are permitted to modify these benchmarks, and GAO found that four had done so. A new SAMHSA system implemented in 2017 could improve recording of benchmark changes, but SAMHSA lacks procedures to examine changes across years, which could help identify performance concerns. SAMHSA often failed to complete its periodic, in-depth reviews of programs and to provide findings of identified deficiencies to PAIMI programs on a timely basis. SAMHSA has plans to improve the efficiency of its review process. However, it is unclear to what extent these plans will resolve the timeliness issues, which could delay resolution of any issues found in the reviews. What GAO Recommends GAO recommends that SAMHSA take steps to ensure that changes to performance benchmarks are examined over time, and to ensure onsite reviews are completed—and findings are provided to state programs—in a timely manner. The Department of Health and Human Services concurred with GAO's recommendations.
Background The 340B Program was created following the enactment of the Medicaid Drug Rebate Program and gives 340B covered entities discounts on outpatient drugs comparable to those made available to state Medicaid agencies. HRSA is responsible for administering and overseeing the 340B Program. Program Participants Eligibility for the 340B Program, which is defined in the PHSA, has expanded over time, most recently through the Patient Protection and Affordable Care Act (PPACA), which extended eligibility to additional types of hospitals. Entities generally become eligible by receiving certain federal grants or by being one of six hospital types. Eligible grantees include clinics that offer primary and preventive care services, such as Federally Qualified Health Centers, clinics that target specific conditions or diseases that raise public health concerns or are expensive to treat, and AIDS Drug Assistance Programs, which serve as a “payer of last resort” to cover the cost of providing HIV-related medications to certain low-income individuals. Eligible hospitals include certain children’s hospitals, free-standing cancer hospitals, rural referral centers, sole community hospitals, critical access hospitals, and general acute care hospitals that serve a disproportionate number of low-income patients, referred to as disproportionate share hospitals (DSH). To become a covered entity and participate in the program, eligible entities must register with HRSA and be approved. Entity participation in the 340B Program has grown over time to include more than 38,000 entity sites, including more than 21,000 hospital sites and nearly 17,000 federal grantee sites (see fig. 1). To be eligible for the 340B Program hospitals must meet certain requirements intended to ensure that they perform a government function to provide care to the medically underserved. First, hospitals generally must meet specified DSH adjustment percentages to qualify. Additionally, they must be (1) owned or operated by a state or local government, (2) a public or private nonprofit corporation that is formally delegated governmental powers by a unit of state or local government, or (3) a private, nonprofit hospital under contract with a state or local government to provide health care services to low-income individuals who are not eligible for Medicaid or Medicare. All drug manufacturers that supply outpatient drugs are eligible to participate in the 340B Program and must participate in order to have their drugs covered by Medicaid. To participate, manufacturers are required to sign a pharmaceutical pricing agreement with HHS in which both parties agree to certain terms and conditions. Program Structure, Operation, and Key Requirements The 340B price for a drug—often referred to as the 340B ceiling price—is based on a statutory formula and represents the highest price a participating drug manufacturer may charge covered entities. Covered entities must follow certain requirements as a condition of participating in the 340B Program. For example covered entities are prohibited from subjecting manufacturers to “duplicate discounts” in which drugs prescribed to Medicaid beneficiaries are subject to both the 340B price and a rebate through the Medicaid Drug Rebate Program. covered entities are also prohibited from diverting any drug purchased at the 340B price to an individual who does not meet HRSA’s definition of a patient. 
This definition, issued in 1996, outlines three criteria for determining who is an eligible patient; in general, diversion occurs when 340B discounted drugs are given to individuals who are not receiving health care services from covered entities or who are receiving only non-covered services, such as inpatient hospital services. (See table 1 for more information on HRSA’s definition of an eligible patient.) Covered entities are permitted to use drugs purchased at the 340B price for all individuals who meet the 340B Program definition of a patient regardless of whether they are low-income, uninsured, or underinsured. A covered entity typically purchases and dispenses 340B drugs through pharmacies—either through an in-house pharmacy, or through the use of a contract pharmacy arrangement, in which the covered entity contracts with an outside pharmacy to dispense drugs on its behalf. The adoption and use of contract pharmacies in the 340B Program is governed by HRSA guidance. HRSA’s original guidance permitting the use of contract pharmacies limited their use to covered entities that did not have in-house pharmacies and allowed each covered entity to contract with only one outside pharmacy. However, March 2010 guidance lifted the restriction on the number of pharmacies with which a covered entity could contract. Since that time, the number of unique contract pharmacies has increased significantly, from about 1,300 at the beginning of 2010 to around 18,700 in 2017 (see fig. 2); and, according to HRSA data, in 2017, there were more than 46,000 contract pharmacy arrangements. HRSA guidance requires a written contract between the covered entity and each contract pharmacy. Covered entities are responsible for overseeing contract pharmacies to ensure compliance with prohibitions on drug diversion and duplicate discounts. HRSA guidance indicates that covered entities are “expected” to conduct annual independent audits of contract pharmacies, leaving the exact method of ensuring compliance up to the covered entity. Drug manufacturers also must follow certain 340B Program requirements. For example, HRSA’s nondiscrimination guidance prohibits manufacturers from distributing drugs in ways that discriminate against covered entities compared to other providers. This includes ensuring that drugs are made available to covered entities through the same channels that they are made available to non-340B providers, and not placing restrictive conditions on the sale of drugs to covered entities that would have the effect of discouraging participation in the program. HRSA Has Implemented GAO’s Recommendation to Improve Its Oversight of the 340B Program by Conducting Audits In our September 2011 report, we found that HRSA’s oversight of the 340B Program was weak because it primarily relied on covered entities and manufacturers to police themselves and ensure their own compliance with program requirements. Upon enrollment into the program, HRSA requires participants to self-certify that they will comply with applicable 340B Program requirements and any accompanying agency guidance, and expects participants to develop the procedures necessary to ensure and document compliance, informing HRSA if violations occur. HRSA officials told us that covered entities and manufacturers could also monitor each other’s compliance with program requirements, but we found that, in practice, participants could face limitations in taking such an approach.
Beyond relying on participants’ self-policing, we also found that HRSA engaged in few activities to oversee the 340B Program and ensure its integrity, which agency officials said was primarily due to funding constraints. Further, although HRSA had the authority to conduct audits of program participants to determine whether program violations had occurred, at the time of our 2011 report, the agency had never conducted such an audit. In our 2011 report, we concluded that changes in the settings where the 340B Program was used may have heightened the concerns about the inadequate oversight we identified. In the years leading up to our report, the settings where the 340B Program was used had shifted to more contract pharmacies and hospitals than in the past, and that trend has continued in recent years. We concluded that increased use of the 340B Program by contract pharmacies and hospitals may have resulted in a greater risk of drug diversion to ineligible patients, in part because these facilities were more likely to serve individuals who did not meet the program’s definition of a patient. To address these oversight weaknesses, we recommended that the Secretary of HHS instruct the Administrator of HRSA to conduct selective audits of covered entities to deter potential diversion. In response to that recommendation, in fiscal year 2012, HRSA implemented a systematic approach to conducting annual audits of covered entities that is outlined on its website. HRSA audits include entities that are randomly selected based on risk-based criteria (approximately 90 percent of the audits conducted each year), and entities that are targeted based on information from stakeholders (10 percent of the audits conducted). HRSA currently audits a total of 200 entities per year, which accounts for less than 2 percent of covered entities. (See table 2.) As a result of the audits already conducted, HRSA has identified instances of non-compliance with program requirements, including violations related to drug diversion and the potential for duplicate discounts. The agency has developed a process to address non-compliance through corrective action plans. The results of each year’s audits are available on HRSA’s website, and we currently have work underway reviewing HRSA’s efforts to ensure compliance with 340B Program requirements at contract pharmacies that includes an examination of HRSA’s audits of covered entities. HRSA Implemented One of Three GAO Recommendations to Clarify Program Guidance In our 2011 report, we found that HRSA’s guidance on three key program requirements lacked the necessary level of specificity to provide clear direction, making it difficult for participants to self-police or monitor others’ compliance, and raising concerns that the guidance could be interpreted in ways that were inconsistent with its intent. First, we found that HRSA’s nondiscrimination guidance was not sufficiently specific in detailing practices manufacturers should follow to ensure that drugs were equitably distributed to covered entities and non-340B providers when distribution was restricted. Some stakeholders we interviewed for the 2011 report, such as covered entities, raised concerns about the way certain manufacturers interpreted and complied with the guidance in these cases. We recommended that HRSA further clarify its nondiscrimination guidance for cases in which distribution of drugs is restricted and require reviews of manufacturers’ plans to restrict distribution of drugs at 340B prices in such cases.
In response, HRSA issued a program notice in May 2012 that clarified HRSA’s policy for manufacturers that intend to restrict distribution of a drug and provided additional detail on the type of information manufacturers should include in such restricted distribution plans. In addition, we found a lack of specificity in HRSA’s guidance on two other issues—the definition of an eligible patient and hospital eligibility for program participation. Specifically, we found that HRSA’s guidance on the definition of an eligible patient lacked the necessary specificity to clearly define the various situations under which an individual was considered eligible for discounted drugs through the 340B Program. As a result, covered entities could interpret the definition either too broadly or too narrowly. At the time of our report, agency officials told us they recognized the need to provide additional clarity around the definition of an eligible patient, in part because of concerns that some covered entities may have interpreted the definition too broadly to include non-eligible individuals, such as those seen by providers who were only loosely affiliated with a covered entity. We also found that HRSA had not issued guidance specifying the criteria under which hospitals that were not publicly owned or operated could qualify for the 340B Program. For example, we found HRSA guidance lacking on one of the ways hospitals could qualify for the program, namely by executing a contract with a state or local government to provide services to low-income individuals who are not eligible for Medicaid or Medicare. Specifically, we found that HRSA did not outline any criteria that must be included in such contracts, such as the amount of care a hospital must provide to these low-income individuals, and did not require the hospitals to submit their contracts for review by HRSA. As a result, hospitals with contracts that provided a small amount of care to low-income individuals not eligible for Medicaid or Medicare could claim 340B discounts, which may not have been what the agency intended. Given the lack of specificity in these areas, we recommended that HRSA (1) finalize new, more specific guidance on the definition of an eligible patient, and (2) issue guidance to further specify the criteria that hospitals not publicly owned or operated must meet to be eligible for the 340B Program. HRSA agreed with these recommendations and had planned to address them in a comprehensive 340B Program regulation that it submitted to the Office of Management and Budget for review in April 2014. However, HRSA withdrew this proposed regulation in November 2014 following a May 2014 federal district court ruling that the agency had not been granted broad rulemaking authority to carry out all the provisions of the 340B Program. After this ruling, the agency issued a proposed Omnibus Guidance in August 2015 to interpret statutory requirements for the 340B Program in areas where it did not have explicit rulemaking authority, including further specificity on the definition of a patient of a covered entity and hospital eligibility for 340B Program participation. However, in January 2017, the agency withdrew the guidance following the administration’s January 20 memorandum directing agencies to withdraw or postpone regulations and guidance that had not yet taken effect.
In March 2018, HRSA indicated that it was working with HHS to determine next steps regarding the proposed Omnibus Guidance, which included the patient definition, but that it was unable to further clarify guidance on hospital eligibility without additional authority. HRSA also noted that the administration’s fiscal year 2019 budget proposal requests rulemaking authority, which, if enacted, would provide the agency with the authority to regulate hospital eligibility for the 340B Program. GAO Has Ongoing Work Related to the 340B Program GAO has ongoing work related to 340B contract pharmacies and the characteristics of hospitals participating in the program. Specifically, given the increase in the number of contract pharmacies in the 340B Program and concerns that contract pharmacy arrangements present an increased risk to the integrity of the program, we were asked to review contract pharmacy use under the 340B Program. In our forthcoming report, we plan to describe the extent to which covered entities contract with pharmacies to distribute 340B drugs, and the characteristics of these pharmacies; describe financial arrangements selected covered entities have with contract pharmacies and third-party administrators related to the administration and dispensing of 340B drugs; describe the extent to which selected covered entities provide discounts on 340B drugs dispensed by contract pharmacies to low-income, uninsured patients; and examine HRSA’s efforts to ensure compliance with 340B Program requirements at contract pharmacies. In addition, with the growth in the number of hospitals participating in the 340B Program and Medicaid coverage expansions as a result of PPACA, we were asked to review how hospitals that participate in the 340B Program compare to other hospitals. In our forthcoming report, we plan to address how hospitals that participate in the 340B Program compare to non-340B hospitals in terms of certain characteristics; and how, if at all, the characteristics of 340B and non-340B hospitals changed after state Medicaid coverage was expanded under PPACA. We expect to issue these reports this summer. Chairman Alexander, Ranking Member Murray, and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions you may have. GAO Contacts and Staff Acknowledgments For further information about this statement, please contact Debra A. Draper at (202) 512-7114 or draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Key contributors to this statement were Michelle Rosenberg, Assistant Director; Amanda Cherrin, Sandra George, and David Lichtenfeld. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study According to HRSA, the purpose of the 340B Program, which was created in 1992, is to enable covered entities to stretch scarce federal resources to reach more eligible patients, and provide more comprehensive services. Covered entities can provide 340B drugs to eligible patients regardless of income or insurance status and generate revenue by receiving reimbursement from patients' insurance. The program does not specify how this revenue is to be used or whether discounts are to be passed on to patients. The number of participating covered entity sites—currently about 38,000—has almost doubled in the past 5 years and the number of contract pharmacies increased from about 1,300 in 2010 to around 18,700 in 2017. In recent years, questions have been raised regarding oversight of the 340B Program, particularly given the program's growth over time. In September 2011, GAO identified inadequacies in HRSA's oversight of the 340B Program and made recommendations for improvement. Among other things, this statement describes HRSA actions in response to GAO recommendations to improve its program oversight. For this statement, GAO obtained information and documentation from HRSA officials about any significant program updates and steps they have taken to implement the 2011 GAO recommendations. More detailed information on the objectives, scope, and methodology can be found in GAO's September 2011 report. What GAO Found The 340B Drug Pricing Program requires drug manufacturers to sell outpatient drugs at discounted prices to covered entities—eligible clinics, hospitals, and others—in order to have their drugs covered by Medicaid. Covered entities are only allowed to provide 340B drugs to certain eligible patients. Entities dispense 340B drugs through in-house pharmacies or contract pharmacies, which are outside pharmacies that entities contract with to dispense drugs on their behalf. The number of contract pharmacies has increased significantly in recent years. In its September 2011 report, GAO found that the Health Resources and Services Administration's (HRSA) oversight of the 340B Program was inadequate to ensure compliance with program rules, and GAO recommended actions that HRSA should take to improve program integrity, particularly given significant growth in the program in recent years. HRSA has taken steps to address two of GAO's four recommendations: HRSA initiated audits of covered entities. GAO found that HRSA's oversight of the 340B Program was weak because it primarily relied on covered entities and manufacturers to ensure their own compliance with program requirements and HRSA engaged in few oversight activities. GAO recommended that HRSA conduct audits of covered entities, and in fiscal year 2012, HRSA implemented a systematic approach to conducting annual audits of covered entities. HRSA now audits 200 covered entities a year, which is less than 2 percent of entities participating in the 340B Program. Audits conducted to date have identified instances of non-compliance with program requirements, including the dispensing of drugs to ineligible patients. GAO currently has work underway reviewing HRSA's efforts to ensure compliance at contract pharmacies, which includes an examination of HRSA's audits of covered entities. HRSA clarified guidance for manufacturers. GAO found a lack of specificity in guidance for manufacturers for handling cases in which distribution of drugs is restricted, such as when there is a shortage in drug supply.
GAO recommended that HRSA refine its guidance. In May 2012, HRSA clarified its policy for manufacturers that intend to restrict distribution of a drug and provided additional detail on the type of information manufacturers should include in their restricted distribution plans. HRSA has not clarified guidance on two issues. GAO also found that HRSA guidance on (1) the definition of an eligible patient and (2) hospital eligibility criteria for program participation lacked specificity and recommended that HRSA clarify its guidance. HRSA agreed that clearer guidance was necessary and, in 2015, released proposed guidance that addressed both issues. However, in January 2017, the agency withdrew that guidance in accordance with recent directives to freeze, withdraw, or postpone pending federal guidance. In March 2018, HRSA indicated it was in the process of determining next steps related to guidance on the patient definition, but would need additional authority to further clarify guidance on hospital eligibility; rulemaking authority for the 340B Program was requested in the administration's fiscal year 2019 budget proposal.
gao_GAO-18-499
gao_GAO-18-499_0
Background U.S. agencies implementing foreign assistance have individually and jointly developed strategies to guide their efforts. While State’s, USAID’s, and MCC’s strategies focus exclusively on foreign affairs or foreign assistance, DOD’s, HHS’s, and USDA’s strategies—as well as those of other agencies—address foreign assistance as part of larger portfolios of programs. State and USAID, which provide the majority of all foreign assistance, develop joint foreign assistance-related strategies. The State-USAID Joint Strategic Plan outlines top-level goals for State and USAID efforts, including the use of foreign assistance, to inform strategies developed by State and USAID bureaus, offices, and country teams. Six joint State-USAID regional strategies (e.g., the State Bureau of African Affairs–USAID Bureau for Africa Joint Regional Strategy) identify regional bureau priorities that are intended to align with the State-USAID Joint Strategic Plan and guide country-level planning for joint integrated country strategies. State, the lead U.S. foreign affairs agency, also develops strategies for its functional bureaus, which implement foreign assistance programs, and has participated in the development of a number of multisectoral and global strategies. State’s Office of U.S. Foreign Assistance Resources is responsible for coordinating foreign assistance programs, including providing strategic direction for both State and USAID. According to State documents, the Office of U.S. Foreign Assistance Resources strengthens the integration of foreign assistance with U.S. foreign policy priorities by guiding the development of coordinated strategic plans for each U.S. overseas mission at the country level (i.e., integrated country strategies), aiming for a holistic, whole-of-government approach. It provides tools and resources to assist bureaus, offices, and country teams in designing foreign assistance programs, projects, and processes that can help align with, and advance, broader strategic goals as well as monitoring and evaluation of progress and results. USAID, the lead U.S. foreign assistance agency, develops global, regional, and country strategies in the areas of health, democracy and human rights, water and sanitation, food security, education, poverty, and the environment, among others. MCC has developed one overall strategy document, related to its mission of reducing poverty through country-led economic growth. MCC also collaborates with stakeholders in and outside government to develop and implement foreign assistance programs. DOD performs security cooperation strategic planning, implementation, and oversight to achieve national defense strategy objectives. DOD also develops country-specific strategies for security cooperation and other assistance, including humanitarian assistance and efforts to build foreign partner security capacity. HHS has developed, or is a party to, a number of strategies related to global health, including strategies for specific diseases, such as HIV/AIDS, malaria, and Ebola, and for immunization and emergency preparedness. The Centers for Disease Control and Prevention (CDC), a component of HHS, develops its own strategies, which discuss CDC’s plans to combat infectious diseases worldwide. USDA has contributed to jointly issued strategies in food security related to two food aid programs that it administers—the Food for Progress program and the McGovern-Dole International Food for Education and Child Nutrition program. 
In addition, these agencies implement foreign assistance programs under the auspices of government-wide foreign assistance strategies developed by the National Security Council, the Executive Office of the President, and the Office of Management and Budget. These government-wide strategies include, for example, the National Security Strategy and the National Action Plan for Women, Peace, and Security. The geographic focus of these six agencies’ foreign assistance strategies ranges from country level to regional to global. For example, State, USAID, and DOD have developed integrated country strategies, country development cooperation strategies, and country cooperation plans, respectively, applicable to the countries where they implement foreign assistance. Similarly, State and USAID have six joint regional strategies and DOD has strategies focusing on its various geographic areas of command. In addition, various agencies, working both jointly and independently, have developed a wide variety of sectoral, multisectoral, agency-specific, and multi-agency strategies to guide global assistance efforts. Foreign assistance strategies are continuously developed and updated. Some strategies emerge after the launch of a specific initiative, such as the President’s Emergency Plan for AIDS Relief (PEPFAR), while others are updated as part of agencies’ strategic management processes. For example, State’s functional bureau strategies and its joint regional strategies with USAID are periodically updated as bureau-level components of State’s planning, budgeting, and performance management cycle. Planning at the agency level is reflected in the State-USAID Joint Strategic Plan, updated most recently in February 2018, with which bureau- and country-level strategies are expected to align. As we have previously reported, strategies that consider relationships among goals and objectives, interagency collaboration, and performance assessment can improve federal management. In particular, these considerations can help identify, eliminate, or better manage fragmentation, overlap, and duplication in the federal government. Many Selected Foreign Assistance Strategies Addressed Key Elements We Identified That Help Promote Alignment, but Some Did Not While many of the 52 foreign assistance strategies that we reviewed at least partially addressed the key elements we identified related to alignment of foreign assistance strategies, some did not address these elements. Regarding interagency coordination, 40 percent of the strategies generally identified roles and responsibilities for implementing the strategies, while 33 percent generally identified interagency coordination mechanisms; 23 percent and 38 percent, respectively, did not address these elements. Regarding strategic integration, 58 percent of the strategies we reviewed generally described linkages with U.S. foreign assistance strategies in the same sector and 54 percent generally described linkages with relevant higher- or lower-level U.S. foreign assistance strategies; 21 percent and 25 percent, respectively, did not identify such linkages. Regarding assessment of progress toward strategic goals, almost all of the strategies generally established desired results and a framework of goals and objectives and described activities to achieve results; however, 21 percent did not identify milestones or performance indicators and 21 percent did not outline plans for monitoring and evaluation. We also found that the six agencies implementing most U.S. 
foreign assistance do not have consistent guidance for strategy development that could help ensure their strategies address the key elements we identified. We Identified Nine Key Elements That Help Ensure Strategies Are Aligned and Planning Is Not Fragmented On the basis of our prior reporting about U.S. government strategic planning and interagency collaboration, we identified nine key elements that are important for helping to ensure that agencies’ foreign assistance strategies are well aligned in terms of implementation approach and desired results and that planning among multiple agencies is not fragmented. The nine elements we identified are associated with (1) interagency coordination, (2) strategic integration, and (3) assessment of progress toward strategic goals (see table 1). As we have previously reported, fragmentation in the U.S. government refers to circumstances in which multiple federal agencies are involved in serving the same broad area of national need and opportunities exist to improve service delivery. Many Strategies We Reviewed Addressed Elements Related to Interagency Coordination, Strategic Integration, and Assessment of Progress, but Some Did Not Interagency Coordination Implementing foreign aid involves the collaborative efforts of multiple U.S. agencies, each of which brings specific contributions and statutory authorities and has its own organizational structure, culture, and priorities. Our prior work has shown that foreign assistance strategies that consistently address (1) agencies’ roles and responsibilities and (2) interagency coordination mechanisms can help guide the implementation of various aspects of a strategy and the identification of agreed-on processes for effective collaboration to resolve conflicts and better manage fragmentation. Strategies that do not consistently address elements related to interagency coordination miss opportunities to ensure that agencies’ roles and responsibilities are clear and distinct and that coordination mechanisms are well defined. As figure 1 shows, of the 52 strategies we reviewed, 40 percent generally identified agencies’ roles and responsibilities and 23 percent did not address this element. In addition, while 33 percent generally identified interagency coordination mechanisms, 38 percent did not identify any such mechanisms. Agencies’ roles and responsibilities. Forty percent (21 of 52) of the strategies we reviewed generally defined agencies’ roles and responsibilities. For example, USAID’s Strategy on Democracy, Human Rights and Governance identified all agencies involved in its implementation and laid out the roles and responsibilities of each agency as well as USAID offices. Thirty-seven percent (19 of 52) of the strategies partially defined agencies’ roles and responsibilities, which suggests the potential for improvement in this area. For example, State-USAID joint regional strategies identified the partners and stakeholders and enumerated the activities that State and USAID or the embassy and missions would undertake. However, most of those strategies did not specify the individual agencies’ roles and responsibilities. Twenty-three percent (12 of 52) of the strategies contained no information about agencies’ lead, support, and partner roles. Interagency coordination mechanisms. Thirty-three percent (17 of 52) of the strategies we reviewed generally identified interagency coordination mechanisms. 
For example, USAID’s Multi-Sectoral Nutrition Strategy identified joint planning, funding, and programming mechanisms for coordination among development and humanitarian assistance agencies at country and regional levels in USAID and the U.S. government as a whole. Twenty-nine percent (15 of 52) of the strategies partially identified coordination mechanisms. For example, CDC’s Global Health Strategy and USAID’s Global Health Strategic Framework both described the agencies’ respective unique roles in global health but did not specifically discuss how the agencies would work together to achieve their goals. Thirty-eight percent (20 of 52) of the strategies did not discuss interagency coordination mechanisms. Integration with Other Related Strategies As our prior work has shown, agencies that establish strategies that align with partner agencies’ activities, processes, and resources are better positioned to accomplish common goals, objectives, and outcomes. Our prior work has also determined that collaboration among federal agencies working toward similar results can help ensure consistent goals and mutually reinforcing program efforts that effectively manage fragmentation. These agencies can use higher-level strategic plans as a tool to drive interagency collaboration to ensure complementarities in goals and objectives. To improve alignment of related strategies, each strategy should address (1) integration with relevant sectoral strategies and (2) integration with relevant higher- or lower-level strategies. Strategies that do not consistently address elements related to strategic integration do not clearly show whether objectives and activities align with existing strategic priorities at the government-wide, sectoral, regional, and country levels. As figure 2 shows, 58 percent of the strategies we reviewed generally described linkages with at least one relevant sectoral strategy, while 21 percent did not mention such linkages at all. In addition, 54 percent of the strategies generally described linkages with at least one higher- or lower-level foreign assistance strategy, while 25 percent did not describe any such linkages. Integration with relevant sectoral strategies. Fifty-eight percent (30 of 52) of the strategies we reviewed generally identified or described linkages with other, related U.S. government strategies. For example, State’s Strategy for Women’s Economic Empowerment discussed how its activities are designed to complement and reinforce those of the U.S. National Action Plan on Women, Peace and Security, the U.S. Strategy to Prevent and Respond to Gender-Based Violence Globally, and the U.S. Global Strategy to Empower Adolescent Girls. About 21 percent (11 of 52) of the strategies we reviewed partially addressed this element. For example, the strategy PEPFAR 3.0—Controlling the Epidemic: Delivering on the Promise of an AIDS-Free Generation explicitly referred to the PEPFAR Blueprint for Creating an AIDS-Free Generation and stated that targeting interventions for populations at greatest risk for HIV incidence is an important activity. However, the strategy did not discuss how its goals and objectives relate to the strategies of the various agencies implementing PEPFAR and did not refer to the other strategies pertaining to PEPFAR. The remaining 21 percent (11 of 52) of strategies did not mention any other relevant U.S. government strategies. (See app. III for additional analysis of strategies by sector.) Integration with relevant higher- or lower-level strategies. 
Fifty-four percent (28 of 52) of the strategies we reviewed generally described their relationship to relevant strategies at higher or lower levels of government. For example, the U.S. Global Strategy to Empower Adolescent Girls discussed its relationship to a policy framework that, according to the strategy, is embodied in three higher-level strategies establishing gender equality as an important element of U.S. foreign policy—the National Security Strategy, the U.S. Global Development Policy, and the Quadrennial Diplomacy and Development Review. About 21 percent (11 of 52) of the strategies we reviewed partially addressed this element—that is, they discussed their relationship with higher- or lower-level strategies in a limited way. For example, the U.S. Government Approach on Business and Human Rights discussed priorities outlined in the National Security Strategy, aligning activities of business with those priorities, and noted efforts by State’s Bureau of Democracy, Human Rights, and Labor to discuss human rights with businesses. However, the U.S. Government Approach on Business and Human Rights did not reference common goals or activities outlined in other relevant higher-level strategies, such as the U.S. Global Development Policy or the Quadrennial Diplomacy and Development Review. The remaining 25 percent (13 of 52) of strategies did not address their relationship with strategies at other levels of government. Assessment of Progress toward Strategic Goals Our prior work has shown that effective strategies clearly identify goals and objectives and a means for assessing progress in achieving them and that alignment of strategies and other plans can improve the management of fragmentation. Therefore, our prior work has called for agencies to develop strategies that identify and describe (1) desired results, (2) activities to achieve results, (3) a hierarchy of goals and subordinate objectives, (4) milestones and indicators, and (5) plans for monitoring and evaluation. Strategies that do not consistently address elements related to assessing progress may limit agencies’ ability to specify and assess common goals and objectives and mutually reinforcing results. As figure 3 shows, most of the strategies we reviewed generally identified desired results, activities to achieve those results, and a hierarchy of goals and subordinate objectives. However, fewer strategies addressed how progress toward those goals and objectives would be assessed. In particular, 63 percent generally identified milestones and performance indicators, while 21 percent did not address this element. In addition, 42 percent of the strategies generally outlined plans for monitoring and evaluation, while 21 percent did not outline such plans. Desired results, activities to achieve results, and hierarchy of goals and objectives. Ninety-two percent (48 of 52) of the strategies we reviewed generally included a statement of desired results, and 90 percent (47 of 52) generally included a description of activities to achieve these results. For example, MCC’s Next: A Strategy for MCC’s Future stated the agency’s overall mission of reducing poverty through economic growth and listed priority actions for each goal, such as exploring new data sources for accurately identifying countries with high poverty rates. In addition, about 83 percent (43 of 52) of the strategies generally included a hierarchy of strategic goals and subordinate objectives. 
For example, CDC’s Global Health Strategy included a clear hierarchy of goals and subordinate objectives (see table 2). Six percent (3 of 52) of the strategies did not identify desired results, 2 percent (1 of 52) did not describe activities to achieve these results, and 10 percent (5 of 52) did not include a hierarchy of goals and objectives. Milestones and performance indicators. Sixty-three percent (33 of 52) of the strategies we reviewed generally included milestones or performance indicators. These strategies often incorporated milestones or indicators as discrete components of each goal or subordinate objective. For example, DOD’s Kenya Country Cooperation Plan tracked discrete tasks with specific time frames, using color-coding to designate stages of implementation. Fifteen percent (8 of 52) of the strategies partially addressed milestones or indicators. For example, the 2016 updated joint State-USAID Strategy to Prevent and Respond to Gender-Based Violence Globally included an annex listing indicators but did not link them to the strategic objectives and planned actions. Twenty-one percent (11 of 52) of the strategies did not include any milestones or performance indicators. Monitoring and evaluation plans. Forty-two percent (22 of 52) of the strategies we reviewed generally outlined monitoring and evaluation plans. These strategies typically outlined such plans in a specific goal or in a designated section or appendix. For example, USAID’s Kenya Country Development Strategy included a section on monitoring and evaluation planning. In this strategy, USAID committed to host donor coordination and other stakeholder forums to monitor progress and to establish a monitoring and evaluation “core team” to ensure that learning is incorporated in decision making. Thirty-seven percent (19 of 52) of the strategies partially addressed monitoring and evaluation planning. Some of these strategies emphasized the importance of monitoring and evaluation or made broad statements without outlining more specific plans. For example, the State-USAID Joint Strategy on Countering Violent Extremism noted that State and USAID will develop a results framework for measuring progress that will be accompanied by clear, well-developed, and well-resourced monitoring and evaluation plans. The strategy also noted that State and USAID will, to the extent possible, develop a common set of indicators to measure outputs and outcomes. However, the strategy provided no additional details. Twenty-one percent (11 of 52) of the strategies did not outline any monitoring and evaluation plans. Agencies Do Not Have Consistent Guidance for Foreign Assistance Strategy Development That Addresses the Key Elements We Identified The six agencies implementing most of U.S. foreign assistance do not have consistent guidance for strategy development that could help ensure their strategies address the key elements we identified. For example, State and USAID guidance for strategy development includes many of these elements but does not cover all strategies that these agencies are involved in developing. Additionally, guidance for State’s and USAID’s joint regional strategies, State’s functional bureau strategies, and USAID’s country development cooperation strategies does not apply to other State and USAID strategies, such as the joint State-USAID integrated country strategies. DOD has also established guidance for developing security assistance programs that addresses the key elements we identified. 
However, DOD’s guidance does not explicitly apply to the development of foreign assistance strategies. HHS, MCC, and USDA have not established any guidance on foreign assistance strategy development. Inconsistent guidance for developing foreign assistance strategies has contributed to variations in the strategies’ addressing the key elements we identified related to interagency coordination, strategic integration, and assessment of progress toward strategic goals. Existing government-wide guidance requires agencies to address some of the key elements of assessment of progress toward strategic goals that we identified as being important for ensuring alignment of agencies’ foreign assistance strategies. In January 2018, the Office of Management and Budget issued new guidance for agencies that administer foreign assistance that includes some of the elements we used to assess the strategies we reviewed. For example, the guidance recommends that agencies ensure their programs have clear goals and objectives, align their programs with higher-level strategies or objectives, and plan for monitoring and evaluation while developing policies and strategies. In addition, the Government Performance and Results Act, as amended, requires agencies to submit strategic plans for program activities that include general goals and objectives for the major functions and operations of the agency, a description of how the goals are to be achieved, and a description and schedule of program evaluations. The act’s provisions were among the sources we used to develop the desirable characteristics from which we derived the key elements we identified. However, according to officials of State’s Office of U.S. Foreign Assistance Resources, there is no government-wide guidance that incorporates interagency coordination, strategic integration, and assessment of progress toward strategic goals into the interagency strategic planning process. In addition, the officials stated that there is no overarching review mechanism for strategies outside of the core strategic planning process for joint State-USAID strategies. According to State officials, State’s Office of U.S. Foreign Assistance Resources plays a significant role in promoting interagency coordination by convening roundtables and working groups. By collaborating with the five other agencies that implement most of U.S. foreign assistance to establish guidance for developing foreign assistance strategies, the office could help the agencies ensure that future strategies address the key elements we identified. Consistent guidance for strategy development could help the agencies align their strategies and better identify and manage fragmentation in foreign assistance planning. Conclusions U.S. foreign assistance often involves multiple agencies or a whole-of-government approach. Alignment of related foreign assistance strategies can help agencies better identify and manage fragmentation. Moreover, consistently addressing the key elements we identified related to interagency coordination, strategic integration, and assessment of progress toward strategic goals can help ensure that strategies provide a clear and comprehensive picture of alignment. Several of the six largest providers of U.S. foreign assistance in the three sectors we reviewed have not issued consistent guidance for foreign assistance strategy development that incorporates these key elements. 
For example, some agencies have issued guidance that addresses many of the key elements we identified related to interagency coordination, strategic integration, and assessment of progress toward strategic goals, but this guidance does not apply to all of these agencies’ strategies. State’s Office of U.S. Foreign Assistance Resources leads interagency strategic planning for the implementation of foreign assistance. This office—which has responsibility for, and experience in, promoting coordination among agencies involved in foreign assistance—is uniquely placed to collaborate with other agencies implementing foreign assistance to establish guidance for developing foreign assistance strategies that addresses the key elements we identified. Such guidance would improve the agencies’ ability to align future strategies and to identify and manage fragmentation in foreign assistance planning. Recommendation for Executive Action We are making the following recommendation to the Department of State: The Secretary of State should ensure that the Director of the Office of U.S. Foreign Assistance Resources leads an effort to establish, in collaboration with the five other agencies that implement most of U.S. foreign assistance, guidance for strategy development that addresses the key elements we identified related to interagency coordination, strategic integration, and assessment of progress toward strategic goals. (Recommendation 1) Agency Comments and Our Evaluation We provided a draft of this report to State, USAID, MCC, DOD, HHS, and USDA for review and comment. We received substantive comments from State, USAID, and MCC, which are reprinted in appendixes IV through VI, respectively. In addition, we received technical comments from HHS, which we incorporated as appropriate. State, USAID, MCC, USDA, and DOD did not provide technical comments about our draft report. In their substantive comments, State and MCC concurred with our recommendation. USAID’s comments expressed support for our goal of strengthening interagency coordination, strategic integration, and assessment of progress across the federal departments and agencies that implement U.S. foreign assistance. However, USAID suggested that we issue our recommendation to the National Security Council or address it jointly to State and USAID. We believe that our recommendation is appropriately addressed to State, given the responsibility of State’s Office of U.S. Foreign Assistance Resources for coordinating foreign assistance programs, including providing strategic direction for both State and USAID. We are sending copies of this report to the appropriate congressional committees and to the Secretaries of Agriculture, Defense, Health and Human Services, and State; the Chief Executive Officer of MCC; and the Administrator of USAID. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. 
Appendix I: Objectives, Scope, and Methodology This report examines the extent to which foreign assistance strategies address key elements that we identified related to alignment of agencies’ efforts—specifically, elements related to (1) interagency coordination, (2) strategic integration, and (3) assessment of progress toward strategic goals. We focused on the six agencies that administer the largest amounts of foreign assistance, according to fiscal year 2016 obligations data: the Departments of Agriculture (USDA), Defense (DOD), Health and Human Services (HHS), and State (State); the Millennium Challenge Corporation (MCC); and the U.S. Agency for International Development (USAID). We limited our review to foreign assistance strategies that were in effect during 2017. We further focused on strategies relating to health, security, and democracy assistance, which account for the majority of total foreign assistance obligations, according to fiscal year 2016 data. We excluded strategies for other assistance sectors, such as counternarcotics and other law enforcement activities that require interagency coordination with domestically focused agencies outside the scope of our review, such as the Departments of Homeland Security and Justice. To identify the strategies for this review, we asked the six agencies to update a list of 63 government-wide, agency, multi-agency, regional, sector-specific, and multisectoral strategies that they had provided for a related report that we published in June 2017. We also asked the agencies to provide country-level strategies for Afghanistan and Kenya, two of the largest recipients of U.S. security and development assistance, based on fiscal year 2016 obligations data. We obtained and initially reviewed 72 strategies, which included the 63 strategies we identified for the June 2017 report; 6 country-level strategies for Afghanistan and Kenya; and 3 updated strategies covering national security, the President’s Emergency Plan for AIDS Relief, and water and sanitation. We determined that 52 of these 72 strategies incorporated goals or activities related to health, security, or democracy assistance (see fig. 4). These 52 strategies, which had been issued by December 2017 and were current in that year, include 44 of those listed in our June 2017 report and 8 of those subsequently identified by the agencies. We reviewed the 52 strategies to determine the extent to which they addressed nine key elements we identified relating to the alignment of multiple strategies. We identified these nine elements by reviewing prior reports focused on foreign assistance in the security sector that assessed the quality of various U.S. government strategies; articulated practices for enhancing collaboration among federal agencies; or discussed fragmentation, overlap, and duplication among government programs. Those reports identified six desirable characteristics for government-wide strategies and practices for enhancing agency collaboration. For the purposes of this report, we selected three of these characteristics, related to interagency coordination, strategic integration, and assessment of progress toward strategic goals. We excluded three characteristics—purpose, scope, and methodology; detailed discussion of problems, risks, and threats; and description of future costs and resources needed—because we did not consider them to be directly related to alignment of strategies. 
The three characteristics we included comprised 15 elements, 9 of which we considered to be directly related to the alignment of health, security, and democracy assistance sector strategies across multiple agencies. We excluded 6 elements—for example, potential changes to structure and details on subordinate strategies and plans for implementation (e.g., enterprise architecture)—that we did not consider to be directly related to this topic. We reviewed the selected strategies using NVivo, a qualitative data analysis software package. For each strategy, two reviewers, including at least one with expertise in the area of foreign assistance addressed by each strategy, independently identified text related to each of the key elements we had identified. We used a standardized set of criteria in an assessment instrument to consistently judge whether each strategy sufficiently addressed these elements. This instrument contained evaluative questions intended to gauge the presence of each element—for example, “To what extent does the strategy address the agencies involved and their roles and responsibilities?” Given the variety of strategies we reviewed and reviewers’ varying expectations for the detail and emphasis accorded the key elements we had identified, we rated the strategies using a three-part scale focused on the presence of these elements. We rated a strategy as generally addressing an element if the strategy provided sufficient detail to understand the element in that strategy; as partially addressing an element if the strategy mentioned it but lacked sufficient detail; and as not addressing an element if the strategy did not mention it. The two reviewers for each strategy independently documented their judgments on the extent to which the strategy addressed the key elements we had identified. Our initial coding showed that the reviewers agreed in about 78 percent (363 of 468) of these initial judgments. The reviewers reconciled their judgments, with resolution of differences split roughly evenly between accepting the higher and lower of the initial ratings. A supervisor reviewed each set of ratings for internal consistency. The supervisor related any identified issues, as appropriate, to the reviewers, who addressed them before the supervisor recorded the review as final. We examined these strategies and any appendixes included in the documents that the agencies submitted, because these strategic documents should broadly describe objectives and efforts—including interagency coordination, strategic integration, and assessment of progress toward strategic goals—needed to achieve them. We did not review agencies’ efforts to implement the strategies and did not assess the overall effectiveness of such efforts. Instead, we focused on the extent to which the strategies we reviewed provided a clear picture of the organization and management of U.S. foreign assistance efforts. To measure the extent of strategies’ integration with other relevant sectoral strategies and with higher- and lower-level strategies, we performed a word search for references to the other selected strategies in the same sector and to other strategies or sets of strategies (e.g., regional or country-level strategies) that we classified as either higher- or lower-level strategies. 
We searched for such references in each of the 14 strategies that we classified as covering the health sector, the 12 strategies that we classified as covering the security sector, and the 8 strategies that we classified as covering the democracy assistance sector. See appendix III for the results of this analysis.

We also reviewed agency guidance related to foreign assistance strategies. We requested current versions of any relevant documentation from each of the six agencies. State provided us with agency guidance for developing its functional bureau strategies and joint State-USAID regional strategies as well as a related template. State also provided guidance documents related to its monitoring and evaluation policy and performance management. USAID provided strategic planning and implementation guidance for its country development and cooperation strategies. HHS, USDA, and MCC did not provide—and, according to agency officials, do not have—specific guidance related to what constitutes a foreign assistance strategy. DOD provided guidance for developing security assistance programs.

We conducted this performance audit from May 2017 to July 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Listing of 52 Selected Foreign Assistance Strategies

The following list shows the 52 foreign assistance strategies that we reviewed.

1. Quadrennial Diplomacy and Development Review: Enduring Leadership in a Dynamic World (2015)
2. U.S. Global Development Policy (Sept. 22, 2010)
3. State-USAID Joint Strategic Plan FY2014-2017 (Mar. 17, 2014)
4. State Department, Office of U.S. Foreign Assistance Resources (F), Functional Bureau Strategy (2016)
5. Millennium Challenge Corporation, NEXT: A Strategy for MCC's Future (Feb. 24, 2016)
6. USAID Multi-Sectoral Nutrition Strategy 2014-2025 (May 2014)

Regional strategies (not specific to any single sector)

7. State Bureau of East Asian and Pacific Affairs/USAID Bureau for Asia Joint Regional Strategy (approved May 24, 2016)
8. State Bureau of African Affairs/USAID Bureau for Africa Joint Regional Strategy (approved Apr. 5, 2016)
9. State Bureau of Near Eastern Affairs/USAID Bureau for Middle East Joint Regional Strategy, FY 2016-2018
10. State Bureau of European and Eurasian Affairs/USAID Bureau for Europe and Eurasia Joint Regional Strategy, FY 2015-2018 (approved April 2015)
11. State Bureau of Western Hemisphere Affairs/USAID Bureau for Latin America and the Caribbean Joint Regional Strategy, FY 2015-2018
12. State and USAID Joint Regional Strategy for South and Central Asia, and Afghanistan and Pakistan, FY 2015-2018 (June 2014)
13. PEPFAR: Strategy for Accelerating HIV/AIDS Epidemic Control 2017-2020 (September 2017)
14. 2016-2020 CDC Strategic Framework for Global Immunization (May 2016)
15. "U.S. Government Strategy for Reducing Transmission of the Ebola Virus Disease in West Africa" (draft strategy, Sept. 30, 2015)
16. President's Malaria Initiative Strategy 2015-2020 (April 2015)
17. President's Emergency Plan for AIDS Relief (PEPFAR) Human Resources for Health Strategy PEPFAR 3.0 (February 2015)
18. CDC Division of Parasitic Diseases and Malaria Strategic Priorities
19. The Global Strategy of the U.S. Department of Health and Human Services (2015-2019)
20. State Department, Office of the U.S. Global AIDS Coordinator,
21. PEPFAR 3.0 Controlling the Epidemic: Delivering on the Promise of an AIDS-Free Generation (December 2014)
22. HHS Strategic Plan, 2014-2018 (updated March 10, 2014)
23. HHS Assistant Secretary for Preparedness and Response Strategic Plan (February 2014)
24. PEPFAR Blueprint: Creating an AIDS-Free Generation (November 2012)
25. USAID's Global Health Strategic Framework: Better Health for
26. CDC Global Health Strategy 2012-2015 (June 29, 2012)
27. National Security Strategy of the United States of America (December 2017)
28. State Bureau of Political-Military Affairs, Office of Weapons Removal and Abatement, Conventional Weapons Destruction Strategic Plan, 2017-2019
29. Department of Defense Guidance for Security Cooperation (Aug. 29, 2016)
30. Department of State & USAID Joint Strategy on Countering Violent Extremism (May 2016)
31. State Department, Arms Control, Verification and Compliance, Functional Bureau Strategy (approved December 2015)
32. State Bureau of Political-Military Affairs, Office of Plans & Initiatives, Peace Operations Capacity Building Division, U.S. Global Peace Operations Initiative Strategy: Strengthening the Effectiveness of United Nations and Regional Peace Operations (October 2015)
33. National Security Strategy (February 2015)
34. State Department, Bureau of International Security and Nonproliferation, Functional Bureau Strategy, FY 2015-2018 (January 2015)
35. State Department, Bureau of Political-Military Affairs, Functional Bureau Strategy, FY 2015-2018 (January 2015)
36. State Department, Bureau of Counterterrorism, Functional Bureau Strategy, FY 2015-2017 (January 2015)
37. National Strategy for Counterterrorism (June 2011)
38. Security Sector Reform (February 2009)
39. State Department, The Secretary's Office of Global Women's Issues, Functional Bureau Strategy (approved Mar. 27, 2017)
40. United States Strategy to Prevent and Respond to Gender-based Violence Globally (June 2016)
41. United States National Action Plan on Women, Peace, and Security (June 2016)
42. U.S. Department of State Strategy for Women's Economic Empowerment (June 2016)
43. United States Global Strategy to Empower Adolescent Girls (March 2016)
44. State Department, Bureau of Democracy, Human Rights, and Labor, Functional Bureau Strategy, FY 2015-2018 (approved 2014)
45. U.S. Government Approach on Business and Human Rights (2013)
46. USAID Strategy on Democracy, Human Rights and Governance (June 2013)

Country strategies (for Afghanistan)

47. Department of Defense, Enhancing Security and Stability in Afghanistan. Report to Congress in Accordance With Section 1225 of the Carl Levin and Howard P. "Buck" McKeon National Defense Authorization Act for Fiscal Year 2015 (P.L. 113-291), as Amended (June 2017)
48. USAID Afghanistan Plan for Transition 2015-2018 (Jan. 6, 2016)
49. State/USAID Integrated Country Strategy: Afghanistan (February 2015)

Country strategies (for Kenya)

50. State/USAID Integrated Country Strategy: Kenya (approved Feb. 1, 2017)
51. DOD/USAFRICOM: Kenya Country Cooperation Plan FY 2017-2021 (Nov. 8, 2016)
52. USAID Kenya Country Development Cooperation Strategy 2014-2018 (May 2014)

Appendix III: Extent to Which Sectoral Strategies Addressed Interagency Coordination, Strategic Integration, and Assessment of Progress toward Strategic Goals

Our analysis of strategies we reviewed in the health, security, and democracy assistance sectors found inconsistency in the extent to which the strategies addressed selected, or key, elements that we identified related to interagency coordination, strategic integration, and assessment of progress toward strategic goals.

Interagency Coordination

As figure 5 shows, about 30 percent (4 of 14) of the strategies in the health sector and about 17 percent (2 of 12) in the security sector generally identified interagency coordination mechanisms, while about 33 percent (4 of 12) in the security sector addressed agencies' roles and responsibilities. In contrast, 75 percent (6 of 8) of the strategies in the democracy assistance sector generally addressed interagency coordination mechanisms and 63 percent (5 of 8) addressed agencies' roles and responsibilities.

Strategic Integration

As figure 6 shows, in the health sector, 50 percent (7 of 14) of the strategies generally addressed their relationship with at least one other strategy in the same sector and about 43 percent (6 of 14) generally addressed their relationship with at least one higher- or lower-level strategy. In the security sector, about 58 percent (7 of 12) of the strategies generally addressed their relationship with at least one other strategy in the same sector and their relationship with at least one higher- or lower-level strategy. In the democracy assistance sector, 75 percent (6 of 8) of the strategies we reviewed generally addressed their relationship with at least one other strategy in the same sector, while about 63 percent (5 of 8) generally addressed their relationship with at least one higher- or lower-level strategy. Figures 7, 8, and 9 show the strategies in the health, security, and democracy assistance sectors, respectively, that refer to higher- and lower-level strategies as well as to other strategies in the same sector.

Assessment of Progress toward Strategic Goals

As figure 10 shows, most strategies in the health, security, and democracy assistance sectors generally identified desired results, a hierarchy of goals and subordinate objectives, and activities to achieve results. However, strategies in all three sectors were less consistent in identifying milestones and performance indicators. Specifically, 57 percent (8 of 14) of health sector strategies, 50 percent (6 of 12) of security sector strategies, and 50 percent (4 of 8) of democracy assistance strategies generally addressed this element. In addition, while 71 percent (10 of 14) of strategies in the health sector outlined plans for monitoring and evaluation, 17 percent (2 of 12) of security sector strategies and 50 percent (4 of 8) of democracy assistance sector strategies generally addressed this element.

Appendix IV: Comments from the Department of State

Appendix V: Comments from the U.S. Agency for International Development

Appendix VI: Comments from the Millennium Challenge Corporation

Appendix VII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, James Michels (Assistant Director), Gergana Danailova-Trainor (Analyst-in-Charge), Timothy Young, Kay Halpern, Steven Putansu, Mona Sehgal, Drew Lindsey, Judith Williams, Leslie Holen, Ming Chen, Aniruddha Dasgupta, Mark Dowling, Giff Howland, Neil Doherty, and Reid Lowe made key contributions to this report.
Why GAO Did This Study

More than 20 federal agencies spend billions of dollars on U.S. foreign assistance each year. Six agencies—the Departments of Agriculture, Defense, Health and Human Services, and State; the Millennium Challenge Corporation; and the U.S. Agency for International Development—implement most of this assistance, using multiple strategies. State is responsible for coordinating their efforts. Questions have been raised about potential inefficiencies in implementing multiple foreign assistance strategies.

GAO was asked to review the alignment of U.S. foreign assistance strategies. This report examines the extent to which strategies include key elements GAO identified, related to interagency coordination, strategic integration, and assessment of progress, that help ensure alignment. These elements are based on GAO's prior work on strategic planning and interagency collaboration. GAO reviewed 52 strategies related to health, security, and democracy assistance that were current in 2017. These included government-wide, agency, multi-agency, and regional strategies as well as strategies for two countries. GAO also reviewed agency guidance and interviewed agency officials.

What GAO Found

Many foreign assistance strategies related to health, security, and democracy assistance that GAO reviewed at least partially addressed key elements GAO identified that help ensure the strategies are aligned. Prior work has found that consistently addressing these elements, related to interagency coordination, strategic integration, and assessment of progress, is important for, among other things, better managing fragmentation in strategic planning. However, some strategies did not address these elements (see figure). For example:

Interagency coordination. Twenty-three percent of the strategies (12 of 52) did not address agencies' roles and responsibilities, and 38 percent (20 of 52) did not identify specific interagency coordination mechanisms.

Strategic integration. Twenty-one percent of the strategies (11 of 52) did not address linkages with other related strategies, and 25 percent (13 of 52) did not address linkages with higher- or lower-level strategies.

Assessment of progress toward strategic goals. Twenty-one percent of the strategies (11 of 52) did not include milestones and performance indicators, and 21 percent (11 of 52) did not outline plans for monitoring and evaluation.

The six agencies implementing most U.S. foreign assistance do not have consistent guidance for strategy development that could help ensure their strategies address these key elements. Some agencies' guidance addresses many of the elements but does not apply to all of their foreign assistance strategies, while other agencies have no such guidance. The Department of State (State) plays a significant role in interagency coordination. By collaborating with other agencies to establish guidance that addresses the key elements GAO identified, State could help the agencies improve their ability to align future strategies and identify and manage fragmentation in foreign assistance planning.

What GAO Recommends

GAO recommends that State lead an effort to establish, in collaboration with the five other agencies, guidance for developing foreign assistance strategies that addresses the key elements GAO identified related to interagency coordination, strategic integration, and assessment of progress. State concurred with GAO's recommendation.
Background CFPB’s Research, Markets, and Regulations Division has primary responsibility for CFPB’s efforts to monitor market developments and risks to consumers and to retrospectively assess rules. As shown in figure 1, the division is composed of the Office of Research, the Office of Regulations, and the following four offices (collectively known as the “Markets Offices”), which are focused on different consumer financial markets: The Office of Card, Payment, and Deposit Markets monitors credit cards, deposit accounts, prepaid cards, and remittances, as well as other emerging forms of payment and related technologies, such as mobile payments and virtual currencies. It also monitors data aggregation services. The Office of Consumer Lending, Reporting, and Collection Markets monitors debt collection, debt relief, and consumer reporting and scoring, as well as student, auto, and the small-dollar and personal lending markets. The Office of Mortgage Markets monitors the mortgage markets, including originations, servicing, and secondary markets. The Office of Small Business Lending Markets monitors credit to small businesses, including traditional lenders, specialty financing, and emerging technologies. The four Markets Offices are responsible for collecting and sharing market intelligence, helping to shape CFPB policy (including through participation on rulemaking teams), and helping to inform the marketplace through research and outreach. The Office of Research is responsible for conducting research to support the design and implementation of CFPB’s consumer protection policies, including developing and writing any required cost-benefit analyses for rulemakings. Among other things, these offices research, analyze, and report on consumer financial markets issues. These offices also help inform the work of the Office of Regulations, which supports and provides strategic direction for CFPB’s rulemaking, guidance, and regulatory implementation functions. The Markets Offices and the Office of Research contribute to CFPB’s efforts to address the Dodd-Frank Act requirement that CFPB monitor for certain risks to consumers in support of its rulemaking and other functions. This provision states that CFPB may consider a number of factors in allocating its resources for risk-monitoring efforts with regard to consumer financial products and the markets for those products, such as consumers’ understanding of a type of product’s risks, the extent to which existing law is likely to protect consumers, and any disproportionate effects on traditionally underserved consumers. Further, the Dodd-Frank Act gives CFPB authority in connection with such monitoring to gather information from time to time regarding the organization, business conduct, markets, and activities of covered persons or service providers from a variety of sources, including several sources specified in the act. Finally, this provision requires CFPB to issue at least one report of significant findings from its risk monitoring each calendar year. The Office of Research has led CFPB’s efforts to address the Dodd-Frank Act requirement that CFPB conduct assessments of each significant final rule or order it adopts and publish a report of the assessment no later than 5 years after the rule or order’s effective date. Before publishing a report of its assessment, CFPB must invite public comment on whether the rule or order should be modified, expanded, or eliminated. 
In addition, the Dodd-Frank Act provides CFPB authority to require covered persons or service providers to provide information to help support these assessments, as well as to support its risk-monitoring activities. In addition to the Research, Markets, and Regulations Division, other CFPB divisions and offices conduct outreach to help inform CFPB policy making. For example, CFPB’s External Affairs Division facilitates conversation with stakeholders, such as Congress, financial institutions, state governments, and the public. In addition, in the Consumer Education and Engagement Division, the Office of Consumer Response manages the intake of and response to complaints about consumer financial products and services. All of the divisions report to the Director. In November 2017, the President designated a new Acting Director of CFPB, and in December 2018, the Senate voted to confirm a new Director of the bureau. CFPB Monitors Consumer Financial Markets to Inform Policy but Does Not Systematically Prioritize Consumer Risks CFPB Routinely Monitors Market Trends and Collects Targeted Information for Rulemaking and Other Purposes To address the Dodd-Frank Act consumer risk-monitoring requirement, CFPB routinely monitors consumer financial markets through a variety of methods. It also conducts more targeted market monitoring to support rulemaking and other agency functions. Routine Monitoring CFPB collects and monitors routine market data and other market intelligence through a combination of internal and external data sources and outreach (see fig. 2). Markets Offices staff use information from these sources to analyze market trends and identify emerging risks that may require greater attention. Staff produce monthly and quarterly reports that summarize or analyze observed market developments and trends, and they distribute them bureau-wide. CFPB internal data and research. Staff in CFPB’s Markets Offices use CFPB data and research to identify and monitor risks. For example, in our review of CFPB’s market intelligence reports from July 2016 through July 2018, we observed the following frequently cited internal CFPB data sources: Consumer complaints submitted to CFPB. Markets Offices staff monitor consumer complaints to track trends and potential problems in the marketplace. For example, monthly mortgage trend reports we reviewed cited changes in total numbers of mortgage complaints, as well as in complaints related to private mortgage insurance, escrow accounts, and other mortgage-related topics. Consumer Credit Trends tool. This tool is based on a nationally representative sample of commercially available, anonymized credit records. Markets Offices staff use this tool to monitor conditions and outcomes for specific groups of consumers in markets for mortgages, credit cards, auto loans, and student loans. For example, CFPB monthly auto market trend reports cited the tool as a source for information on changes in the volume of auto loans by neighborhood income. Home Mortgage Disclosure Act data. CFPB maintains loan-level data that mortgage lenders report pursuant to the Home Mortgage Disclosure Act. According to CFPB, Markets Offices staff use the data for their market monitoring, which can include analysis to determine whether lenders are serving the housing needs of their communities and to identify potentially discriminatory lending patterns. External data and research. 
In addition to its internal databases, CFPB obtains external market data from a number of public and proprietary data sources. The market intelligence reports we reviewed included the following commonly cited external sources, among others: federal databases and research, such as the Federal Reserve Bank of New York’s Quarterly Report on Household Debt and Credit; publicly available information from sources such as industry websites, mainstream news publications, and publicly traded companies’ financial statements. proprietary data from sources such as data analytics services and credit reporting agencies. Engagement with industry representatives. CFPB also gathers market intelligence from engagement with industry representatives. Market intelligence reports we reviewed cited several meetings with industry representatives and regular CFPB attendance at industry conferences. Representatives of two trade groups we interviewed told us that CFPB had sometimes proactively reached out to them regarding areas of potential risk. According to CFPB, in fiscal year 2018, Markets Offices staff conducted an average of about 50 meetings with industry per month and held intelligence-gathering meetings across various consumer financial markets throughout the year. Engagement with consumer organizations. CFPB’s External Affairs Division, which is responsible for engagement with the nonprofit sector, facilitates most communication between Markets Offices staff and consumer organizations to help inform staff’s risk monitoring efforts. According to CFPB, between January and September 2018, staff from the External Affairs and Research, Markets, and Regulation divisions held an average of about four meetings per month with consumer organizations and nonprofit stakeholders, and Markets Offices staff said these meetings provided information useful in monitoring markets. Two of the three consumer organizations we interviewed noted that their communication with CFPB had decreased since late 2017. However, one group noted that external engagement has typically been greater when CFPB is going through a rulemaking and that rulemaking activity had slowed in the last year. Advisory committees and other formal outreach. CFPB obtains information on consumer financial issues and emerging market trends from various advisory groups and other formal outreach. In 2012, CFPB established a consumer advisory board, in accordance with a Dodd-Frank Act requirement. It also established three additional advisory councils (community bank, credit union, and academic) to obtain external perspectives on issues affecting its mission. The groups, which include subgroups focused on various consumer financial market areas or issues, met regularly through 2017. CFPB dismissed the existing members of the consumer advisory board and community bank and credit union advisory councils in June 2018 and reconstituted the groups with new, smaller memberships that resumed meeting in September 2018. In addition, from July 2016 to mid-November 2018, CFPB solicited public input through public field hearings and town hall meetings on issues such as debt collection, consumer access to financial records, and elder financial abuse, among other issues. Coordination with other regulators. CFPB engages with the federal prudential regulators and other federal and state agencies to inform its routine market-monitoring efforts. This engagement can occur through mechanisms such as working groups, task forces, and information- sharing agreements. 
For example, CFPB is a member of a working group of federal housing agencies, whose members share market intelligence and discuss risks they have observed in the mortgage markets. Markets Offices staff also receive quarterly, publicly available bank and credit union call report data through the Federal Financial Institutions Examination Council and the National Credit Union Administration, with which it has information-sharing agreements. Targeted Monitoring CFPB has supplemented its routine monitoring by conducting targeted research and data collection to inform rulemaking efforts, meet statutory reporting requirements, and learn more about a particular market for consumer financial products. As noted earlier, the Dodd-Frank Act authorizes CFPB to collect certain data from covered persons and service providers. Since July 2016, to support bureau rulemaking efforts, Markets Offices staff have augmented their routine monitoring with targeted use of supervisory data collected through CFPB’s examinations of covered persons and service providers. The Research, Markets, and Regulations Division has a formal information-sharing agreement with CFPB’s Supervision, Enforcement, and Fair Lending Division. Under this agreement, staff in the Office of Small Business Lending Markets used supervisory information on common data terminology used by business lenders to inform recommendations on data elements that should be included in a potential small business data collection rule. In addition, as discussed below, Markets Offices staff reviewed aggregated and anonymized supervisory information from CFPB’s examinations of payday lenders for research that informed the November 2017 Payday, Vehicle Title, and Certain High-Cost Installment Loans Rule, also referred to as the Payday Rule. In addition to rulemaking, CFPB has conducted targeted risk-monitoring activities to support certain statutory reporting requirements. For its mandated biennial credit card study, CFPB used its data-collection authorities under the Dodd-Frank Act to make four mandatory information requests to a total of 15 credit card issuers. According to CFPB officials, this study and other statutory reporting efforts—such as the bureau’s annual report on the Fair Debt Collection Practices Act—also support their market-monitoring efforts under the Dodd-Frank Act. CFPB notified the relevant federal and state regulators of its impending requests to the credit card issuers under those regulators’ supervision. Finally, CFPB has sometimes engaged in targeted data collection to learn more about specific areas of potential consumer financial risk. In some cases, CFPB has used its Dodd-Frank Act data collection authority under Section 1022 to require a company to provide data. For example, to understand developments with respect to person-to-person payments, CFPB required a payment processing company to provide certain information regarding its system. In other cases, CFPB has obtained targeted data through voluntary agreements with other regulators. For instance, in January 2018, CFPB reached an agreement with the Federal Reserve to obtain supervisory data on bank holding companies’ and intermediate holding companies’ mortgage and home equity loan portfolios. According to CFPB officials, they plan to use the data to monitor trends and risks in the mortgage market and inform bureau policy making. 
Monitoring of Consumer Risks Has Informed CFPB’s Rulemaking and Other Efforts The market monitoring conducted by CFPB’s Markets Offices staff contributes to bureau rulemaking and other functions, such as supervision, guidance to industry, consumer education, and reporting. Rulemaking. Since July 2016, CFPB’s market-monitoring efforts have informed certain rulemaking efforts. For example, Markets Offices analysis of the small-dollar lending market informed CFPB’s November 2017 Payday Rule, according to staff and the proposed and final rules. Staff said they had found that some borrowers were caught in a cycle of using payday loan products without the ability to repay the loans. Under the final rule, lenders for certain loans must reasonably determine up front that borrowers can afford to make the payments on their loans without needing to re-borrow within 30 days, while still meeting their basic living and other expenses. In addition, CFPB’s November 2016 Prepaid Accounts Rule reflected market-monitoring information and other research that staff helped collect on prepaid accounts. The rule incorporated findings from CFPB’s 2014 analysis of prepaid account agreements, which CFPB conducted to understand the potential costs and benefits of extending existing regulatory provisions—such as error resolution protections—to such agreements. Further, CFPB’s market intelligence reports we reviewed from 2017 and 2018 reflected Markets Offices staff’s communication with industry regarding a debt-collection rule—a topic that has been on CFPB’s public rulemaking agenda since 2013, based in part on market-monitoring findings. Industry supervision and policy positions. Markets Offices staff’s market-monitoring findings have informed CFPB’s efforts to supervise institutions and communicate policy positions to industry participants. Staff assist the Supervision, Enforcement, and Fair Lending Division in its annual risk-based prioritization process. In 2018, for example, staff provided information on market size and risk for more than a dozen market areas, which helped the supervision division prioritize its coverage of those market areas in its examination schedule. Markets Offices staff told us they also have met frequently with supervision staff to share issues identified through monitoring and determine whether supervisory guidance or related actions would be appropriate to address them. Further, according to CFPB, market-monitoring information supported bureau leadership’s public statements on selected market developments and informed policy documents, such as consumer protection principles on financial technology. Consumer education. CFPB’s risk monitoring has informed its broader consumer education efforts. CFPB’s Consumer Education and Engagement Division provides financial education tools, including blogs and print and online guides on financial topics such as buying a home, choosing a bank or credit union, or responding to debt collectors. Markets Offices staff provided us with several examples of consumer education materials for which they had contributed subject-matter expertise since July 2016. Examples included a consumer advisory on credit repair services and blog posts on mortgage closing scams and tax refund advance loans. Public reports. CFPB’s market-monitoring findings have informed several of its public reports since July 2016. 
According to CFPB officials, when Markets Offices staff identify risks they think could be mitigated by public communications to consumers, they work with the Consumer Education and Engagement Division, as well as other divisions, to publish relevant material. As noted earlier, the Dodd-Frank Act requires CFPB to issue at least one report annually of significant findings from its monitoring of risks to consumers in the offering or provision of consumer financial products or services. CFPB officials stated that this requirement is addressed by the first section of CFPB’s semiannual reports to Congress, which discusses significant problems consumers face in shopping for or obtaining consumer financial products and services. CFPB officials further noted that other public CFPB reports include information related to risks to consumers and may also respond to the annual Dodd-Frank Act reporting requirement. For example, CFPB’s December 2017 biennial report on the consumer credit card market discussed credit card debt collection and persistent indebtedness faced by some consumers, among other consumer financial risks. In addition, CFPB’s quarterly consumer credit trend reports have discussed risks related to consumers financing auto purchases with longer-term loans. CFPB Lacks a Systematic Bureau-Wide Process for Prioritizing Which Consumer Financial Risks to Address CFPB currently lacks a systematic, bureau-wide process for prioritizing financial risks facing consumers—using information from its market monitoring, among other sources—and for considering how it will use its tools to address those risks. In 2015, CFPB initiated such a process, but CFPB officials said that the most recent round of this process was completed in 2017 and that its leadership has not yet decided whether to continue using the process. In a February 2016 public report, CFPB described this process (which CFPB refers to internally as “One Bureau”) for deploying shared bureau-wide resources to address some of the most troubling problems facing consumers. According to the report, through this One Bureau process, CFPB prioritized problems that pose risks to consumers based on the extent of the consumer harm CFPB had identified and its capacity to eliminate or mitigate that harm. The report identified near-term priority goals in nine areas where CFPB hoped to make substantial progress within 2 years. It provided evidence of the nature or extent of risks facing consumers and described how CFPB planned to use its tools—such as rulemaking, supervision, enforcement, research, and consumer education—to address the priority goals. As part of the One Bureau process, CFPB created several cross-bureau working groups, which were focused on specific market areas and tasked with helping ensure progress toward CFPB’s near-term priority goals, among other responsibilities. The bureau revisited its stated priorities in June 2017 to guide its work through fiscal year 2018. However, officials said that while the working groups continue to facilitate communication, informal collaboration, and strategy-setting across the bureau, CFPB has not decided whether to engage in a third round of prioritization under the One Bureau process. The bureau was without a permanent Director from November 2017 until December 2018, when the Senate confirmed a new Director. CFPB officials told us that CFPB may revise its approach to prioritization under new leadership. 
Federal internal control standards state that management should use quality information to achieve agency objectives, such as by using that information to make informed decisions. In addition, the standards state that management should identify, analyze, and respond to risks related to achieving the defined objectives. Through One Bureau, CFPB had a process to use the large amount of data and market intelligence it collected on consumer risks to make informed decisions about its bureau-wide policy priorities and how it would address them. CFPB has mechanisms in place for the Markets Offices to inform the work of individual divisions. For example, as noted, Markets Offices staff contribute to rulemaking efforts (including through participation on rulemaking teams) and to the annual setting of supervisory priorities. However, although the Markets Offices continue to collect market intelligence and contribute to cross-bureau working groups, CFPB currently lacks a process for systematically prioritizing risks or problems facing consumers and identifying the most effective tools to address those risks. CFPB officials noted that the bureau issued 12 requests for information in early 2018 to seek public input to inform its priorities. Topics covered by these requests for public input have included the bureau's rulemaking process and its inherited and adopted rules. In an October 2018 statement, CFPB announced that it expected to publish an updated statement of rulemaking priorities by spring 2019 based on consideration of various activities, including its ongoing market monitoring and its analysis of the public comments from the requests for information. However, this prioritization effort focuses on setting rulemaking priorities and does not incorporate all of CFPB's other tools to respond to consumer financial risks. While CFPB has continued to take steps to consider information to inform its policy priorities, a systematic, bureau-wide process to prioritize risks to consumers and consider how CFPB will use its full set of tools to address them could help to ensure that CFPB effectively focuses its resources on the most significant risks to consumers. This, in turn, could enhance CFPB's capacity to meet its statutory consumer protection objectives.

CFPB Has Taken Steps to Meet Statutory Requirements for Retrospectively Assessing Significant Rules

CFPB Developed Criteria to Identify and Assess Relevant Rules

In two internal memorandums, CFPB documented an initial process for meeting the Dodd-Frank Act requirement to retrospectively assess significant rules or orders and issue reports of such assessments within 5 years of the rule or order's effective date. According to CFPB officials, the bureau may modify the process for future work after it has completed its first three assessments. The assessments will be in addition to other regulatory reviews conducted by CFPB. To determine which of its final rules were significant for purposes of the Dodd-Frank Act retrospective assessment requirement, CFPB created a four-factor test. In applying this test, CFPB analyzes the rule's

1. cumulative annual cost to covered persons of over $100 million,
2. effects on the features of consumer financial products and services,
3. effects on business operations of providers that support the product or
4. effects on the market, including the availability of consumer financial products and services.
The memorandums recommended weighing the first factor more heavily and considering factors two through four cumulatively, so that high-cost rules tend to be considered significant. If a rule’s cumulative annual costs exceed $100 million, CFPB may consider the rule to be significant even if the cumulative effect from factors two through four is small. If the rule’s costs do not exceed $100 million, there must be a large cumulative effect from factors two through four for the rule to be considered significant. After applying the test to nine rules in early 2017, CFPB determined that three were significant for retrospective assessment purposes: Remittance Rule. This rule covers remittances, which are a cross- border transfer of funds. Ability-to-Repay/Qualified Mortgage Rule (ATR/QM Rule). This rule covers consumers’ ability to repay mortgage loans and categories of mortgage loans that meet the ability-to-repay requirement (qualified mortgages). Real Estate Settlement Procedures Act (RESPA) Servicing Rule. This rule covers loan servicing requirements under RESPA. CFPB staff told us that in the future they plan to apply the four-factor test to rules not already subject to an assessment within 3 years of the rules’ effective dates, pending new leadership’s review of the test. As of November 2018, staff told us they had not yet formally applied the test to any additional rules. However, they told us that they plan to apply the test to the TILA-RESPA Integrated Disclosure Rule in 2019. If CFPB determines that the rule is significant, CFPB officials said they plan to complete an assessment in late 2020. In addition to outlining the four-factor test, a March 2016 memorandum documented CFPB’s decision to generally focus any significant new data collection efforts on a rule’s effects on consumer and market-wide outcomes rather than effects on businesses. In the memorandum, CFPB noted that the objectives of many of its rules focus on improved consumer experiences and outcomes, such as reductions in loan-default risk and improved access to financial product information and credit. However, the memorandum also noted that CFPB would assess outcomes for businesses when data were available at minimal cost. In addition, the memorandum explained that CFPB would consider spending additional resources to collect data on business outcomes under certain conditions, such as when unfavorable outcomes for businesses could meaningfully affect significant numbers of consumers. Although CFPB stated in its March 2016 memorandum that it did not plan to formally assess the previously mentioned three rules’ costs or benefits to providers, it stated in its October 2018 Remittance Rule Assessment Report that it may reconsider that decision for future rule assessments. In the March 2016 memorandum, CFPB also documented a decision to not make specific policy recommendations in the final reports for the retrospective assessments. CFPB expects the findings from its final assessment reports to inform its policy development process, through which it makes decisions about future rulemaking efforts. In the March 2016 memorandum, CFPB explained that separating the assessments from policy recommendations would keep the assessments focused on evidence-based descriptions. As previously described, CFPB also issued requests for information to obtain public input on effects of its inherited and adopted rules, in addition to the required retrospective assessments. 
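To make the logic of the four-factor significance screen described above concrete, the sketch below encodes it as a simple decision rule. This is a minimal, hypothetical illustration based only on the description in this report: the $100 million cost threshold and the four factors come from CFPB's memorandums as characterized here, while the 0-to-3 scoring scale, the cumulative-effect cutoff, and all function and variable names are assumptions made for the example rather than CFPB's actual methodology.

# Hypothetical sketch of the four-factor significance screen described above.
# The $100 million threshold and the four factors reflect the report's
# description; the 0-3 scoring scale and the cumulative-effect cutoff are
# illustrative assumptions, not CFPB methodology.

from dataclasses import dataclass

COST_THRESHOLD = 100_000_000  # factor 1: cumulative annual cost to covered persons

@dataclass
class RuleAssessment:
    name: str
    annual_cost: float        # factor 1, in dollars
    product_features: int     # factor 2, illustrative 0-3 score
    business_operations: int  # factor 3, illustrative 0-3 score
    market_effects: int       # factor 4, illustrative 0-3 score

def is_significant(rule: RuleAssessment, cumulative_cutoff: int = 6) -> bool:
    """Factor 1 is weighed most heavily; factors 2-4 are considered cumulatively."""
    if rule.annual_cost > COST_THRESHOLD:
        # High-cost rules tend to be treated as significant even if other effects are small.
        return True
    cumulative_effect = (rule.product_features + rule.business_operations
                         + rule.market_effects)
    # Lower-cost rules need a large cumulative effect from factors 2-4.
    return cumulative_effect >= cumulative_cutoff

# Example: a lower-cost rule whose combined effects on factors 2-4 are large.
example = RuleAssessment("illustrative rule", 40_000_000, 3, 2, 2)
print(is_significant(example))  # True under these assumed scores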
CFPB staff stated that they plan to use the lessons learned from the initial assessment process to inform their procedures for future assessments. According to CFPB, a future procedures document is to outline its process for the retrospective assessments required by the Dodd-Frank Act as well as for similar assessments CFPB may conduct pursuant to other statutes or executive orders.

CFPB Has Made Progress toward Completing Its First Three Assessments

For each of the three rules it determined to be significant, CFPB created detailed assessment plans and a timeline for completion (see table 1). Each plan defined which aspects of the rules the assessment would focus on; outlined the scope and methodology, including challenges for the assessment and potential limitations of the methodology; and identified data CFPB planned to gather and compile, including CFPB's own and third-party data, and explained how the data would be used to evaluate the effects of the rule. CFPB issued requests for information between March and June of 2017 to collect public input on each assessment and created plans for incorporating the comments in each assessment report. As required by the Dodd-Frank Act, these requests solicited comments on modifying, expanding, or eliminating the rules. In addition, CFPB requested comments on the assessment plans and invited suggestions on other data that might be useful for evaluating the rules' effects. In a document provided to us, CFPB described its preliminary plan to summarize comments received from the public and use the information received. CFPB staff told us they adjusted their research questions and data sources on all three assessments in response to comments. For example, based on comments, they added a question to an industry survey about a provision of the Remittance Rule and incorporated a new data source into the ATR/QM Rule and RESPA Servicing Rule assessments. Other data sources used for the assessments include federal and state agencies, voluntary surveys of providers of consumer financial products, and loan data from servicers. For example, for the Remittance Rule assessment, CFPB sent a voluntary industry survey to 600 money transmitters, banks, and credit unions on how the rule has affected their business practices and costs, as well as potential problems in specific market segments. For the RESPA Servicing Rule assessment, CFPB conducted qualitative structured interviews with mortgage servicers to learn about changes servicers had to make in response to the rule.

CFPB published its Remittance Rule Assessment Report in October 2018. The report analyzed trends in the volume of remittance transfers, the number of providers, and the price of transfers. For example, CFPB found that declining remittance prices and an increase in the volume of remittances—trends that had begun before the rule's effective date—continued afterward. However, CFPB was unable to conclude whether these trends would have changed without the rule. In addition, the report noted that new technology has increased access to remittances but has also complicated CFPB's attempts to measure the effects of the Remittance Rule on consumers. The report also estimated the rule's initial and continuing compliance costs for businesses, finding that the rule added between 30 and 33 cents in one-time costs in 2014 and between 7 and 37 cents in continuing costs per remittance in 2017. In addition, the report summarized comments and information CFPB received from a request for information in March 2017.
Conclusions In monitoring risks of financial products and services to consumers, CFPB has drawn from a wide range of sources, and its findings have informed its key consumer protection tools, such as rulemakings and consumer education materials. In 2016 and 2017, CFPB’s One Bureau process allowed it to consider the market information it collected to prioritize the most important risks to consumers and determine how to most effectively address those risks on a bureau-wide basis. However, CFPB has not yet decided whether to use the One Bureau process to reexamine its priorities and has instead relied on prioritization mechanisms that focus on its use of individual policy tools, such as its processes for setting rulemaking and supervision priorities. Putting a systematic bureau-wide prioritization process in place could help CFPB ensure that it focuses on the most significant risks to consumers and effectively meets its statutory consumer protection objectives. Recommendation for Executive Action The Director of CFPB should implement a systematic process for prioritizing risks to consumers and considering how to use the bureau’s available policy tools—such as rulemaking, supervision, enforcement, and consumer education—to address these risks. Such a process could incorporate principles from the prior One Bureau process, such as an assessment of the extent of potential harm to consumers in financial markets, to prioritize the most significant risks. (Recommendation 1) Agency Comments and Our Evaluation We provided a draft of this product to CFPB for comment. We also provided the relevant excerpts of the draft report to the Federal Housing Finance Agency, the Federal Reserve, and the Office of the Comptroller of the Currency for their review and technical comments. CFPB provided oral and written comments, which are summarized below. CFPB’s written comments are reproduced in appendix I. In addition, CFPB and the Federal Housing Finance Agency provided technical comments, which we incorporated as appropriate. The Federal Reserve and the Office of the Comptroller of the Currency had no comments. In oral comments provided on November 29, 2018, CFPB’s Acting Deputy Director and other CFPB officials clarified the status of the One Bureau process. The officials clarified that while CFPB officials had previously told us that the One Bureau process was on hold, work on One Bureau priorities has continued with support from a set of cross-bureau working groups. The officials noted that CFPB had not yet determined whether to engage in another round of the One Bureau priority-setting process. In addition, in its written comments, CFPB highlighted the role of the cross- bureau working groups in its market monitoring and other efforts. In response to these comments, we made edits to clarify the status of the One Bureau process and describe the role of the cross-bureau working groups. In its written comments, CFPB did not agree or disagree with our recommendation but stated that it will endeavor to improve its processes for identifying and addressing consumer financial risks. CFPB stated that it recognizes the importance of having processes in place to prioritize and address risks to consumers in the financial marketplace. CFPB cited examples of existing processes—such as its processes for setting its rulemaking agenda and supervisory priorities—that were designed to ensure that its risk monitoring informs its work. 
In the oral comments, CFPB officials expressed concern that the draft report’s characterization of a lack of a systematic process for prioritizing risks to consumers might suggest that CFPB entirely lacks processes in this regard. We note that the draft report described CFPB’s existing processes for setting rulemaking and supervisory priorities. While we agree that these processes help CFPB to prioritize work in these areas, we maintain that these processes do not reflect a systematic, bureau-wide process for prioritizing risks to consumers and determining how to most effectively address them. We made minor edits to the report to clarify that the process CFPB lacks is a bureau-wide process that considers how it will use its full set of tools to address risks to consumers. We maintain that having such a process would help to ensure that CFPB focuses its resources on the most significant consumer risks and is well positioned to meet its consumer protection objectives. We are sending copies of this report to CFPB, the Federal Housing Finance Agency, the Federal Reserve, the Office of the Comptroller of the Currency, the appropriate congressional committees and members, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or clementsm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Appendix I: Comments from the Consumer Financial Protection Bureau Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact above, John Fisher (Assistant Director), Lisa Reynolds (Analyst-in-Charge), Bethany Benitez, Joseph Hackett, Marc Molino, Jennifer Schwartz, and Tyler Spunaugle made key contributions to this report.
Why GAO Did This Study The Dodd-Frank Act created CFPB to regulate the provision of consumer financial products and services. Congress included a provision in statute for GAO to study financial services regulations annually, including CFPB’s related activities. This eighth annual report examines steps CFPB has taken to (1) identify, monitor, and report on risks to consumers in support of its rulemakings and other functions and (2) retrospectively assess the effectiveness of certain rules within 5 years of their effective dates. GAO reviewed CFPB policies and procedures, internal and public reports, and memorandums documenting key decisions, assessment plans, and requests for public comment. GAO also interviewed officials from CFPB, three federal agencies with which it coordinated, and representatives of consumer and industry groups. What GAO Found In accordance with the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), the Consumer Financial Protection Bureau (CFPB) has routinely monitored the consumer financial markets to identify potential risks to consumers related to financial products and services. CFPB monitors consumer complaints, analyzes market data, and gathers market intelligence from external groups (see figure for sources of CFPB’s monitoring). CFPB has used risk-monitoring findings to inform its rulemakings, supervision, and other functions. In 2015, CFPB initiated a bureau-wide process for using market data and other information to set policy priorities related to addressing risks to consumers. However, CFPB has not yet decided whether it will continue to use this process to set priorities. CFPB currently lacks a systematic, bureau-wide process for prioritizing financial risks to consumers and considering how it will use its tools—such as rulemaking, supervision, and consumer education—to address them. Federal internal control standards state that management should use quality information to achieve agency objectives and that it should also identify, analyze, and respond to risks related to achieving those objectives. Implementing a bureau-wide prioritization process could help to ensure that CFPB effectively focuses its resources on the most significant financial risks to consumers and enhances its ability to meet its statutory consumer protection objectives. CFPB has taken steps to retrospectively assess its significant rules within 5 years of these rules becoming effective, as required by the Dodd-Frank Act. CFPB developed and applied criteria to identify three rules as significant and requiring a retrospective assessment. For these three rules, CFPB created assessment plans, issued public requests for comment and information, and reached out to external parties for additional data and evidence. In October 2018, CFPB issued its first assessment report on a rule related to cross-border money transfers. Among other things, the report found that certain trends, such as increasing volume of these transfers, continued after the rule took effect. CFPB expects to complete the other two assessments by the January 2019 deadline. What GAO Recommends GAO recommends that CFPB implement a systematic process for prioritizing risks to consumers and considering how to use its available policy tools—such as rulemaking, supervision, enforcement, and consumer education—to address these risks. CFPB did not agree or disagree with the recommendation but agreed with the importance of having processes in place to prioritize and address consumer financial risks.
Background The primary source of federal funding for new fixed-guideway projects or extensions to existing fixed-guideway systems is FTA’s Capital Investment Grants program, which is a discretionary and competitive grant program funded through annual appropriations. The program is governed by statutory provisions, and funding is provided in the form of a construction grant agreement, which is subject to congressional appropriations. Projects that compete for funding through the Capital Investment Grants program are designed and implemented by project sponsors, which are usually local transit agencies. Prior to 2012, project sponsors typically applied for funding as either a New Starts or Small Starts project. New Starts projects are capital investments whose sponsors request $100 million or more in Capital Investment Grants funding or have an anticipated capital cost of $300 million or more. Small Starts projects are capital investments whose sponsors request less than $100 million in Capital Investment Grants funding and have an anticipated capital cost of less than $300 million. In 2012, Congress created a third category of eligible projects called Core Capacity projects. Unlike New Starts and Small Starts, for which the amount of funding project sponsors request and the anticipated capital cost of a project are key factors, Core Capacity projects are not defined by cost. Instead, Core Capacity projects are “corridor-based capital investments” in existing fixed-guideway systems that increase the capacity of a corridor by not less than 10 percent, in a corridor that is at or above capacity or is expected to be within 5 years. Examples of Core Capacity projects include capital investments to expand a transit system’s platforms and acquire real property, rights of way, or rail cars associated with corridor improvements increasing capacity. To enter the Capital Investment Grants program, project sponsors submit an application to FTA with information on the proposed project, such as a description of the transportation problem the project seeks to address, among other requirements. If accepted into the program, project sponsors must then follow a multi-step, multi-year development process outlined in statute during which FTA determines if the project is eligible for funding through the Capital Investment Grants program. The development process that project sponsors must follow varies depending on whether the project is a New Starts or Core Capacity project, or a Small Starts project. For example, New Starts and Core Capacity projects are required to complete a two-phase development process. During the first phase, called Project Development, project sponsors must complete an environmental review process outlined in the National Environmental Policy Act of 1969 and address other statutory requirements. Project sponsors must also provide FTA with sufficient information for FTA to evaluate and rate the project, among other FTA requirements. To complete the second phase, called Engineering, project sponsors must, among other requirements, develop a firm and reliable cost, scope, and schedule for their project and obtain all non-Capital Investment Grants program funding commitments. Small Starts projects complete a development process that is similar but consists of only one phase, called Project Development. During the development process, FTA is required to evaluate and rate projects using a number of statutory criteria designed to assess the merit of a project (i.e., project justification). 
For example, for Core Capacity projects, FTA is required to evaluate and rate a project against six criteria: (1) mobility improvements, (2) environmental benefits, (3) cost- effectiveness, (4) the congestion relief associated with the project, (5) the economic development effects associated with the project, and (6) the existing capacity needs of the corridor. FTA is also required to evaluate and rate the local financial commitment to a project, including evidence of stable and dependable financing sources, as well as the project sponsor’s ability to operate the project and continue to operate any related transit system. FTA’s ratings are “point-in-time” evaluations—meaning that they can change—as a project progresses through the development process. To receive funding, project sponsors must complete the development process outlined in statute and meet all statutory eligibility requirements. Projects must also address all FTA requirements, and FTA must recommend the project for funding to Congress. FTA’s recommendations are based on its evaluation and rating of the project using the criteria specified in statute, the availability of Capital Investment Grants program funds, and the readiness of the project, such as whether the project’s cost, scope, and schedule are advanced enough to be considered reliable. As mentioned earlier, the funding that projects receive is subject to congressional appropriations. As we previously reported, both MAP-21 and the FAST Act made numerous changes to the Capital Investment Grants program. For example, in addition to establishing Core Capacity projects as a new category of eligible projects, MAP-21 reduced the number of phases in the development process that projects in the Capital Investment Grants program must follow to be eligible for and receive funding. According to FTA officials, changes the FAST Act made to the program include raising the dollar threshold for eligibility for New Starts and Small Starts projects and increasing the number of projects that are eligible for funding by allowing joint public transportation and intercity-passenger-rail service to compete for funding. FTA Has Not Addressed Three Out of Four Outstanding Statutory Provisions FTA has not addressed three of four outstanding statutory provisions concerning the Capital Investment Grants program. As shown in table 1, three of these provisions stem from MAP-21, which was enacted in 2012, and the fourth stems from the FAST Act, enacted in 2015. When we initiated our review, FTA officials told us FTA did not have immediate plans to address the outstanding statutory provisions due to the administration’s stated intent to phase out the Capital Investment Grants program. As mentioned earlier, in 2017 the President’s Fiscal Year 2018 budget first proposed phasing out the program, stating that future investments in new transit projects should be funded by the localities that use and benefit from those projects. Since then, FTA’s annual reports to Congress, which contain funding recommendations for the program, have reflected this direction. For example, FTA’s Fiscal Year 2018 report recommended that Congress only fund those projects that had already received a grant agreement through the program, and FTA’s Fiscal Year 2019 report stated that FTA neither requests nor recommends any funding for projects in the Capital Investment Grants program beyond those that have already received a grant agreement. 
However, as also mentioned earlier, in March 2018 the Consolidated Appropriations Act, 2018, provided the program with more than $2.6 billion, and also directed FTA to continue to administer the Capital Investment Grants program in accordance with the program's procedural and substantive requirements. Following the enactment of the Consolidated Appropriations Act, 2018, FTA officials told us that they are reviewing the law and determining next steps. However, they did not indicate that they have any immediate plans to address those provisions. Moving forward, if FTA does not take steps to address the outstanding provisions, it runs the risk of violating federal law. During our review, FTA officials told us that other factors have also influenced FTA's decisions. For example, FTA officials noted that since our last report, issuing regulations regarding the evaluation and rating process for Core Capacity projects was not identified as one of the Department's regulatory priorities and, currently, FTA has no plans to issue such regulations. However, issuing regulations to address this provision is important. FTA's policy guidance notes that aspects of the development process, such as the steps to get into and through the development process, were not subject to public outreach and are open to be discussed in future updates to the Major Capital Projects rule. While FTA officials emphasized that the agency's policy guidance is intended to serve as a guide for running the program until FTA initiates further rulemaking, FTA's policy guidance also notes that further rulemaking is needed to fully implement the changes MAP-21 and the FAST Act made to the Capital Investment Grants program. Until FTA initiates this rulemaking, it is unclear when, if at all, FTA might address most of these outstanding provisions. With respect to addressing the program of interrelated projects provisions, FTA officials reiterated their concerns, as we noted in our last report, that establishing an evaluation and rating process for a program of interrelated projects is difficult. As an example, FTA officials noted that as part of the New Starts, Small Starts, and Core Capacity evaluation process, FTA takes into account factors such as a corridor's current ridership estimates and future ridership projections. According to FTA officials, evaluating and rating projects that encompass multiple corridors is challenging because it requires that FTA establish new measures and breakpoints—that is, thresholds for FTA's ratings. Both FTA officials and the American Public Transportation Association representatives we spoke with told us that FTA has sought input from the transit industry in the past to help address these concerns. However, FTA officials also told us that addressing their concerns requires additional research and public outreach on FTA's part and that undertaking that work has not been a priority of the Department of Transportation. Representatives from two of the sponsors we spoke with, as well as representatives of the American Public Transportation Association, told us that the transit industry is interested in seeing FTA implement the program of interrelated projects provisions and that doing so could help transit agencies deliver projects more efficiently. For example, according to one sponsor, implementing those provisions could help this sponsor purchase materials in bulk and reduce costs.
Until FTA takes steps to address this provision, the federal government or project sponsors may be missing opportunities to deliver transit projects more efficiently. In the case of the FAST Act’s provision establishing a pilot program—called the “Expedited Project Delivery for Capital Investment Grants Pilot Program,” designed to create a fast-track approval process for projects that meet specific statutory criteria, such as having a maximum federal share of 25 percent—FTA published a notice in the Federal Register in 2016 stating that it would publish guidance describing the process project sponsors should follow to apply for consideration as a pilot project. However, at the time of our review, FTA had not provided sponsors with information that describes the process they should follow to apply for consideration as a pilot project. FTA officials told us that project sponsors have generally not expressed interest in participating in the program under the FAST Act, and most of the project sponsors that we spoke with agreed. Specifically, four of the six Core Capacity project sponsors told us that some FAST Act requirements, such as a requirement that projects in the program be supported in part by a public-private partnership, made participating in the program less attractive. According to two of the sponsors, private investors do not have an incentive to invest in public transit projects unless they can profit from their investment, but the FAST Act limits that opportunity by requiring that projects participating in the pilot program be operated and maintained by employees of an existing public transportation provider. Nonetheless, in February 2018, the President’s infrastructure plan recommended restructuring this program to better achieve the goal of expediting project delivery. Among the recommended changes are making the expedited approach available to all projects rather than just on a pilot basis and increasing the federal share from 25 to 50 percent. Describing the process project sponsors should follow to apply for consideration as a pilot project under this program could help FTA better understand whether further changes are needed. The one statutory provision FTA has addressed is a MAP-21 provision directing FTA to use an expedited technical-capacity review process for certain experienced project sponsors. At the time of our 2016 report, FTA was in the process of finalizing the development of a tool to address this provision, and since then, FTA has implemented that tool. Specifically, the tool helps FTA staff determine the level of review required of project sponsors based on a number of risk factors, such as the complexity of a proposed project and the sponsor’s experience level. According to FTA, this tool helps FTA staff develop project-specific oversight plans that specify the resources FTA should devote when overseeing a particular project. Projects that FTA determines are at lower risk have fewer oversight resources allocated to them, and FTA officials told us that they have been using this tool on all projects in the program since mid-2017.
FTA Has a Process to Verify That Requirements Are Met before Recommending a Core Capacity Project for Funding Based on our review of FTA’s policy guidance, instructions for applying to the Capital Investment Grants program, and interviews with FTA officials and six Core Capacity project sponsors, we found that FTA has established a process to verify that proposed Core Capacity projects meet statutory requirements before recommending projects for funding. In addition, based on our review of documentation supporting FTA’s funding recommendations for the two Core Capacity projects with grant agreements as of June 2017, as well as interviews with FTA officials and both project sponsors, we found that FTA took steps to verify that the statutory requirements were met before recommending those two projects for full funding grant agreements. Representatives of the other four sponsors we spoke with also confirmed that FTA is taking steps to verify that their projects meet the statutory requirements. Such requirements include specific project eligibility and other requirements that projects must meet during the Project Development and Engineering phases of the development process. Project Eligibility: Under statute, Core Capacity projects must meet specific eligibility requirements. For example, along the lines previously noted, statutory provisions require that a Core Capacity project be a substantial corridor-based capital investment located in a corridor that is at or over capacity, or projected to be at or over capacity within the next 5 years. These projects must also increase the corridor’s capacity in the peak hour and direction of travel by not less than 10 percent. To verify that projects meet these requirements, project sponsors and FTA officials told us that FTA staff assisted project sponsors in refining their project’s corridor (see fig. 1), and reviewed information provided by the sponsors on such things as the corridor’s current ridership estimates; the type, configuration, and capacity of light- and heavy-rail cars; and the number of seats on commuter rail cars. FTA’s policy guidance outlines the criteria that FTA uses, criteria that FTA developed after consulting industry standards and reaching out to the transit industry. FTA officials emphasized that they apply these criteria consistently across projects when evaluating whether a project’s corridor is at capacity. As another example, under statute, Capital Investment Grant funding for Core Capacity projects may not be applied to “state of good repair” improvements to the transit system. “State of good repair” improvements include, among other things, the replacement or rehabilitation of existing rail cars, tracks, or communications equipment due to normal wear and tear or preventive maintenance. Core Capacity projects are likely to be intertwined with state of good repair improvements, however, and FTA staff work with project sponsors to identify which project costs within the project corridor are eligible to receive Core Capacity funding and which are related to maintaining a state of good repair. Project Development Phase: As with the project eligibility requirements discussed above, statutory provisions identify specific requirements that must be met during the Project Development phase, and we found that FTA has a process to verify that those requirements are met. 
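As a rough illustration of the corridor capacity arithmetic behind the 10-percent eligibility threshold discussed above, the sketch below compares existing and proposed peak-hour, peak-direction capacity using hypothetical service levels. FTA's actual capacity definitions (for example, how seated and standing riders are counted by vehicle type) are set out in its policy guidance and are not reproduced here.

```python
# Hypothetical numbers for illustration; not FTA's actual capacity model.

def peak_hour_capacity(trains_per_hour, cars_per_train, riders_per_car):
    """Capacity in the peak hour and peak direction of travel."""
    return trains_per_hour * cars_per_train * riders_per_car

existing = peak_hour_capacity(trains_per_hour=20, cars_per_train=8, riders_per_car=145)
proposed = peak_hour_capacity(trains_per_hour=22, cars_per_train=8, riders_per_car=145)

increase = (proposed - existing) / existing
print(f"peak-hour capacity increase: {increase:.1%}")     # 10.0%
print("meets 10% statutory minimum:", increase >= 0.10)    # True
```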
For example, under statute, Core Capacity projects have 2 years after the day on which they enter into Project Development to complete the activities required to obtain a project rating by FTA. Completion of the Project Development phase is marked by the completion of the environmental review process required under the National Environmental Policy Act of 1969 and FTA’s assignment of a project rating. FTA’s policy guidance encourages project sponsors to perform whatever work they feel is necessary prior to requesting entry into Project Development to enable them to complete this phase within 2 years. According to both FTA officials and representatives from each of the six Core Capacity project sponsors, FTA staff work closely with project sponsors to assist them with preparations to enter Project Development, review their documentation, and complete this phase on time. Further, each of the six Core Capacity project sponsors we spoke with told us that FTA follows up with sponsors to ensure that all statutory and FTA requirements for the Project Development phase are met. For example, the project sponsors reported that FTA officials hold a variety of periodic (e.g., weekly, monthly, quarterly) meetings with project sponsors during which they discuss various aspects of the sponsor’s progress toward meeting the statutory requirements. Under statute, to assign a project rating, FTA must evaluate and rate Core Capacity projects against specific project justification criteria and local financial commitment criteria, as well as ensure that the project has satisfied the project eligibility and other statutory requirements, such as having been selected as the locally preferred alternative and adopted into the appropriate regional transportation plans. To obtain the information needed to make these evaluations, FTA provides project sponsors with reporting instructions and templates on its website specifying its documentation requirements. These instructions and templates allow for the standardized review of the project eligibility requirements previously discussed, as well as aspects of the project justification and local financial commitment criteria. Representatives of two of the six Core Capacity project sponsors described these instructions and templates as helpful, and said the templates enable them to gauge what their project’s potential rating might be. Representatives of four sponsors also reported that when completing the templates they are in frequent contact with FTA officials to help ensure they are appropriately providing all required information. FTA officials inform sponsors that the agency reviews completed templates along with other information to assign project ratings. Pursuant to statute, once FTA determines that a Core Capacity project meets the specified project eligibility requirements, assigns the project a rating, and determines that the environmental review process has been completed, among other requirements, the project is ready to enter the Engineering phase. Before advancing the project to Engineering, FTA requires project sponsors to provide proof that at least 30 percent of the non-Capital Investment Grants funding necessary to complete the project is committed, as well as a variety of other documentation, such as a 20-year financial plan; a detailed cost estimate; a detailed project management plan and project schedule; a preliminary safety hazard, threat, and vulnerability analysis; and a draft “before and after” study plan.
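The non-Capital Investment Grants funding commitment described above (at least 30 percent committed before a project can enter Engineering, a share that, as discussed below, must rise to at least 50 percent during that phase) reduces to simple arithmetic. The sketch below uses hypothetical dollar amounts for illustration.

```python
# Hypothetical dollar amounts; illustrates the commitment thresholds
# described in the text (30% to enter Engineering, 50% during Engineering).

total_project_cost = 1_700_000_000
cig_request        =   900_000_000   # Capital Investment Grants share sought
non_cig_needed     = total_project_cost - cig_request
non_cig_committed  =   400_000_000   # local and state funds committed so far

share_committed = non_cig_committed / non_cig_needed
print(f"non-CIG funding committed: {share_committed:.0%}")            # 50%
print("meets 30% threshold to enter Engineering:", share_committed >= 0.30)  # True
print("meets 50% threshold during Engineering:  ", share_committed >= 0.50)  # True
```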
Once a project sponsor indicates it is ready to advance its project to the Engineering phase, FTA assigns oversight contractors, who take a prominent role in overseeing the day-to-day management of the project in order to provide FTA with ongoing reports of the project sponsor’s financial and technical progress. Engineering Phase: Based on our review of documentation for the two Core Capacity projects that have received a grant agreement, we found that FTA also has a process to verify that the requirements specified in statute applicable to the Engineering phase are met before recommending a Core Capacity project for funding. Pursuant to statute, during the Engineering phase the project sponsor must continue to show the financial capability to complete the project and maintain and operate the future transit system with stable and dependable funding sources. FTA requires that project sponsors show increasing financial capacity during the first 3 years in this phase by providing proof of commitments for at least 50 percent of all non-Capital Investment Grants funding. Pursuant to statute, project sponsors must also continue to show the technical capability to complete the project. FTA requires that project sponsors show increasing technical capacity during this phase by making sufficient progress advancing the level of project design. According to representatives from the two Core Capacity projects that have received a grant agreement, FTA’s oversight contractors interact with project sponsors frequently throughout the Engineering phase, and are responsible for assisting FTA in determining whether sponsors have the technical and financial capacity to complete their projects. Both FTA officials and the two project sponsors reported that these oversight contractors review project documentation throughout the Engineering phase to verify that the sponsor meets FTA requirements to execute a grant agreement and that the project is otherwise ready to advance. In reviewing documentation for the two Core Capacity projects that have received a grant agreement, we found these oversight contractors provided FTA with their comprehensive assessments of the project sponsor’s technical and financial capacity. FTA officials said they use these assessments when evaluating whether a project should be recommended for a grant agreement. Conclusions For years, the Capital Investment Grants program has served as the primary source of federal financial assistance to new transit projects across the United States. During this review, however, the future of that program has been unclear, given the administration’s stated intent to phase out the program and FTA’s actions, which have reflected that direction. The Consolidated Appropriations Act, 2018, provided FTA with both the funding to continue awarding grants through the program and the direction to administer the program in accordance with the requirements specified in law. FTA stated that it is reviewing the law and determining next steps but did not indicate that it has specific plans or timeframes for addressing the three outstanding provisions discussed in this report. By not addressing those provisions, FTA runs the risk of failing to implement provisions of federal law, and the federal government or project sponsors may be missing opportunities to deliver transit projects more efficiently.
Recommendations for Executive Action We are making the following three recommendations to the Department of Transportation: The FTA Administrator should initiate a rulemaking regarding the evaluation and rating process for Core Capacity Improvement projects, consistent with statutory provisions. (Recommendation 1) The FTA Administrator should take steps, such as undertaking additional research or public outreach, to enable FTA to evaluate and rate projects in a program of interrelated projects, in a manner consistent with statutory provisions. (Recommendation 2) The FTA Administrator should take steps to describe the process project sponsors should follow to apply for consideration as a pilot project under the Expedited Project Delivery for Capital Investment Grants Pilot Program. (Recommendation 3) Agency Comments and Our Evaluation We provided a draft of this report to the Department of Transportation for review and comment. In its comments, which are reproduced in appendix II, the Department concurred with our recommendations. However, the Department also stated in its letter that our report did not adequately describe the steps FTA has completed to implement the statutory provisions discussed in this report. Further, the Department stated that FTA has demonstrated its intent to address the outstanding provisions. We agree with the Department that FTA has taken numerous actions toward addressing various statutory provisions of the Capital Investment Grants program, provisions contained in either MAP-21 or the FAST Act. As noted above in this report, we discussed many of those actions in our April 2016 review of the Capital Investment Grants program. At that time, we reported that FTA was making progress implementing MAP-21 and that FTA intended to take action over the next 2 years toward addressing the remaining provisions of MAP-21 and the new requirements of the FAST Act. However, as of this report, FTA has still not addressed all the provisions, and as the Department stated in its letter, FTA cannot specify when action will be taken to address the outstanding provisions. Accordingly, we believe that our assessment is an accurate reflection of FTA’s progress in addressing the outstanding statutory provisions of the Capital Investment Grants program as amended by MAP-21 and the FAST Act. We are sending copies of this report to interested congressional committees and the Secretary of the Department of Transportation. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions or would like to discuss this work, please contact me at (202) 512-2834 or GoldsteinM@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Core Capacity Improvement Projects Project description The Metropolitan Transportation Authority proposes to implement capacity improvements to the Canarsie L Line, which operates between South Brooklyn and Manhattan. Improvements include three new power substations and other upgrades necessary to increase capacity on the line. The Metropolitan Transportation Authority estimates that when the project is complete, capacity in the corridor will be increased 10 percent.
The Dallas Area Rapid Transit is proposing to extend and modify platforms along two existing light rail lines to accommodate longer trains. The Dallas Area Rapid Transit estimates that when the project is complete, capacity in the corridor will be increased 12 percent. The Dallas Area Rapid Transit is proposing to implement a second light-rail alignment through the central business district of Dallas to supplement the existing alignment. The Dallas Area Rapid Transit estimates that when the project is complete, capacity in the corridor will be increased 100 percent. The Northern Indiana Commuter Transportation District is proposing to construct a second track and make additional improvements along a 26.6-mile segment of its South Shore commuter rail line between Gary and Michigan City. The Joint Powers Board (also known as Caltrain) is implementing capacity improvements that include upgrading and electrifying a 51-mile commuter rail line extending from San Francisco to San Jose. Caltrain estimates that when the project is complete, capacity in the corridor will be increased 11 percent. The New Jersey Transit Corporation, in cooperation with the Port Authority of New York and New Jersey, the Gateway Program Development Corporation, and Amtrak, is proposing to replace an over 100-year-old drawbridge across the Hackensack River in Hudson County, New Jersey, with a new, two-track bridge, among other capacity improvements. The sponsors estimate that when the project is complete, capacity in the corridor will be increased 10 percent. The Chicago Transit Authority is implementing capacity improvements along a 5.6-mile corridor on the north side of Chicago. Improvements include the reconstruction of four stations, the installation of a new higher-capacity signal system, and the procurement of 32 new railcars. The Chicago Transit Authority estimates that when the project is complete, capacity in the corridor will be increased 15 percent. The Bay Area Rapid Transit District is proposing to implement capacity improvements between Oakland and Daly City in South San Francisco. Improvements include implementing communication-based train control equipment, the procurement of 252 rail cars, additional power substations, and the expansion of a maintenance facility. The Bay Area Rapid Transit District estimates that when the project is complete, capacity in the corridor will be increased 37 percent. Appendix II: Comments from the Department of Transportation Appendix III: GAO Contact and Staff Acknowledgments In addition to the contact above, Brandon Haller (Assistant Director); Melissa Bodeau; Kelsey Burdick; Geoffrey Hamilton; Wesley A. Johnson; Elke Kolodinski; Malika Rice; and Elizabeth Wood made key contributions to this report.
Why GAO Did This Study FTA's Capital Investment Grants program is the primary source of federal financial assistance to support transit projects that are locally planned, implemented, and operated. FTA evaluates and rates projects seeking funding through this program according to statutory criteria and recommends to Congress which projects to fund. The funding that project sponsors receive is subject to congressional appropriation. MAP-21 includes a provision for GAO to biennially review FTA's implementation of the Capital Investment Grants program. This report discusses: (1) FTA's progress in addressing statutory provisions contained in MAP-21 and the FAST Act and (2) how the evaluation and rating process FTA has established for Core Capacity Improvement projects enables FTA to verify that statutory requirements are met before recommending such projects for funding. GAO reviewed the relevant laws and FTA's guidance. GAO also interviewed FTA officials and six project sponsors, representing seven of the eight Core Capacity Improvement projects in the Capital Investment Grants program at the time of GAO's review. What GAO Found The Federal Transit Administration (FTA) has not addressed three statutory provisions concerning the Capital Investment Grants program contained in the Moving Ahead for Progress in the 21st Century Act (MAP-21) and the Fixing America's Surface Transportation Act (FAST Act). Specifically, FTA has not: issued regulations regarding the evaluation and rating process for Core Capacity Improvement projects, which are a category of eligible projects within the program; established a program of interrelated projects designed to allow for the simultaneous development of more than one transit project within the Capital Investment Grants program; or implemented a pilot program designed to create a fast-track approval process for transit projects that meet specific statutory criteria. Throughout this review, FTA officials told GAO they do not have immediate plans to address these three statutory provisions. Officials cited a proposal by the President to phase out the Capital Investment Grants program as one of the factors influencing this decision. However, in March 2018 the Consolidated Appropriations Act, 2018, provided the program with more than $2.6 billion and required FTA to continue to administer the program in accordance with the procedural and substantive requirements specified in statute. Subsequently, FTA officials told GAO that they are reviewing the Act and determining next steps. However, FTA officials did not indicate that they intend to address these provisions. If FTA does not implement the outstanding provisions, FTA and project sponsors—that is, local transit agencies—may be missing opportunities to deliver transit projects more efficiently. Based on a review of FTA's policy guidance, on FTA's instructions for applying to the Capital Investment Grants program, and on other documentation supporting the two Core Capacity Improvement projects that FTA has recommended for funding as of June 2017, GAO found that FTA has established a process to verify that proposed Core Capacity Improvement projects meet statutory requirements before recommending projects for funding. Core Capacity Improvement projects are capital investments designed to increase the capacity of an existing transit system and must meet specific statutory requirements to be eligible for funding through the program.
GAO found that, prior to recommending a project for funding, FTA works with project sponsors to verify that their proposed project includes elements that will increase transit system capacity rather than merely maintain the current system, that the required amount of local funding is committed to the project, and that sponsors have the technical and financial capacity to complete the project they are proposing, among other statutory requirements. What GAO Recommends FTA should initiate a rulemaking regarding the evaluation and rating process for Core Capacity Improvement projects and take steps to address two other statutory provisions. FTA agreed with the recommendations but disagreed with certain findings on which they are based. GAO believes these findings are valid, as discussed in this report.
Background We testified before the Senate Committee on Armed Services in September 2017 after four significant mishaps at sea resulted in the loss of 17 sailors’ lives and serious damage to Navy ships. We reported on some of the Navy’s challenges, including the degraded condition and expired training certifications of ships homeported overseas, reductions to ship crews that contributed to sailor overwork and safety risks, and an inability to complete maintenance on time. Since that time, the Navy has completed two internal reviews to address these and other challenges, identifying 111 recommendations to improve surface fleet readiness. The Navy formed an executive group to guide and closely track the implementation of recommendations, and its reform efforts are ongoing. As of November 2018, the Navy reported that it had implemented 78 (i.e., 70 percent) of these recommendations. Navy officials recognize that full implementation will take significant time and management attention to address the fundamental readiness challenges identified. In figure 1, we show photographs of two of the four Navy ships involved in significant mishaps that occurred in 2017. Both the USS Fitzgerald and the USS John S. McCain were involved in collisions that resulted in sailor fatalities. DOD has reported that more than a decade of conflict, budget uncertainty, and reductions in force structure have degraded its readiness; in response, the department has made rebuilding readiness a priority. The 2018 National Defense Strategy emphasizes that restoring and retaining readiness across the entire spectrum of conflict is critical to success in the emerging security environment. Nevertheless, DOD reported that readiness of the total military force remains low and has remained so since 2013. Our work has shown that the Navy has experienced increasing maintenance challenges as a high pace of operations has continued and maintenance has been deferred. Maintenance and personnel challenges also hinder readiness recovery of Navy aircraft. For the Marine Corps, our work has shown that ground force readiness has improved and remained stable in recent years, but acute readiness problems remain in aviation units. Over the past year, DOD has made department-wide progress in developing a plan to rebuild the readiness of the military force, with the military services providing regular input on the status of their readiness recovery efforts. In August 2018, we reported that the Office of the Secretary of Defense has developed a Readiness Recovery Framework that the department is using to guide the services’ efforts and plans to use to regularly assess, validate, and monitor readiness recovery. The Office of the Secretary of Defense and the services have recently revised readiness goals and accompanying recovery strategies, metrics, and milestones to align with the 2018 National Defense Strategy and Defense Planning Guidance. We have ongoing work assessing DOD’s progress in achieving its overall readiness goals. DOD’s readiness rebuilding efforts are occurring in a challenging context that requires the department to make difficult decisions regarding how best to address continuing operational demands while preparing for future challenges. Our work has shown that an important aspect of this, across all of the services, is determining an appropriate balance between maintaining and upgrading legacy weapon systems currently in operational use and procuring new ones to overcome rapidly advancing future threats.
The Navy Fleet Faces Challenges in Rebuilding Readiness and the Costs Associated with Expanding the Fleet to Enhance Readiness in the Future Are Unknown Based on updated information we received in November 2018, the Navy has taken steps to provide dedicated training time so that its surface forces can meet existing Navy training standards and have their training certified when they deploy. However, the Navy continues to struggle with rebuilding the readiness of the existing fleet due to enduring maintenance and manning challenges. As the Navy seeks to expand its fleet by 25 percent, these challenges will likely be further exacerbated, and the Navy will likely face additional affordability challenges. Navy Has Taken Steps to Address Training Shortfalls in the Surface Fleet After the collisions in 2017, the Navy focused on training surface ship crews to its existing standards. We testified in September 2017 that there were no dedicated training periods built into the operational schedules of the cruisers and destroyers based in Japan and that 37 percent of training certifications for these surface ship crews had lapsed as of June 2017. Since that time, the Navy has worked to ensure surface ships are certified before they are deployed. For example, the Navy has established controls to limit waivers that allowed training lapses to worsen, now requiring multiple high-level approvals for ships to operate uncertified. Based on our analysis of updated data, the percentage of expired training certifications for cruisers and destroyers based in Japan has improved markedly, from 41 percent in September 2017 to 9 percent as of November 2018, with less than 3 percent of certifications expired on ships in operational status. While the Navy has demonstrated its commitment to ensuring that crews are certified prior to deploying, training for amphibious operations and higher-level collective training may not be fully implemented for several years. In September 2017, we reported that some Marine Corps units were limited in their ability to complete training to conduct an amphibious operation—a military operation that is launched from the sea to introduce a landing force ashore—by several factors, including a decline in the number of amphibious ships from 62 in 1990 to 32 as of November 2018, limited access to range space, and a high pace of deployments. We recommended that the Navy and the Marine Corps develop an approach to mitigate their amphibious operations training shortfalls as the services await the arrival of additional amphibious ships into the fleet. Marine Corps officials told us that the Marine Corps and the Navy are working together to maximize amphibious training opportunities. Additionally, the Navy has plans to phase high-level collective training into the operational schedules of its ships homeported in Japan over the next several years. Previously, advanced and integrated training involving multiple ships was conducted ad hoc, if at all, for ships homeported in Japan. Such collective training is important because the 2018 National Defense Strategy states that the department’s principal priority is to prepare for threats from strategic competitors due to the magnitude of the threat they pose. However, in November 2018, officials from Fleet Forces Command told us that the training approach to prepare for advanced adversaries would not be fully implemented across the fleet for several years.
The Fleet Faces Persistent Maintenance and Personnel Challenges as the Navy Seeks to Rebuild Readiness We have reported that the Navy faces persistent challenges in completing maintenance on time and providing sufficient manning to its ships. Unless these challenges are addressed, the Navy will be hampered in its ability to rebuild readiness and prepare for the future. Maintenance Delays for Ships and Submarines Reduce Time for Training and Operations Our work has found that the Navy has been unable to complete ship and submarine maintenance on time, resulting in continuing schedule delays that reduce time for training and operations and create costly inefficiencies in a resource-constrained environment. The Navy’s readiness recovery is premised on rigorous adherence to deployment, training, and maintenance schedules. However, we reported in May 2016 on the difficulty that both the public and private shipyards were having in completing maintenance on time. We reported that, from 2011 through 2014, about 28 percent of scheduled maintenance for surface combatants was completed on time and 11 percent was completed on time for aircraft carriers. We updated these data as of November 2018 to include maintenance periods completed through the end of fiscal year 2018 and found that the Navy continues to struggle to complete maintenance on time. For fiscal years 2012-2018, our analysis for key portions of the Navy fleet shows that 30 percent of Navy maintenance was completed on time, leading to more than 27,000 days in which ships were delayed and unavailable for training and operations, as shown in figure 2 below. In addition to affecting training and operations, maintenance delays are costly. In November 2018, we examined attack submarine maintenance delays and reported that the Navy was incurring significant operating and support costs to crew, maintain, and support attack submarines that are delayed getting into and out of shipyard maintenance periods. We estimated that over the past 10 years the Navy has spent $1.5 billion in fiscal year 2018 constant dollars to support attack submarines that provide no operational capability—those sitting idle, no longer certified to conduct normal operations, while waiting to enter the shipyards, and those delayed in completing their maintenance at the shipyards (see figure 3). We recommended that the Navy analyze how it allocates its maintenance workload across public and private shipyards. DOD concurred with our recommendation, stating that it has taken the first steps to take a more holistic view of submarine maintenance requirements and impacts across both the public and private shipyards. In an update provided in November 2018, the Navy told us that it is developing a contracting strategy to conduct two additional depot maintenance periods at private shipyards in the future. Our prior work has shown that three primary factors at the naval shipyards contribute to maintenance delays: Poor conditions and aging equipment limit the ability of the shipyards to meet current and future demands. We reported in September 2017 that facility and equipment limitations at the shipyards contributed to maintenance delays for the aircraft carriers and submarines, hindering the shipyards’ ability to support the Navy. Specifically, we found that the shipyards would be unable to support an estimated one-third of maintenance periods planned over the next 23 years.
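On-time percentages and days-of-delay figures of the kind cited above are derived by comparing planned and actual completion dates for individual maintenance periods. The sketch below shows that form of calculation with hypothetical periods; it is not the dataset or method behind the figures in this statement.

```python
# Hypothetical maintenance periods; not the data behind the figures above.
from datetime import date

# (ship, planned completion, actual completion)
periods = [
    ("Ship A", date(2017, 3, 1),  date(2017, 3, 1)),    # on time
    ("Ship B", date(2017, 6, 15), date(2017, 9, 30)),   # 107 days late
    ("Ship C", date(2018, 1, 10), date(2018, 5, 1)),    # 111 days late
]

late_days = [(actual - planned).days for _, planned, actual in periods
             if actual > planned]
on_time_rate = (len(periods) - len(late_days)) / len(periods)

print(f"completed on time: {on_time_rate:.0%}")            # 33%
print(f"total days of maintenance delay: {sum(late_days)}")  # 218
```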
We recommended that the Navy take steps to improve its management of shipyard investments; the Navy concurred with this recommendation and we are encouraged by its response. For example, the Navy has developed a plan for the optimal placement of facilities and major equipment at each public shipyard, which the Navy estimates can ultimately increase its maintenance efficiency by reducing personnel and materiel travel by an average of 65 percent. This equates to recovering about 328,000 man-days per year—an amount roughly equal to that of an aircraft carrier maintenance period. However, the Navy’s preliminary estimate—that this effort will require $21 billion and 20 years to address—is well beyond historical funding levels and does not include some potentially significant costs (e.g., for utilities, roads, or environmental remediation). Shipyard workforce gaps and inexperience are limiting factors. The Navy has reported a variety of workforce challenges at the Navy’s four public shipyards, such as hiring personnel in a timely manner and providing personnel with the training necessary to gain proficiency in critical skills. The Navy has noted that some occupations require years of training before workers become proficient. According to Navy officials, a large portion of its workforce is inexperienced. For example, 45 percent of the Puget Sound and 30 percent of the Portsmouth Naval Shipyards’ skilled workforce have fewer than 5 years of experience. According to DOD officials, workforce shortages and inexperience contribute to maintenance delays. For example, at Pearl Harbor Naval Shipyard, two submarines were delayed approximately 20 months, in part because of shortages in ship fitters and welders, among other skilled personnel. Most of DOD’s depots, which include the naval shipyards, have taken actions to maintain critical skills through retention incentives, bonuses, and awards. We plan to issue a report examining DOD’s depot skill gaps, including those at the naval shipyards, later this month. Depot supply support may not be cost-effective. In June 2016, we reported that the naval shipyards and other depots had not implemented actions that would likely improve the cost-effectiveness of their supply operations. Specifically, the Navy had not transferred certain functions to the Defense Logistics Agency (DLA) at the shipyards in the same manner as the Navy and Air Force did for their aviation depots. The Navy and Air Force aviation depots that transferred these functions to DLA had reaped a number of efficiencies in their supply operations, including a 10-percent reduction in backorders over a 5-year period. We recommended that the Navy analyze whether such a transfer of functions is warranted at the shipyards, and the Navy concurred with the recommendation. However, as of October 2018, the Navy had not conducted a comprehensive analysis of transferring these functions and had provided no plans to do so. Navy Processes for Determining Manning of Ships Do Not Account for All Ship Workload In May 2017, we reported that the Navy’s process for determining manpower requirements—the number and skill mix of sailors needed on the Navy’s ships—did not fully account for all ship workload. The Navy was using outdated standards to calculate the size of ship crews, which may have led to overburdened crews working long hours.
We recommended steps to help ensure the Navy’s manpower requirements meet the needs of the existing and future surface fleet, and the Navy has been studying ship workload and revising its guidance. As of November 2018, the Navy was continuing to analyze the manpower requirements of its ship classes to better size and compose ship crews, and the Navy was also working to improve shipboard manning. However, these efforts are not yet complete and it is too early to assess their effectiveness. Until manpower requirements are reassessed across the fleet, the Navy risks that ship crews will continue to be undersized and sailors will be overworked, with potential negative effects on readiness and safety. Additionally, the Navy provided information in November 2018 that showed that it is taking steps to ensure that ships have a minimum percentage of their crews assigned and that those sailors have the appropriate skills. The Navy has prioritized manning its surface ships homeported overseas. The Navy established a minimum threshold of filling at least 95 percent of authorized billets in its ship crews with sailors (referred to as fill), with a minimum goal of 92 percent of those sailors having the right qualifications for the billet (known as fit). According to Navy officials, the Navy is for the most part meeting its fill goals Navy-wide, but has not consistently met its fit goals. However, during group discussions in November 2018 with ship crews and interviews with Navy officials in Japan, we learned that the Navy’s methods for tracking fit and fill do not account for sailor experience and may not accurately capture the actual presence of sailors onboard and available for duty on its ships. Moreover, sailors consistently told us that ship workload has not decreased, and it is still extremely challenging to complete all required workload while getting enough sleep. Navy officials told us that manning challenges will continue through at least fiscal year 2021 as the Navy increases its end strength and trains its new sailors to gain the proper mix of skills to operate and maintain the fleet. Navy Plans to Expand Its Fleet but Full Costs Are Unknown and Manning an Expanded Fleet Likely Will Be Challenging To meet continued operational demands, the Navy is planning for the most significant fleet size increase in over 30 years. According to the Navy’s fiscal year 2019 shipbuilding plan, the Navy plans to build and maintain a fleet of 355 battle force ships—an increase of about 25 percent above the Navy’s current force of 287 ships. To reach its goal, the Navy plans to buy 301 ships through 2048 and extend the service life of its 66 Arleigh Burke class destroyers and up to 7 attack submarines. Together, the fiscal year 2019 shipbuilding plan and the service life extensions would allow the Navy to reach a 355-ship fleet by the 2030s.
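The fill and fit thresholds described above reduce to simple ratios. The sketch below applies them to a hypothetical crew, treating fit as the share of assigned sailors with the right qualifications for their billets; this is a simplification of the Navy's more detailed billet-matching methodology.

```python
# Hypothetical crew data; a simplified reading of "fill" and "fit" as
# described in the text, not the Navy's actual tracking methodology.

authorized_billets = 300
assigned_sailors   = 288   # sailors filling those billets
qualified_sailors  = 266   # assigned sailors with the right qualifications

fill = assigned_sailors / authorized_billets
fit  = qualified_sailors / assigned_sailors

print(f"fill: {fill:.0%} (goal: at least 95%)")  # 96%
print(f"fit:  {fit:.0%} (goal: at least 92%)")   # 92%
```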
Congressional Budget Office reporting and our past work have shown that the Navy has consistently and significantly underestimated the cost and timeframes for delivering new ships to the fleet. For example, the Navy estimates that buying the new ships specified in the fiscal year 2019 plan would cost $631 billion over 30 years, while the Congressional Budget Office has estimated that those new ships would cost $801 billion—a difference of 27 percent. We also reported in June 2018 that acquisition outcomes for ship classes built during the last 10 years have often not achieved cost, schedule, quality, or performance goals that were established. Furthermore, we have reported that all 8 of the lead ships delivered over the past decade that we reviewed were provided to the fleet behind schedule (more than half of those ships by more than 2 years) and that six ships of different classes valued at $6.3 billion were delivered to the Navy with varying degrees of incomplete work and quality problems. As a result of past cost and schedule problems, our work has shown that the Navy has a less-capable and smaller fleet today than it planned over 10 years ago. The Navy has also received $24 billion more in funding than it originally planned in its 2007 long-range shipbuilding plan but has 50 fewer ships in its inventory today, as compared with the goals it first established. Therefore, we have reported that, as the Navy moves forward in implementing its shipbuilding plan, it will be paramount for the Navy to apply lessons learned from the past. In addition to the cost of buying the ships and submarines to expand fleet size, the Navy will likely face affordability challenges with regard to manning an expanded fleet with the right number of sailors and the right mix of skills. In May 2017, we reported that the personnel costs for surface ship classes in fiscal years 2000-2015 were the largest share of total operating and support costs and that careful planning will be needed as new ships are brought into the fleet. We also reported that crew sizes on recently inducted ship classes grew from original projections as the Navy gained experience operating them. For example, the total crew size of Littoral Combat Ships has grown from 75 in 2003 to 98 personnel in 2016, a 31-percent increase. Navy officials told us that they plan to better articulate the personnel and resources needed for a larger fleet after fully accounting for workload and right-sizing ship crews. The Navy’s end strength has since increased by over 11,000 personnel from fiscal year 2017 levels, which should help alleviate manning challenges as the fleet grows. In November 2018, officials from Fleet Forces Command provided us with projections showing manning shortfalls continuing through at least fiscal year 2021, as well as steps it was planning to take to mitigate them. Navy and Marine Corps Aging Aircraft and F-35s Face Maintenance and Supply Challenges That Affect Readiness Rebuilding Now and in the Future Our work has shown that Navy and Marine Corps aircraft availability has been limited by aging aircraft, delayed maintenance, and insufficient supply support. Pilot and maintenance personnel shortfalls further limit readiness recovery across legacy air platforms. The growing F-35 program, which is meant to replace many aging aircraft, has presented additional operational and sustainment challenges, which will likely persist into the future if not corrected. DOD, the Navy, and the Marine Corps have emphasized mission capability of critical aviation platforms—including the Navy and Marine Corps F/A-18s and F-35s—and are taking steps to improve availability, but these efforts will take time to realize results. Aircraft Availability Has Been Limited by Aging Fleets with Maintenance and Supply Challenges Navy and Marine Corps aircraft availability has been limited by challenges associated with aging aircraft fleets, depot maintenance, and supply support that limit the services’ ability to keep aviation units ready.
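The percentages cited above for planned fleet growth and for the gap between the Navy's and the Congressional Budget Office's cost estimates follow directly from the underlying figures; a quick arithmetic check:

```python
# Quick arithmetic check of the percentages cited in the text.
current_fleet, planned_fleet = 287, 355
navy_estimate, cbo_estimate = 631e9, 801e9   # 30-year new-ship cost estimates

fleet_growth = (planned_fleet - current_fleet) / current_fleet
cost_gap     = (cbo_estimate - navy_estimate) / navy_estimate

print(f"planned fleet growth: {fleet_growth:.0%}")           # ~24%, i.e., roughly the 25 percent cited
print(f"CBO estimate above Navy estimate: {cost_gap:.0%}")   # 27%
```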
The Navy and Marine Corps spend billions of dollars each year on sustainment, such as for spare parts and depot maintenance, to meet aircraft availability goals. However, aircraft availability rates have generally declined since fiscal year 2011. While specific aircraft availability data are considered sensitive by the Navy and the Marine Corps, and cannot be discussed in detail, we found in September 2018 that the Navy and the Marine Corps generally did not meet aircraft availability goals in fiscal years 2011-2016 for the seven aircraft we reviewed. In updating data in November 2018, we found that none of the aircraft met aircraft availability goals for fiscal years 2017 and 2018. According to the Navy, the pace of operations has increased wear and tear on its aircraft and decreased the time available for maintenance and modernization—a necessity for an aging fleet. For example, the average age of a legacy F/A-18A-D Hornet is 26 years, of an AV-8B Harrier is 21 years, and of the C-2A Greyhound is 29 years. Both services expect these aircraft will continue to be used for the foreseeable future and in some cases into the 2030s. The Navy and the Marine Corps face delays in the arrival of the F-35 to replace their legacy F/A-18A-D Hornets and AV-8B Harriers. To compensate for the delay, the Navy and the Marine Corps are planning to procure additional aircraft, such as the F/A-18E-F Super Hornet, and extend the service life and upgrade the capabilities of their legacy aircraft. However, these efforts and the sustainment of the Navy and Marine Corps legacy aircraft fleet face key challenges as shown in figure 4. Specifically, our prior work has shown that the Navy and the Marine Corps are confronted with two sets of challenges in sustaining their aircraft: Depot maintenance complexities for aging aircraft and spare parts availability. Depot maintenance on aging weapon systems, including Navy and Marine Corps aircraft, becomes less predictable as structural fatigue occurs and parts that were not expected to be replaced begin to wear out. While the Navy and the Marine Corps reported that sustainment funding accounts, such as those for depot maintenance and spare parts, have been funded at increased levels in fiscal years 2017 and 2018, efforts to improve spare parts availability take time to produce results due to long lead times for acquiring some items. In addition, Navy and Marine Corps aircraft face challenges associated with diminishing manufacturing sources and parts obsolescence. DOD has a program intended to manage these risks, but we reported in September 2017 that its implementation varied across DOD weapon system program offices. We made recommendations to improve the program’s management; DOD concurred and has initiated improvement efforts. Maintenance personnel inexperience and retention. The Navy has had difficulty attracting and retaining skilled maintainers, such as sheet metal workers and machinists at its aviation depots (i.e., Fleet Readiness Centers), which directly affects its ability to complete planned maintenance. Some of the depots experienced challenges attracting and retaining skilled personnel due to competition with nearby contractors that are able to offer higher pay, according to Navy depot officials. Similar to the shipyards, the aviation depots also lack experienced personnel, affecting the efficiency and quality of maintenance. For example, 41 percent of the skilled workers at Fleet Readiness Center Southwest have 2 years or fewer of experience. 
Workforce inexperience and attrition of skilled personnel were among the reasons a recent Navy report cited for machining defects detected in the landing gear of F/A-18, E-2, and C-2A aircraft. All of the depots have undertaken retention efforts such as incentives, bonuses, and awards to address these issues. Until the Navy and the Marine Corps address these maintenance and supply challenges, it will be difficult to meet the mission capability goals established by the Secretary of Defense. Specifically, in September 2018, the Secretary of Defense issued a memorandum emphasizing that a key component of implementing the 2018 National Defense Strategy is ensuring critical aviation platforms meet their mission capability targets by the end of fiscal year 2019. The memorandum established a goal of achieving a minimum of 80-percent mission capable rates for various aircraft, including for the Navy’s and Marine Corps’ F/A-18 inventories, by the end of fiscal year 2019 while also reducing operating and maintenance costs. To accomplish this, the Navy and the Marine Corps developed the Return to Readiness strategy in November 2018, which includes a broad array of actions, such as improving the availability of spare parts and evaluating the application of best commercial practices to naval aviation sustainment. Office of the Secretary of Defense and Navy program officials told us, and based on our prior work we agree, that this goal will be challenging to achieve by the end of fiscal year 2019. Pilot Shortages Have Worsened in Recent Years and Are Projected to Remain through 2023 We reported in April 2018 that fighter pilot shortages in the Navy and the Marine Corps have been worsening in recent years and shortfalls are projected to remain through at least fiscal year 2023. Our analysis of Navy and Marine Corps data showed that the Navy’s shortage of first operational tour fighter pilots more than doubled from 12 percent in fiscal year 2013 to 26 percent in fiscal year 2017. Similarly, the Marine Corps’ overall shortage of fighter pilots quadrupled from 6 percent in fiscal year 2006 to 24 percent in fiscal year 2017. Also, as we reported in April 2018, service officials attributed the pilot shortages to reduced training opportunities and increased attrition due to career dissatisfaction, among other factors. Officials from both services stated at the time that they have ensured that deploying squadrons have been fully staffed with fighter pilots by using various approaches, including using senior pilots to staff junior positions and having pilots deploy more frequently and for longer periods. However, we reported that squadron leaders and fighter pilots said that these approaches had a negative impact on fighter pilot training and retention and ultimately may be exacerbating the situation. We also found that the services have not recently reevaluated squadron requirements to reflect an increased fighter pilot workload, further compounding their pilot shortages. As a result, the reported shortage actually could be greater. The services were taking actions, including increasing retention incentives for fighter pilots. To help determine the magnitude of the shortages and help target strategies to better meet their personnel needs, we recommended, and the Navy and the Marine Corps agreed, that the services reevaluate fighter pilot squadron requirements.
New F-35 Aircraft Facing Sustainment and Operational Challenges Sustainment challenges are not just an issue for older aircraft, but represent an enduring challenge for the F-35 Lightning II aircraft—a key component of the future of tactical aviation for the Navy and Marine Corps. The Navy and Marine Corps are both flying F-35s now as the program ramps up development, and they plan to procure nearly 700 aircraft over the coming decades. The sustainment costs of the F-35 fleet are projected to exceed $1 trillion over its 60-year life cycle. In October 2017, we reported that F-35B aircraft (including Marine Corps aircraft) were available (i.e., the aircraft were safe to fly, available for use, and able to perform at least one tasked mission) about 52 percent of the time from March 2017 through June 2017, which fell short of the 65-percent goal established by the Marine Corps for non-deployed units. We further reported that F-35B aircraft were fully mission capable (i.e., the aircraft were capable of accomplishing all tasked missions) about 15 percent of the time over that same period, which fell short of the 60-percent goal established by the Marine Corps for non-deployed units. We also reported on numerous sustainment challenges leading to less than desirable outcomes for F-35 warfighter readiness. For example, F-35 aircraft were unable to fly 22 percent of the time because of parts shortages from January 2017 through August 7, 2017. Additionally, DOD’s capabilities to repair F-35 parts at military depots were 6 years behind schedule, which resulted in average part repair times that were twice the program’s objective. As DOD gains experience with the F-35, our work has shown that the department has encountered additional challenges. In 2017, the Marine Corps became the first military service to station F-35 aircraft overseas, transferring aircraft to Iwakuni, Japan. In the Pacific, DOD expects to disperse its F-35s into smaller detachments to outmaneuver the enemy and counter regional threats. However, in April 2018, we reported that this approach posed logistics and supply challenges. In June 2018, we reported that the F-35 program had not improved its reliability and maintainability over the past year and continued to fall short on half of its performance targets. Furthermore, we found that the program may not meet its required targets before each variant of the F-35 is expected to demonstrate maturity—the point at which the aircraft has flown enough hours to predictably determine reliability and maintainability over its lifespan. This means that the Navy and the Marine Corps may have to decide whether they are willing to accept less reliable and maintainable aircraft than originally planned. Among other outcomes, this could result in higher maintenance costs and lower aircraft availability than anticipated, which also could pose readiness challenges in the future. As we reported in October 2017, the poor reliability of certain parts is already contributing to shortages of F-35 spare parts. Challenges posed by the F-35 program are largely the result of sustainment plans that do not fully include or consider key requirements. Our work has shown that planning for sustainment and aligning its funding are critical if DOD wants to meet its aircraft availability goals and effectively deploy to support operations.
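Availability and mission-capable figures of the kind cited above are typically expressed as a share of reporting time. The sketch below shows that form of calculation with hypothetical hours; it is not the F-35 program's actual reporting methodology.

```python
# Hypothetical status hours; illustrative only, not the F-35 program's
# actual availability or mission-capable reporting methodology.

reporting_hours             = 1000.0   # total reporting time for the fleet
available_hours             =  520.0   # safe to fly, able to do at least one mission
fully_mission_capable_hours =  150.0   # able to do all tasked missions

print(f"available:             {available_hours / reporting_hours:.0%} (goal 65%)")              # 52%
print(f"fully mission capable: {fully_mission_capable_hours / reporting_hours:.0%} (goal 60%)")  # 15%
```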
To address the challenges associated with F-35 sustainment and operational deployment, we recommended that DOD revise its sustainment plans, align associated funding, and mitigate the risks associated with key supply chain-related challenges for deployed F-35s in the Pacific, among others. DOD concurred with these recommendations and stated that it is taking steps to address them. Furthermore, as previously discussed, the Secretary of Defense has established an 80-percent mission capability goal for critical aviation assets, including the F-35. Due to current low availability and numerous sustainment issues, the F-35 fleet will be challenged in meeting the goal. In sum, the Navy’s and Marine Corps’ significant readiness challenges have developed over more than a decade of conflict, budget uncertainty, and reductions in force structure. Both services have made encouraging progress identifying the causes of their readiness decline and have begun efforts to arrest and reverse it; however, our prior work shows that fully addressing the persistent readiness challenges will require years of sustained management attention. Our work cited today contains 25 specific recommendations to the Navy and the Marine Corps and an additional 20 recommendations to various other DOD components to assist these services in rebuilding the readiness of their forces and in modernizing for the future. Attention to these recommendations can assist the Navy and the Marine Corps as they seek to rebuild the readiness of their forces. Chairmen Wicker and Sullivan, Ranking Members Hirono and Kaine, and Members of the Subcommittees, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time. GAO Contact and Staff Acknowledgments If you or your staff have questions about this testimony, please contact John H. Pendleton, Director, Defense Capabilities and Management at (202) 512-3489 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Suzanne Wren, Assistant Director; Clarine Allen; Steven Banovac; John Bumgarner; Chris Cronin; Benjamin Emmel; Cynthia Grant; Mae Jones; Amie Lesser; Tobin McMurdie; Shahrzad Nikoo; Carol Petersen; Cody Raysinger; Michael Silver; John E. “Jet” Trubey; and Chris Watson. Appendix I: Implementation Status of Prior GAO Recommendations Related to Navy and Marine Corps Readiness Over the past 4 years, we have issued a number of reports related to Navy and Marine Corps readiness and we used them to develop this statement. Table 1 summarizes the recommendations in these reports. The Department of Defense (DOD) concurred with most of the 45 recommendations and has many actions underway. However, DOD has not fully implemented any of the recommendations to date. For each of the reports, the specific recommendations and any progress made in implementing them are summarized in tables 2 through 16. Related GAO Products Report numbers with a C or RC suffix are classified. Report numbers with a SU suffix are sensitive but unclassified. Classified and sensitive but unclassified reports are available to personnel with the proper clearances and need to know, upon request. Navy Readiness: Actions Needed to Address Costly Maintenance Delays Facing the Attack Submarine Fleet. GAO-19-229. Washington, D.C.: November 19, 2018. Air Force Readiness: Actions Needed to Rebuild Readiness and Prepare for the Future. 
GAO-19-120T. Washington, D.C.: October 10, 2018. Weapon System Sustainment: Selected Air Force and Navy Aircraft Generally Have Not Met Availability Goals, and DOD and Navy Guidance Need to Be Clarified. GAO-18-678. Washington, D.C.: September 10, 2018. Weapon System Sustainment: Selected Air Force and Navy Aircraft Generally Have Not Met Availability Goals, and DOD and Navy Guidance Need Clarification. GAO-18-146SU. Washington, D.C.: April 25, 2018. Military Readiness: Update on DOD’s Progress in Developing a Readiness Rebuilding Plan. GAO-18-441RC. Washington, D.C.: August 10, 2018. (SECRET) Military Personnel: Collecting Additional Data Could Enhance Pilot Retention Efforts. GAO-18-439. Washington, D.C.: June 21, 2018. F-35 Joint Strike Fighter: Development Is Nearly Complete, but Deficiencies Found in Testing Need to Be Resolved. GAO-18-321. Washington, D.C.: June 5, 2018. Warfighter Support: DOD Needs to Share F-35 Operational Lessons Across the Military Services. GAO-18-464R. Washington, D.C.: April 25, 2018. Military Readiness: Clear Policy and Reliable Data Would Help DOD Better Manage Service Members’ Time Away from Home. GAO-18-253. Washington, D.C.: April 25, 2018. Military Personnel: DOD Needs to Reevaluate Fighter Pilot Workforce Requirements. GAO-18-113. Washington, D.C.: April 11, 2018. Military Aircraft: F-35 Brings Increased Capabilities, but the Marine Corps Needs to Assess Challenges Associated with Operating in the Pacific. GAO-18-79C. Washington, D.C.: March 28, 2018. (SECRET) Navy and Marine Corps Training: Further Planning Needed for Amphibious Operations Training. GAO-18-212T. Washington, DC.: December 1, 2017. F-35 Aircraft Sustainment: DOD Needs to Address Challenges Affecting Readiness and Cost Transparency. GAO-18-75. Washington, D.C.: October 26, 2017. Defense Supply Chain: DOD Needs Complete Information on Single Sources of Supply to Proactively Manage the Risks. GAO-17-768. Washington, D.C.: September 28, 2017. Navy and Marine Corps Training: Further Planning Needed for Amphibious Operations Training. GAO-17-789. Washington, D.C.: September 26, 2017. Navy Readiness: Actions Needed to Address Persistent Maintenance, Training, and Other Challenges Facing the Fleet. GAO-17-809T. Washington, D.C.: September 19, 2017. Naval Shipyards: Actions Needed to Improve Poor Conditions That Affect Operation. GAO-17-548. Washington, D.C.: September 12, 2017. Navy Readiness: Actions Needed to Address Persistent Maintenance, Training, and Other Challenges Facing the Fleet. GAO-17-798T. Washington, D.C.: September 7, 2017. Navy Readiness: Actions Needed to Maintain Viable Surge Sealift and Combat Logistics Fleets GAO-17-503. Washington, D.C.: August 22, 2017 (reissued on Oct 31, 2017). Department of Defense: Actions Needed to Address Five Key Mission Challenges. GAO-17-369. Washington, D.C.: June 13, 2017. Military Readiness: Coastal Riverine Force Challenges. GAO-17-462C. Washington, D.C.: June 13, 2017. (SECRET) Navy Shipbuilding: Policy Changes Needed to Improve the Post-Delivery Process and Ship Quality. GAO-17-418. Washington, D.C.: July 13, 2017 Offshore Petroleum Discharge System: The Navy Has Not Mitigated Risk Associated with System Limitations. GAO-17-531C. Washington, D.C.: June 22, 2017. (SECRET) Navy Force Structure: Actions Needed to Ensure Proper Size and Composition of Ship Crews. GAO-17-413. Washington, D.C.: May 18, 2017. Military Readiness: DOD’s Readiness Rebuilding Efforts May Be at Risk without a Comprehensive Plan. GAO-16-841. 
Washington, D.C.: September 7, 2016. Military Readiness: DOD’s Readiness Rebuilding Efforts May Be at Risk without a Comprehensive Plan. GAO-16-534C. Washington, D.C.: June 30, 2016. (SECRET) Defense Inventory: Further Analysis and Enhanced Metrics Could Improve Service Supply and Depot Operations. GAO-16-450. Washington, D.C.: June 9, 2016. Navy and Marine Corps: Services Face Challenges to Rebuilding Readiness. GAO-16-481RC. Washington, D.C.: May 25, 2016. (SECRET//NOFORN) Military Readiness: Progress and Challenges in Implementing the Navy’s Optimized Fleet Response Plan. GAO-16-466R. Washington, D.C.: May 2, 2016. F-35 Sustainment: DOD Needs a Plan to Address Risks Related to Its Central Logistics System. GAO-16-439. Washington, D.C.: April 14, 2016. Navy Force Structure: Sustainable Plan and Comprehensive Assessment Needed to Mitigate Long-Term Risks to Ships Assigned to Overseas Homeports. GAO-15-329. Washington, D.C.: May 29, 2015. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study The 2018 National Defense Strategy emphasizes that restoring and retaining readiness is critical to success in the emerging security environment. The Navy and Marine Corps are working to rebuild the readiness of their forces while growing and modernizing their aging fleet of ships and aircraft. However, achieving readiness recovery goals will take years as both services continue to be challenged to rebuild readiness amid continued operational demands. This statement provides information on current and future readiness challenges facing (1) the Navy ship and submarine fleet and (2) Navy and Marine Corps aviation. GAO also discusses prior recommendations on Navy and Marine Corps readiness and progress to address them. This statement is based on previously published work since 2015 related to Navy and Marine Corps readiness challenges, including shipyard workforce and capital investment, ship crewing, weapon system sustainment, the fighter pilot workforce, and modernizing force structure. GAO conducted site visits to the Pacific fleet in November 2018 and analyzed updated data, as appropriate. What GAO Found The Navy has taken steps to address training shortfalls in the surface fleet, but faces persistent maintenance and personnel challenges as it seeks to rebuild ship and submarine readiness. While the Navy has corrective actions underway, they will take years to implement. Following ship collisions in 2017, the Navy has taken steps to ensure its crews are trained to standards prior to deployment and made significant progress in those efforts. However, the Navy has struggled to complete ship maintenance—with only 30 percent of maintenance completed on time since fiscal year 2012—leading to thousands of days that ships were unavailable for training and operations (see figure). Additionally, manning shortfalls and experience gaps continue to contribute to high sailor workload and are likely to continue through at least fiscal year 2021. The Navy has developed a plan to improve shipyards and is re-examining its ship manning, among other actions; however, these positive steps have not yet fully addressed GAO's recommendations. Looking to the future, the Navy has indicated that it wants to grow its fleet to meet demands. However, the costs of such growth are not yet known and would likely require resourcing well above currently planned levels. Navy and Marine Corps aircraft availability has been limited due to numerous challenges (see figure). Specifically, the seven aircraft GAO reviewed have generally experienced decreasing availability since fiscal year 2011 and did not meet availability goals in fiscal years 2017 and 2018. The F-35—the future of naval aviation—also has not met availability goals due to part shortages and poor sustainment planning. In September 2018, the Department of Defense established aggressive targets for aircraft availability. While the Navy and Marine Corps are taking actions to improve aircraft availability, including addressing GAO's recommendations, aviation readiness will take many years to recover. What GAO Recommends GAO has made a total of 45 recommendations in the prior work described in this statement. The Department of Defense concurred with most of them, and has many actions underway, but has not yet fully implemented any. Attention to these recommendations can assist the Navy and the Marine Corps as they seek to rebuild the readiness of their forces.
Background CBP Staffing and Infrastructure In fiscal year 2017, approximately 24,000 CBP officers performed a variety of functions at over 300 air, land, and sea POEs, including inspecting travelers and cargo containers, among other activities. According to CBP, increases in passenger and cargo volumes are outpacing CBP’s staffing resources, resulting in increased passenger wait times and cargo backups, among other things. For example, in fiscal year 2017, CBP identified a need for an additional 2,516 CBP officers across all POEs. Further, as of 2017, CBP estimated that it needed approximately $5 billion to meet infrastructure and technology requirements at about 167 land POEs. To help identify and mitigate resource challenges, CBP developed its Resource Optimization Strategy, an integrated, long-term plan to improve operations at all POEs. The Strategy consists of three components: Business transformation: utilize new technology, such as Automated Passport Control kiosks, or new processes, such as trusted traveler programs, to increase CBP operational efficiencies; Workload Staffing Model: utilize modeling techniques to help ensure that existing staffing resources are appropriately aligned with threat environments while maximizing cost efficiencies; and Alternative funding strategies: utilize public-private partnership agreements, such as RSP and DAP, to supplement regular appropriated resources. Overview and Evolution of the RSP The RSP enables partnerships between CBP and private sector or government entities, allowing CBP to provide new or additional services upon the request of partners. These services can include customs, immigration, or agricultural processing; border security and support at any facility where CBP provides, or will provide, services; and may cover costs such as salaries, benefits, overtime expenses, administration, and transportation costs. According to authorizing legislation, RSP agreements are subject to certain limitations, including that they may not unduly and permanently impact existing services funded by an appropriations act or fee collection. According to AFP officials, the purpose of the RSP is to provide new or additional CBP services at POEs that the component would otherwise not have been able to provide. From 2013 to 2017, the number of RSP agreements has increased as new authorizing legislation has expanded participant eligibility and made the program permanent. Table 1 below outlines the evolution of RSP through its different legislative authorities. Overview and Evolution of the DAP The DAP permits CBP and GSA to accept donations from private and public sector entities, such as private or municipally-owned seaports or land border crossings. Donations may include real property, personal property, money, and non-personal services, such as design and construction services. Donated resources may include improvements to existing facilities, new facilities, equipment and technology, and operations and maintenance costs, among other things. In terms of the types of locations that may accept donations, donations may be used for activities related to land acquisition, design, construction, repair, alteration, operations, and maintenance, including installation or deployment of furniture, fixtures, equipment or technology, at an existing CBP-owned land POE; a new or existing space at a CBP air or sea POE; or a new or existing GSA-owned land POE. 
CBP and GSA may not accept donations at a leased land POE, nor is CBP able to accept a donation at or for a new land POE if the combined fair market value of the POE and donation exceeds $50 million. Additionally, CBP may not use monetary donations accepted under the DAP to pay salaries of CBP employees performing inspection services. Finally, CBP may not accept donations on foreign soil. Table 2 below depicts the evolution of DAP authorizing legislation since the program's inception in 2014. Figures 1 and 2 depict the location and number of RSP and DAP agreements in place through fiscal year 2017.

CBP Uses Criteria and Documented Procedures to Evaluate and Approve Public-Private Partnership Applications and Administer Programs

CBP Uses Criteria and Procedures to Approve Public-Private Partnership Applications and Coordinate with Partners

RSP Application Process

CBP has developed detailed guidance on the RSP application process, including application timeframes, requirements, and evaluation criteria, and this guidance is on CBP's website. According to this guidance, in 2017, CBP expanded the RSP application submission period. Whereas in prior years applications were accepted during a single one-month window, prospective partners may now submit applications throughout the year. Under this new process, CBP evaluates submissions three times per year—beginning in March, July, and November. According to CBP, the submission period was expanded in part because new legislative authorities removed previous restrictions on the number of RSP agreements CBP can enter into each year. The overarching RSP application process—from application submission through CBP evaluation and applicant notification—is depicted in figure 3.

According to CBP's procedures for accepting and reviewing applications, potential partners first submit a letter of application that includes a variety of logistical information concerning the stakeholders, services to be requested, location of services to be requested, available facilities, and funding. For example, in submitting a letter of application, an applicant is to estimate how many hours of services it may request per month and identify the applicant's available budget for the first fiscal year of the partnership, among other things. According to the application guidance, prospective applicants are encouraged to work with local CBP officials at individual POEs to develop letters of application. After submission, CBP officials at the affected POEs, including affected CBP Field Offices, review applications and communicate their findings and recommendations to the AFP office. In addition, the CBP Office of Chief Counsel reviews the applications for legal sufficiency and may suggest that CBP request additional information from applicants.

Next, CBP convenes an expert panel consisting of two senior CBP officials who are not part of the AFP office to consider POE and legal comments on the applications, among other information provided by AFP officials. The panel deliberates and scores each proposal based on seven criteria, and all proposals that achieve a certain minimum score are accepted. The seven evaluation criteria used to weigh the merits of potential new partnership agreements are listed in table 3. The scoring scale ranges from -5 to 5, and the 7 criteria are weighted based on potential impact. For example, impact to CBP operations is weighted more heavily than other agency support. A simplified illustration of this type of weighted scoring appears below.
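To make the mechanics of this weighted scoring concrete, the sketch below shows one way a panel's criterion scores could be combined into a single weighted total and compared against a minimum acceptance score. It is an illustrative sketch only: the report describes the -5 to 5 scale, the seven weighted criteria, and a minimum-score cutoff, but the criterion names, weights, and threshold used here are hypothetical values chosen for the example, not CBP's actual figures.

# Illustrative sketch only: the criterion names, weights, and acceptance
# threshold below are hypothetical example values, not CBP's actual ones.

CRITERIA_WEIGHTS = {
    "impact_on_cbp_operations": 3.0,   # weighted more heavily, per the report
    "funding_reliability": 2.0,
    "facility_readiness": 2.0,
    "stakeholder_support": 1.5,
    "port_workload_fit": 1.5,
    "legal_sufficiency_comments": 1.0,
    "other_agency_support": 1.0,       # weighted less heavily, per the report
}

ACCEPTANCE_THRESHOLD = 10.0  # hypothetical minimum weighted score


def score_application(panel_scores):
    """Combine panel scores (each -5 to 5) into one weighted total."""
    total = 0.0
    for criterion, weight in CRITERIA_WEIGHTS.items():
        score = panel_scores[criterion]
        if not -5 <= score <= 5:
            raise ValueError(f"{criterion}: score {score} is outside the -5 to 5 scale")
        total += weight * score
    return total


def is_accepted(panel_scores):
    """A proposal is accepted if its weighted score meets the minimum."""
    return score_application(panel_scores) >= ACCEPTANCE_THRESHOLD


example = {
    "impact_on_cbp_operations": 4,
    "funding_reliability": 3,
    "facility_readiness": 2,
    "stakeholder_support": 1,
    "port_workload_fit": 2,
    "legal_sufficiency_comments": 0,
    "other_agency_support": -1,
}
print(score_application(example), is_accepted(example))  # 25.5 True

The design point the weights capture is simply that a strong score on impact to CBP operations moves the total more than an equally strong score on other agency support.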
In September 2017, we observed an RSP application review panel. Among other things, we observed senior CBP officials, who were independent from the AFP office, score 31 RSP applications that impacted 46 CBP Field Office locations. The panel members based their deliberations on set criteria and reached consensus on which applications to approve. Finally, Congress and approved partners are notified of the selections. Where CBP denies a proposal for an agreement, it is to provide the reason for denial unless such reason is law enforcement sensitive or withholding the reason for denial is in the national security interests of the United States.

Once CBP approves an application, CBP and its prospective new partners follow documented procedures to formalize the agreements and prepare all involved stakeholders, including new partners and local CBP officials, for Reimbursable Services Agreement implementation. The process to establish new RSP partnerships at specific POEs is depicted in figure 4 below. After CBP notifies the applicant of its selection, officials from the AFP office schedule a site visit to meet with local CBP officials at the POEs and the new partners. According to CBP program requirements, the purpose of the site visit is to discuss workload and services, and to verify that the POE facilities and equipment meet CBP's required specifications. AFP officials also provide program training to CBP Field Office and POE officials, as well as to new partners on the processes to request and fulfill RSP service requests, among other things. We attended an AFP office visit to CBP's Baltimore Field Office in October 2017 and observed AFP officials sharing best practices with local CBP officials and new RSP partners.

According to CBP's procedures, before any RSP services can be provided, CBP and the prospective partners must sign a legally binding Reimbursable Services Agreement. Among other things, the Reimbursable Services Agreement establishes that the partner will reimburse CBP for the costs of services provided under the RSP authorizing legislation, including the officer overtime rates, benefits, and a 15 percent administrative fee. Further, the partner agrees to reimburse CBP for these services within 15 days of billing through a Department of the Treasury system. Finally, local CBP Field Office and partner officials negotiate a local MOU that outlines the services, schedules, and other conditions for the POE location(s) covered by the Reimbursable Services Agreement.

DAP Application Process

Similar to the RSP application process, CBP, in conjunction with GSA, utilizes criteria and documented processes to evaluate DAP proposals and implement the program. More specifically, in alignment with the most recent DAP authorizing legislation, CBP and GSA developed the Section 482 Donation Acceptance Authority Proposal Evaluation Procedures & Criteria Framework (Framework) for receiving, evaluating, approving, planning, developing, and formally accepting donations under the program. The initial steps of the Framework, which encompass the DAP application process, are depicted in figure 5. In prior years, CBP accepted large-scale proposals, defined by CBP as $5 million or more, during one application and evaluation cycle per year. Beginning in fiscal year 2017, CBP accepts large-scale proposals on a rolling basis, using a streamlined process for expedited review. CBP also accepts small-scale proposals, defined by CBP as less than $5 million, on a rolling basis.
According to AFP officials, CBP undertakes considerable effort to provide early education about the program to potential partners who plan to apply for a DAP agreement, including discussing CBP’s operational needs at the POEs. The Framework notes that this outreach helps prospective donors gauge their willingness and ability to work cooperatively with CBP and GSA on potential POE improvements and also helps applicants enhance the viability of their submissions. After a DAP proposal is submitted and checked for completeness, CBP and GSA subject matter experts evaluate the proposal against seven operational and six technical criteria (see table 4 below). The evaluators reach consensus on proposed recommendations and submit their evaluation results to CBP and GSA senior leadership for consideration. Leadership reviews the recommendations and other pertinent information and determines whether or not to select proposals. In accordance with legislative requirements, CBP must notify DAP applicants of the determination to approve or deny a proposal not later than 180 days after receiving the completed proposal. Figure 6 depicts all three phases of the DAP Framework from selecting a proposal to signing a formal Donations Acceptance Agreement. Phase 2 of the Framework begins shortly after CBP notifies new partners of DAP selections. CBP officials then initiate a series of biweekly calls with GSA officials, if applicable, and the partner. AFP officials provide partners with documentation in the form of a high-level roadmap which contains a sequence of activities and deliverables CBP expects from the partners, and all stakeholders convene to track progress against planned activities and milestones. CBP, GSA, and the partner also meet to discuss the technical implementation of the donation. AFP and GSA officials conduct a site visit to meet with new partners; obtain a visual understanding of how CBP, GSA, and the partner will implement the donation; and help the partner begin the planning and development phase. CBP, GSA, and the partner negotiate a MOU on roles and responsibilities and terms and conditions of the donation. CBP then provides the partner with its technical standards and other operational requirements, such as space and staffing needs, under a non- disclosure agreement. The partner then begins to plan and develop its conceptual proposal into an executable project in close coordination with CBP and GSA. By the end of Phase 2, CBP, GSA, as applicable, and the partner confirm that all pre-construction development activities are complete, no outstanding critical risks exist, and that the appropriate agencies are prepared to request future funding, as applicable. Finally, stakeholders move to Phase 3 of the Framework to formalize the terms and conditions under which either CBP, GSA, or both, may accept the proposed donation. After CBP, GSA and the partner agree to the provisions of the project plan, they sign the legally binding Donations Acceptance Agreement, and stakeholders proceed to project execution. CBP Administers the Public-Private Partnerships Using Documented Policies and Procedures, and Implementation of the Programs Can Vary by Port CBP has documented standard operating procedures, roadmaps, and other formally documented policies and procedures to administer the RSP and DAP. In addition, as mentioned above, AFP officials conduct site visits to the POEs with new RSP and DAP agreements, and provide formal training for CBP personnel at Field Offices and POEs. 
The general process for administering RSP–from requesting and fulfilling services to billing and collecting payments–is dictated by standard operating procedures, as shown in figure 7. In general, RSP partners submit a formal request for services by completing an electronic form and calendar access via CBP's Service Request Portal. Once the partner submits the request, the portal sends an electronic copy of the request to the partner's email and the port's RSP email inbox. CBP supervisors at the POE access the Service Request Portal to review, edit, approve, deny, or cancel requests. The system tracks and requires CBP officials to comment on any requests that CBP edits, denies, or cancels, and sends an email notification of CBP's decision to the partner. If CBP approves the request, the Service Request Portal creates a line item with information about the request, such as codes for the location and partner, as well as the hours CBP officers will work.

Next, CBP officers enter line item information—information on accounting codes for the location and partner and the actual hours CBP officers worked to fulfill the request—into CBP's overtime management system. At the end of every shift, CBP supervisors review and approve the amount of overtime and other data entered into the overtime management system. In addition, data from this system is checked for accuracy and certified weekly by both CBP POE and AFP officials. After the overtime and request information is checked, payroll data generated from the overtime management system, including salary and benefits information for each officer that worked RSP overtime, uploads to CBP's financial accounting system at the end of each pay period, or every 14 days. CBP bills its partners for two full pay periods, and the partner has 15 days to make a full payment through the partner's account with the Department of the Treasury. After the partner makes the payment through the Department of the Treasury collection system, CBP National Finance Center officials reimburse the CBP annual Operations & Support account initially used to pay its officers for all of the RSP overtime worked during that pay cycle by moving the expenses to the RSP officer payroll fund. A simplified sketch of this billing arithmetic appears below.

Although the general request and billing processes for RSP services are the same across all POEs regardless of location or mode—air, land, or sea—CBP and its partners have flexibility to tailor RSP implementation based on local conditions or needs. Some of this implementation variation is documented in locally negotiated MOUs. For example, CBP's partner at Miami International Airport in Florida relies on CBP to schedule RSP overtime daily based on CBP expertise. CBP officials at the airport developed their own software templates to plan, track, and manage CBP officers for RSP overtime for a given amount of available overtime funding. At the Pharr land POE in Texas, CBP staff at the POE submit recommended RSP overtime request proposals to the partner based on local conditions, including staffing, and the partner decides whether to submit a formal request to CBP. In all of these instances, RSP partners and CBP Field Office and POE officials expressed satisfaction with their more customized administration processes.
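The billing arithmetic described above reduces to a few steps: sum the approved overtime hours and the associated salary and benefits for a partner over two pay periods, add the 15 percent administrative fee established in the Reimbursable Services Agreement, and give the partner 15 days from billing to pay. The sketch below is illustrative only; the record layout, partner code, overtime rate, and benefits amounts are hypothetical, while the 15 percent fee, the two-pay-period cycle, and the 15-day payment window are the elements described in this report.

from dataclasses import dataclass
from datetime import date, timedelta

ADMIN_FEE_RATE = 0.15     # 15 percent administrative fee, per the agreement terms
PAYMENT_WINDOW_DAYS = 15  # partner has 15 days from billing to pay


@dataclass
class OvertimeEntry:
    """One approved line item from the overtime system (field names hypothetical)."""
    officer_id: str
    partner_code: str
    hours: float
    overtime_rate: float   # dollars per hour; hypothetical rate
    benefits_cost: float   # dollars; hypothetical benefits loading


def bill_partner(entries, partner_code, bill_date):
    """Roll two pay periods of approved overtime into one partner bill."""
    matched = [e for e in entries if e.partner_code == partner_code]
    labor = sum(e.hours * e.overtime_rate for e in matched)
    benefits = sum(e.benefits_cost for e in matched)
    amount_due = round((labor + benefits) * (1 + ADMIN_FEE_RATE), 2)
    return {
        "partner_code": partner_code,
        "overtime_hours": sum(e.hours for e in matched),
        "amount_due": amount_due,
        "payment_due": bill_date + timedelta(days=PAYMENT_WINDOW_DAYS),
    }


# Example with a hypothetical partner code and two approved entries.
entries = [
    OvertimeEntry("officer_a", "MIA01", 8.0, 75.0, 120.0),
    OvertimeEntry("officer_b", "MIA01", 4.0, 75.0, 60.0),
]
print(bill_partner(entries, "MIA01", date(2017, 10, 2)))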
CBP and its partners also noted some challenges to implementing RSP and DAP agreements, but partners generally agreed that the program benefits outweighed the challenges. For example, some DAP partners we met with mentioned that navigating GSA requirements was difficult and sometimes caused delays. GSA officials we met with noted that they are educating partners on GSA building standards and the GSA approvals process for donations, among other things, to help partners manage their timelines and expectations. GSA officials noted that they are working with CBP and partner officials to manage and learn from these early implementation challenges.

CBP, GSA, and DAP partners also acknowledged a lack of clarity about which entity or entities are responsible for the long-term operations and maintenance costs of DAP infrastructure projects, although CBP has taken steps to address this issue. GSA pricing procedures dictate that once a POE receives an improvement, it charges the customer (CBP) for the additional operating costs, such as utilities. CBP officials acknowledged that the long term sustainability of donations, specifically the costs of operations, maintenance, and technology for infrastructure-based donations, needs to be addressed, and officials reported taking initial steps. For example, once CBP and its partner complete the planning of a project and GSA has calculated the project's estimated operating expenses, the AFP office begins working with the CBP Office of Facilities & Asset Management to budget for such costs with the goal of reaching a mutually acceptable partnership for donations that will have long-term sustainability. CBP officials noted that the agency cannot commit to funding that is not guaranteed for the future. To mitigate budget uncertainty, CBP now includes language in its MOU and Donations Acceptance Agreement templates stating that upon project completion, the partner will be responsible for all costs and expenses related to the operations and maintenance of the donation until the federal government has the available funding and resources to cover such costs. According to AFP officials, CBP also makes efforts to educate its DAP partners on the budgeting process and associated timeframes with project completion. CBP officials noted that the majority of projects are in the early stages of development, and it will be years before the projects are complete. Furthermore, GSA officials stated that the actual operating and maintenance costs associated with DAP projects will not be known until about 1 year after the projects are completed.

Public-Private Partnerships Are Increasing and Provide a Variety of Additional Services and Infrastructure Improvements

RSP Partnerships Are Increasing and Provide a Variety of Additional Services at POEs

As noted previously, as CBP's authorities to enter into new RSP agreements expanded to an unlimited number of agreements per year, and in total, for all types of POEs in 2017, the number of applications that CBP has selected has also increased. For example, in fiscal year 2013, CBP received 16 applications from interested stakeholders and selected five of these applications for partnerships, while in fiscal year 2017 cycle 2, CBP received 31 applications from interested stakeholders and tentatively selected 30 for partnerships. From fiscal year 2013 through fiscal year 2017 cycle 2, CBP has tentatively selected over 100 partners for RSP agreements. This figure includes RSP agreements under the authorities provided in Section 481 that allow CBP to enter into agreements with small airports to pay for additional CBP officers above the number of officers assigned at the time the agreement was reached.
Figure 8 details this information for each application cycle. As mentioned above, once CBP selects an application for a new reimbursable services partnership, CBP and its partner sign a legally binding Reimbursable Services Agreement. From fiscal years 2013 through 2017 cycle 2, CBP selected 114 applications and entered into 69 Reimbursable Services Agreements with partners. As mentioned previously, local CBP officials also work with the partner to negotiate the terms of an MOU, which outlines how the partnership will work at the POE. As of November 2017, CBP and its partners were implementing 54 MOUs from partnerships that they entered into from fiscal years 2013 through 2017. Of those 54 MOUs, 10 cover agreements at land POEs, 22 cover agreements at sea POEs, and 23 cover agreements at air POEs. According to AFP officials, during the process of negotiating the MOUs with its partners, CBP and the partner often agree to include a variety of services that the partner can request, so that if a need arises, there is a record that CBP has agreed to provide those services under the MOU. CBP and its partners also negotiate a variety of other terms for the agreements in the MOUs, including the types of requests for services the partner can make, expectations for how often CBP and its partners communicate, and how to amend the MOU, among others terms. Table 5 provides details about the existing 54 MOUs. As noted in the above table, MOUs detail a variety of services that CBP officers can provide at the POEs, and the types of services vary by POE type. For example, most MOUs across land, air, and sea POEs allow partners to request services for freight or cargo processing, while a majority of the MOUs at air POEs allow CBP to provide services for traveler processing and to address unanticipated irregular operations or diversions. In addition, all MOUs allow partners to submit ad-hoc requests that partners make for services in advance. Most of these MOUs also allow partners to make urgent requests for immediate services. In examining the MOUs, we found that 44 of the 54 MOUs, or 81 percent, indicate that CBP and its partner meet at least quarterly to discuss how the partnership is going. Further, CBP and some of its partners meet more often. For example, CBP and its partners agreed to meet monthly in accordance with 23 MOUs, while CBP and its partners agreed to meet weekly according to 3 MOUs. All partners we interviewed that have utilized their RSP agreements reported that maintaining strong communication between CBP and the partner is important to implementing the RSP agreements at the POEs. Appendix I has additional information about each of the 54 current MOUs. Tables 6 and 7 provide the amount that partners reimbursed CBP for overtime services, the total number of overtime hours that CBP officers worked for each fiscal year from 2014 through 2017, and the total number of travelers and vehicles that CBP officers inspected during RSP partner requests for services from fiscal years 2014 through 2017 respectively. DAP Partnerships Provide for Infrastructure Improvements at POEs Similar to the RSP, the number of DAP partnerships more than doubled in fiscal year 2017. In fiscal years 2015 and 2016, CBP selected seven DAP proposals. In fiscal year 2017, CBP selected 9 DAP proposals. Combined, these 16 DAP projects affect 13 POEs. 
The donations that partners will provide CBP and GSA, as applicable, include a variety of POE improvements such as the installation of new inspection booths and equipment, removal of traffic medians, and new cold inspection facilities, as well as smaller items such as a high-capacity perforating machine, which reduces document processing time and allows CBP officers to focus on more critical operational duties, among other donations. According to CBP, these 16 donation proposals combined are intended to support over $150 million in infrastructure improvements at U.S. POEs. CBP also expects a variety of benefits from these donations, including support for local and regional trade industries and tourism, reductions in border wait times, and increased border security and officer safety, among others. Table 8 provides information on the scope and status of DAP projects that CBP and GSA have selected since CBP established the DAP in fiscal year 2015. As noted in the table above, CBP has fully accepted six donations, including the donation of a high capacity perforating machine to facilitate the processing of titles and other documents at the Freeport Sea POE in fiscal year 2016, the removal of traffic medians at the Ysleta Land POE, and recurring luggage donations in fiscal year 2017. Figure 9 is a photo of the high capacity perforating machine that CBP accepted at the Port of Freeport Sea POE from its partner Red Hook Terminals in 2016. As mentioned above, once CBP selects an application for a new donation partnership, CBP, GSA, if applicable, and partner officials negotiate the terms of a MOU, which outlines intentions of the partnerships for projects that require coordinated planning and development. CBP currently has MOUs for 9 of its 16 DAP projects. The MOUs contain a variety of project- specific information, including the scope of the project, a list of documents that CBP and GSA may request to determine whether the project is ready for execution, and details on donor warranty and continuing financial responsibility after CBP and GSA accepts the donation. As mentioned previously, CBP classifies donations under the DAP into two categories: small-scale donations, which are reviewed on an expedited basis, and large-scale donations. For example, the Salvation Army’s recurring donation of six to nine pieces of luggage per year to support Office of Field Operations canine training activities is a small-scale donation. Large-scale donations are donations with an estimated value of $5 million or more and are moderate to significant in size, scope, and complexity. For example, the City of Laredo’s donation for construction of four additional commercial vehicle lanes and booths, roadways and infrastructure, and exit booths and related technologies is a large-scale donation. CBP Uses Various Processes to Monitor and Evaluate Its Partnerships, but Could Benefit from Establishing an Evaluation Plan to Assess Overall Program Performance CBP Has Various Processes to Monitor and Evaluate the Implementation and Benefits of Its Public- Private Partnership Programs RSP Audits, Metric Reports, and Partner Satisfaction Surveys Given that partner requests for RSP services are predominately for the purposes of CBP officer overtime, CBP primarily monitors the RSP through audits. 
Specifically, CBP conducts regular audits using information from its Service Request Portal, its overtime management system, and its internal accounting system to ensure partners appropriately reimburse CBP for the overtime services officers provide under the RSP. Figure 10 describes how and when CBP uses these tools to conduct audits as part of the RSP request, fulfillment, and billing processes. As noted previously, CBP officers who work RSP overtime enter information from the Service Request Portal, such as the partner code and POE code, into CBP’s overtime management system for the actual hours that the officer worked to complete the request. At the end of every shift, CBP supervisors review and approve the information entered into the overtime management system, which contains the information needed for CBP to bill its RSP partner for the services that it performed, such as the number of hours each CBP officer worked to fulfill RSP requests and the salary and benefits information for those officers. POE supervisors then update the Service Request Portal records so that they reflect what CBP officers actually worked. On Mondays, AFP officials and CBP POE supervisors conduct concurrent audits of weekly overtime management system reports and reconcile these data with the information from the Service Request Portal to ensure that CBP will bill the partner appropriately. At the end of two pay period cycles, or every 28 days, officials at CBP’s National Finance Center review the payroll and benefits information that was uploaded from the overtime management system into CBP’s financial management system to confirm that it matches the appropriate partner code. This ensures that the correct partner is billed for the reimbursable services that CBP provided. Generally, CBP and partner officials we met with did not have any problems with the billing and payment process, and CBP officials noted that any discrepancies in the billing information between the Service Request Portal, the overtime management system, or the financial accounting system, such as the partner code or the number of hours that CBP officers worked, are usually identified and corrected during the weekly audits. Further, in October 2017, we received a demonstration of how partners and CBP manage requests for services in the Service Request Portal, how CBP officers and supervisors at the POEs enter and review overtime information, and how CBP runs reports in its financial accounting system during the audit process. In addition, we conducted a test of the data from the overtime management system and the billing information from the financial accounting system for a selection of partners across eight pay periods from fiscal years 2014 through 2017 to determine if CBP billed its partners appropriately. Specifically, for each of the eight selected pay periods, we randomly selected one RSP partner from the universe of partners who used RSP services during the period. We then compared the number of RSP overtime hours logged in CBP’s overtime management system for the selected partners and pay periods with the number of hours on the corresponding partner bills. In all eight cases, the amount of RSP overtime hours logged by CBP officials matched the overtime hours billed to the partners. Our observations, review of applicable documentation, and testing provided reasonable assurance that CBP is being appropriately reimbursed by partners for the services that it provided under the RSP. 
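The test described above is, at bottom, an hours reconciliation: for a sampled partner and pay period, the RSP overtime hours logged in the overtime management system should equal the hours on the corresponding bill. The sketch below shows that comparison in general form. It is illustrative only; the partner codes, pay-period labels, and field names are hypothetical stand-ins, not actual CBP system fields.

from collections import defaultdict


def reconcile_hours(logged_records, billed_records, tolerance=0.0):
    """
    Compare total RSP overtime hours logged per (partner, pay period)
    against the hours billed for the same partner and period.
    Each record is a dict with 'partner_code', 'pay_period', and 'hours'
    (hypothetical field names). Returns a list of discrepancies.
    """
    def totals(records):
        sums = defaultdict(float)
        for r in records:
            sums[(r["partner_code"], r["pay_period"])] += r["hours"]
        return sums

    logged, billed = totals(logged_records), totals(billed_records)
    discrepancies = []
    for key in sorted(set(logged) | set(billed)):
        diff = logged.get(key, 0.0) - billed.get(key, 0.0)
        if abs(diff) > tolerance:
            discrepancies.append({
                "partner_code": key[0],
                "pay_period": key[1],
                "hours_logged": logged.get(key, 0.0),
                "hours_billed": billed.get(key, 0.0),
            })
    return discrepancies


# Example with one matching and one mismatched partner-period pair.
logged = [{"partner_code": "MIA01", "pay_period": "2017-18", "hours": 120.0},
          {"partner_code": "PHR02", "pay_period": "2017-18", "hours": 64.0}]
billed = [{"partner_code": "MIA01", "pay_period": "2017-18", "hours": 120.0},
          {"partner_code": "PHR02", "pay_period": "2017-18", "hours": 60.0}]
print(reconcile_hours(logged, billed))  # flags only the PHR02 mismatch

A zero tolerance mirrors the exact match GAO found in all eight sampled cases; a nonzero tolerance could be used to flag only differences above a rounding threshold.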
To evaluate the benefits of RSP services, the AFP office develops metrics reports on the services that CBP performed while fulfilling RSP requests throughout the billing cycle that it provides its partners. These metrics reports include data, such as the number of overtime hours CBP officers worked, the number of travelers CBP processed, the number of containers CBP inspected, and the average wait times CBP recorded during RSP overtime services, among other data. According to AFP officials, this information about the impact of reimbursable services helps partners make informed decisions when assessing their future requests. The AFP office works with partners to ensure that the information CBP provides in these reports is useful and will provide additional data upon the partners’ request, as applicable. CBP also conducts annual RSP partner satisfaction surveys to obtain feedback and evaluate overall satisfaction with program implementation. In 2015 and 2016, RSP partners expressed high levels of satisfaction about the level of services CBP provided, the request and fulfilment process, the billing and payment process, the monthly and annual metrics reports that CBP provides its partners, and the program’s ability to meet partner goals. Additionally, partners generally responded that the program allowed them to achieve their goals, which primarily focused on reducing wait times and increasing their own customer satisfaction levels. Monitoring and Evaluation of DAP Implementation and Benefits CBP has guidance that it follows to monitor and evaluate the implementation of DAP projects, and CBP and its partners use tools such as implementation roadmaps and other policy documents, such as standard operating procedures, to administer and monitor the progress of DAP projects at the POEs. For example, CBP develops project roadmaps for all donation projects in close collaboration with its partner, GSA (as applicable), and other entities involved in the project, and shares them with project participants. The roadmap identifies a variety of project milestones and tasks, such as drafting the MOU and completing the technical requirements package, among other things. The roadmap also tracks the number of days that CBP expects will be required to complete each task, which helps CBP to ensure that all stakeholders meet project milestones. CBP also monitors overall DAP implementation by collecting quantitative data on the efficiency of DAP processes to inform program and process improvements. For example, from 2015 to 2016, CBP consolidated certain elements of its application evaluation process to reduce the number of days it takes to evaluate and approve applications from an average of 144 days to 75 days for large-scale donations. Similarly, from 2015 to 2016, CBP determined that it could gain efficiencies by establishing a separate application evaluation and approval process for small-scale donation applications to better accommodate small-scale donations, and delegated approval and acceptance authority to the Office of Field Operations Executive Assistant Commissioner. This new process expedited the proposal evaluation timeline for small-scale donations from approximately 27 days to 14 days. In addition, GSA implemented a similar delegation authority for approval and acceptance of small-scale donations in fiscal year 2017, which decreased GSA’s application evaluation process from approximately 57 days to 25 days from fiscal year 2016 to 2017. 
In addition to monitoring the implementation of the overall program and the progress of specific DAP projects, CBP works with its partners to evaluate the benefits of each project. Specifically, during the planning and development phase of a donation, AFP officials coordinate with local CBP officials and DAP partners to develop a plan for identifying, measuring, and reporting on the local benefits to be derived from accepted donations upon project completion. CBP has completed its evaluation of the benefits of one completed small-scale project. For example, CBP estimated that the donated perforating machine at the Freeport Sea POE will save CBP 166 officer hours and approximately $7,450 in salary and maintenance costs per year. For large-scale projects, CBP is working with its partners to develop these evaluation plans, but it is too early for CBP to evaluate the benefits given that most of these projects are in the early planning and development phases. CBP shares its findings on benefits with its partners to help them assess their return on investment and so that they can share that information with their own local stakeholders. CBP Is Taking Steps to Plan for the Expansion of Its RSP and DAP, but Could Benefit from Establishing an Evaluation Plan to Assess Overall Program Performance CBP is taking steps to monitor the existing use and impacts of RSP and DAP and to plan for further expansion of these programs. For example, in addition to the monthly metrics reports that CBP provides its RSP partners, AFP officials told us that they monitor the fulfillment rates of formal partner requests for RSP services. The current fulfillment rate across all of CBP’s RSP agreements is over 99 percent. In addition, as noted previously, AFP officials coordinate with local CBP officials and DAP partners to develop a plan for identifying, measuring, and reporting on the local benefits to be derived from accepted donations upon project completion. Furthermore, with regard to planning for future program expansion, CBP has taken steps to plan for the additional oversight activities that it expects at the headquarters level as the RSP expands. For example, CBP is hiring new staff members and contractors for the AFP office, as well as reimbursing the Office of Finance for one staff position and embedding one staff member in the Budget Office to help complete the increased number of financial transactions and audits. In addition, the AFP office is considering the future impact of DAP projects on staffing and other resources at the affected POEs, and is working with Field Office, POE, and partner officials to identify and budget for anticipated operational needs, with assistance from CBP’s Workload Staffing Model and Planning, Program Analysis and Evaluation offices. These efforts to monitor and evaluate the impacts of the programs and plan for further expansion are positive steps that should help position CBP to manage anticipated increases in the number of agreements going forward. Furthermore, prior to Sections 481 and 482 authorities, in accordance with the report of the Senate Appropriations Committee accompanying the Department of Homeland Security Appropriations Act, 2013, CBP submitted semiannual reports to Congress on its Section 560 partnerships for fiscal years 2014 through 2016. CBP included information in these reports on the benefits of RSP services. 
For example, CBP compared baseline traveler and vehicle volume and wait times at participating POEs from previous years to the traveler and vehicle volume and wait times during time periods when CBP provided reimbursable services. Subsequently, in accordance with the Consolidated Appropriations Act, 2014, CBP developed an evaluation plan with objectives, criteria, evaluation methodologies, and data collection plans to be used to evaluate RSP and DAP performance on an annual and aggregated basis. However, the provision requiring that an evaluation plan be established for the section 559 pilot program was repealed by the Cross-Border Trade Enhancement Act of 2016. This Act requires that CBP report to Congress annually to identify the activities undertaken and the agreements entered into under the RSP and DAP but does not require that CBP develop or report on an evaluation plan for these programs. As of November 2017, CBP had not decided whether it will use a performance evaluation plan going forward. However, in December 2017, AFP officials acknowledged that such a plan—one that examines RSP and DAP performance at the programmatic level—could benefit program management and augment evaluation activities already conducted by the AFP office. We reviewed draft versions of CBP's fiscal year 2017 reports to Congress on new Section 481 fee agreements and new Section 482 donation agreements. Both reports detailed how CBP responded to changes in legislative authorities for the RSP and DAP and listed its fiscal year 2017 selections for public-private partnership agreements, but did not include an evaluation plan or identify measures for tracking program performance going forward.

Further, while the AFP office tracks the fulfillment rates of requests for RSP services and is working with its partners and other CBP components to monitor and plan for program expansion, CBP could benefit from a more robust assessment of the possible impacts of staffing challenges on program expansion. As mentioned above, as of fiscal year 2017, CBP has an overall staffing shortage of 2,516 officers, according to CBP's Workload Staffing Model analysis, and CBP officer hiring remains an agency-wide challenge. We identified some staffing challenges that could affect CBP's management and implementation of its RSP and DAP programs, which roughly doubled in the number of agreements from fiscal year 2016 to 2017. As of November 2017, public-private partnership agreements were in place at approximately one-third of all U.S. POEs. With the removal of the limit on the number of air agreements that CBP can enter each year, some POEs have or are anticipated by CBP to have more than one RSP agreement in place. According to AFP officials, if there are multiple RSP partnerships at the same POE, CBP will try to accommodate all partner requests. Generally, the AFP office expects the POEs to handle requests on a first-come, first-served basis. As the number of RSP partners increases across POEs, requests for services are likely to also increase, according to CBP officials. While it is too soon for CBP to assess the extent to which fulfillment rates may change over time, if at all, with the expansion of the program, officials noted that RSP agreements do not guarantee that CBP will be able to provide all services that partners request, and that RSP services are above and beyond what CBP would normally provide.
According to CBP, the recent increase in the mandated cap on officer overtime pay from $35,000 to $45,000 has allowed CBP officers to work more RSP overtime. Nevertheless, it is unclear how CBP will evaluate and address any increase in RSP agreements that may outpace the staff available to fulfill service requests. As noted previously, new authorities for the RSP also allow CBP to enter into agreements that allow partners to reimburse CBP for up to five additional officers, above the number assigned at the time the agreement was reached, at small airports. In fiscal year 2017, CBP selected four partners for this type of reimbursable services agreement. For its agreement with the Rhode Island Airport Corporation, CBP relocated three officers from the Boston-Logan International Airport, one of the busiest U.S. international airports, to T.F. Green State International Airport, which inspects less than 100,000 international travelers annually. AFP officials noted that, in accordance with legislation, the Port Director overseeing the port of origin for the CBP officer(s) added to small airports must determine that the movement of the officer(s) from one POE to another in fulfilling RSP agreements for additional CBP officers does not permanently affect operations at any other POE, including the POE that the officer(s) depart. However, CBP has not planned for how individual POEs or the agency more broadly would make these determinations or how CBP would evaluate any longer term impacts on overall CBP officer staffing resulting from the movement of officers among POEs. Office of Management and Budget guidance for making program expansion decisions indicates that agencies should evaluate cost- effectiveness in a manner that presents facts and supporting details among competing alternatives, including relative costs, benefits, and performance tradeoffs. Further, in September 2016 we developed a list of leading practices for evaluation based on the American Evaluation Association’s An Evaluation Roadmap for a More Effective Government, including development of an evaluation plan or agenda, a description of methods and data sources in evaluation reports, procedures for assuring evaluation quality, and tracking the use of evaluation findings in management or reforms, among others. CBP is taking steps to monitor its RSP and DAP and plan for program expansion. However, given its staffing challenges, CBP could benefit from developing and implementing an evaluation plan for assessing overall RSP and DAP performance. Such a plan could further integrate evaluation activities into program management and could better position CBP to assess relative costs, benefits, and performance trade-offs as CBP expands its RSP and DAP, and consider the extent to which any future program changes may be needed. Conclusions The amount of legitimate travel and trade entering through the nation’s POEs continues to increase each year. To date, CBP and its partners have utilized public-private partnerships to help meet an increased demand for CBP services and infrastructure improvements at POEs, and agency officials and program partners have generally concurred that the RSP and DAP have been effective in helping to bridge CBP resource gaps and improve partner operations. 
However, given CBP’s officer hiring and retention challenges and its finite resources for addressing infrastructure needs at POEs, CBP’s ability to monitor and evaluate the implementation of its public-private partnership programs is essential to ensuring that CBP leaders have the information that they need to make program decisions and identify and respond to challenges as the programs expand. As CBP continues to expand its public-private partnership programs, evaluating the RSP and DAP at the program level could better position CBP leaders to assess the relative costs, benefits, and performance trade-offs of continuing to expand the programs. It could also better position CBP to identify and respond to expansion challenges, such as CBP officer staffing. Recommendation for Executive Action The CBP Commissioner should develop and implement an evaluation plan to be used to assess the overall performance of the RSP and DAP, which could include, among other things, measurable objectives, performance criteria, evaluation methodologies, and data collection plans to inform future program decisions. (Recommendation 1) Agency Comments and Our Evaluation We provided a draft of this report to DHS and GSA for their review and comment. GSA indicated that it did not have any comments on the draft report via e-mail. DHS provided written comments, which are noted below and reproduced in full in appendix II, and technical comments, which we incorporated as appropriate. DHS concurred with our recommendation and described the actions it plans to take in response. Specifically, DHS stated that CBP will develop and implement a plan to assess the overall performance of the RSP and DAP to inform future program decisions. The plan will evaluate current partnerships, including but not limited to: service denial rate; trend analysis of frequency and type of requests; annual stakeholder survey results; impact of multiple stakeholders in one port location on levels of service provided; impact of unanticipated operations and maintenance costs associated with property donations; and staffing implications on donations of upgraded port infrastructure. If implemented effectively, these planned actions should address the intent of our recommendation. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the Administrator of the General Services Administration, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Details of U.S. Customs and Border Protection Reimbursable Services Program Agreement Memoranda of Understanding Since 2013, U.S. Customs and Border Protection (CBP) has entered into public-private partnerships with private sector or government entities under its Reimbursable Services Program (RSP) to cover CBP’s cost of providing certain services at U.S. ports of entry (POE) upon the request of partners. As of the end of fiscal year 2017, CBP approved 114 applications for reimbursable fee agreements. 
These services can include customs, immigration, or agricultural processing; border security and support at any facility where CBP provides, or will provide, services; and may cover costs such as salaries, benefits, overtime expenses, administration, and transportation costs. Once CBP selects an application for a new reimbursable services partnership, CBP and its partner sign a legally binding Reimbursable Services Agreement, which is a standard legal form that CBP uses for all new RSP agreements. Local CBP officials then work with the partner to negotiate the terms of a Memorandum of Understanding (MOU), which outlines how the partnership will work at the POE. In the following table, we provide select details from the 54 existing MOUs between CBP and its partners in the RSP.

In addition to the partners listed in the table above, CBP has also signed Reimbursable Services Agreements with the following partners, but has not completed negotiating the terms of an MOU as of the end of fiscal year 2017. Fiscal year 2016 partners:

1. City of Charlotte Aviation Department
2. Dole Fresh Fruit Company (Port of Wilmington, Delaware; Port Everglades; and Port of Freeport)
3. GT USA LLC
4. Port of Galveston
5. Presidio Port Authority Local Government Corporation
6. Red Hook Container Terminal, LLC
7. United Parcel Service Co.

Appendix II: Comments from the Department of Homeland Security

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Kirk Kiester (Assistant Director), Dominick Dale, Michele Fejfar, Eric Hauswirth, Stephanie Heiken, Susan Hsu, Elizabeth Leibinger, David Lutter, and Sasan J. "Jon" Najmi made significant contributions to this report.
Why GAO Did This Study International trade and travel to the United States are increasing. On a typical day in fiscal year 2016, CBP officers inspected nearly 1.1 million passengers and pedestrians and over 74,000 truck, rail, and sea containers at 328 U.S. land, sea, and air ports of entry, according to CBP. To help meet the increased demand for these types of CBP services, since 2013, CBP has entered into public-private partnerships under the RSP and DAP. The RSP allows partners to reimburse CBP for providing services that exceed CBP's normal operations, such as paying overtime for CBP personnel who provide services at ports of entry outside normal business hours. The DAP enables partners to donate property or provide funding for port of entry infrastructure improvements. The Cross-Border Trade Enhancement Act of 2016 included a provision for GAO to review the RSP and DAP. This report examines: (1) how CBP approves and administers RSP and DAP agreements, (2) the status of RSP and DAP agreements, including the purposes for which CBP has used funds and donations, and (3) the extent to which CBP monitors and evaluates program implementation. GAO reviewed partnership agreements and data on program usage. GAO also interviewed CBP and partner officials at 11 ports of entry selected based on a mix of port of entry and agreement types. What GAO Found Within the Department of Homeland Security, U.S. Customs and Border Protection (CBP) uses criteria and follows documented procedures to evaluate and approve public-private partnership applications and administer the Reimbursable Services Program (RSP) and Donations Acceptance Program (DAP). For example, RSP applications undergo an initial review by CBP officials at the affected ports of entry before they are scored by an expert panel of CBP officials at headquarters. The panel evaluates RSP applications against seven criteria, such as impact on CBP operations. Similarly, DAP proposals are evaluated by CBP officials against seven operational and six technical criteria, such as real estate implications. Further, if the proposal involves real estate controlled by the General Services Administration (GSA), CBP and GSA officials collaborate on DAP selection decisions and project implementation. To administer the RSP and DAP, CBP has documented policies and procedures, such as standard operating procedures and implementation frameworks. For example, CBP uses a standard procedure to guide the process for RSP partners to request services and to provide reimbursement. For DAP projects, CBP, GSA (if applicable), and partners follow an implementation framework that includes a project planning and design phase. The number of public-private partnerships is increasing, and partnerships provide a variety of additional services and infrastructure improvements at ports of entry. From fiscal years 2013 through 2017, CBP selected over 100 partners for RSP agreements that could impact 112 ports of entry and other CBP-staffed locations, and the total number of RSP partnerships doubled from fiscal year 2016 to 2017. According to CBP, since partners began requesting reimbursable services in 2014, CBP has provided its partners nearly 370,000 officer overtime hours of services, which led to over $45 million in reimbursed funds. As a result, CBP inspected an additional 8 million travelers and over 1 million personal and commercial vehicles at ports of entry. 
Similar to the RSP, the number of DAP partnerships more than doubled from fiscal year 2016 to 2017, and totals 16 projects that impact 13 ports of entry as of November 2017. The donations include improvements, such as the installation of new inspection booths and equipment and removal of traffic medians, and are intended to support over $150 million in infrastructure improvements. CBP uses various processes to monitor and evaluate its partnerships, but could benefit from establishing an evaluation plan to assess overall program performance. For example, CBP conducts regular audits of RSP records to help ensure that CBP bills and collects funds from its partners accurately, and uses guidance, such as the DAP Implementation Roadmap, to identify and monitor project milestones and tasks. However, as of November 2017, CBP had not developed an evaluation plan—which could include, among other things, measurable objectives, performance criteria, and data collection plans—to assess the overall performance of the RSP and DAP, consistent with Office of Management and Budget guidance and leading practices. Given CBP's staffing challenges and anticipated growth of the RSP and DAP, an evaluation plan could better position CBP to further integrate evaluation activities into program management. What GAO Recommends GAO recommends that CBP develop an evaluation plan to assess the overall performance of the RSP and DAP. DHS concurred with the recommendation.
Background PLCY Organizational Structure and Vacancies With the passage of the NDAA in December 2016, PLCY is to be led by an Under Secretary for Strategy, Policy, and Plans, who is appointed by the President with advice and consent of the Senate. The Under Secretary is to report directly to the Secretary of Homeland Security. Prior to the NDAA, the office was headed by an assistant secretary. Since the passage of the act, the undersecretary position has been vacant, and as of June 5, 2018, the President had not nominated an individual to fill the position. According to PLCY officials, elevating the head of the office to an undersecretary was important because it places PLCY on an equal footing with other DHS management offices and DHS headquarters components. The NDAA further authorizes, but does not require, the Secretary to establish a position of deputy undersecretary within PLCY. If the position is established, the NDAA provides that the Secretary may appoint a career employee to the position (i.e., not a political appointee). In March 2018, the Secretary named a Deputy Under Secretary, who has been performing the duties of the Deputy Under Secretary and the Under Secretary since then. As shown in figure 1, PLCY is divided into five sub-offices, each with a different focus area. As of June 5, 2018, the top position in each of these sub-offices was an assistant secretary, and two of the five positions were vacant. As of June 5, 2018, 6 of PLCY's 12 deputy assistant secretary positions were vacant or filled by acting staff temporarily performing the duties in the absence of permanent staff placement. PLCY's Policy and Strategy Responsibilities, and Strategic Priorities The NDAA codified many of the functions and responsibilities that PLCY had been carrying out prior to the act's enactment, and, with a few exceptions as discussed later in this report, these were largely consistent with the duties the office was already pursuing. According to the act and PLCY officials, one of the office's fundamental responsibilities is to lead, conduct, and coordinate departmentwide policy development and implementation, and strategic planning. According to PLCY officials, there are four categories of policy and strategy efforts that PLCY leads, conducts, or coordinates:
Statutory responsibilities: among others, the Homeland Security Act, as amended by the NDAA, includes such responsibilities as establishing standards of validity and reliability for statistical data collected by the department, conducting or overseeing analysis and reporting of such data, and maintaining all immigration statistical information of U.S. Customs and Border Protection, U.S. Immigration and Customs Enforcement, and U.S. 
Citizenship and Immigration Services; the Immigration and Nationality Act includes such responsibilities as providing for a system for collection and dissemination to Congress and the public of information useful in evaluating the social, economic, environmental, and demographic impact of immigration laws, and reporting annually on trends in lawful immigration flows, naturalizations, and enforcement actions.
Representing DHS in interagency efforts: coordinating or representing departmental policy and strategy positions for larger interagency efforts (e.g., interagency policy committees convened by the White House).
Secretary's priorities: leading or coordinating efforts that correspond to the Secretary of Homeland Security's priorities (e.g., certain immigration or law enforcement-related issues).
Self-initiated activities: opportunities to better harmonize policy and strategy or create additional efficiencies given PLCY's ability to see across the department. For example, PLCY officials said that DHS observed an increase in e-commerce and small businesses shipping items via carriers other than the U.S. Postal Service, thus exploiting a gap in DHS monitoring, which covers the U.S. Postal Service and other traditional shipping entities. PLCY officials noted that DHS's interest in addressing e-commerce issues occurred just before opioids and other controlled substances were being mailed through small businesses and the U.S. Postal Service. As a result, PLCY developed an e-commerce strategy for, among other things, the shipping of illegal items and how to provide information to U.S. Customs and Border Protection before parcels are shipped to the United States from abroad. In accordance with the NDAA, as PLCY leads, conducts, and coordinates policy and strategy, it is to do so in a manner that promotes and ensures quality, consistency, and integration across DHS and applies risk-based analysis and planning to departmentwide strategic planning efforts. The NDAA further provides that all component heads are to coordinate with PLCY when establishing or modifying policies or strategic planning guidance to ensure consistency with DHS's policy priorities. In addition to the roles PLCY plays that are directly related to leading, conducting, and coordinating policy and strategy, the office is responsible for select operational functions. For example, PLCY is charged with operating the REAL ID and Visa Waiver Programs. The NDAA also conferred responsibilities on PLCY that had not been responsibilities of the DHS Office of Policy prior to the NDAA's enactment. Among other things, the NDAA charged PLCY with responsibility for establishing standards of reliability and validity for statistical data collected and analyzed by the department, and ensuring the accuracy of metrics and statistical data provided to Congress. In conferring this responsibility, the act also transferred to PLCY the maintenance of all immigration statistical information of U.S. Customs and Border Protection, U.S. Immigration and Customs Enforcement, and U.S. Citizenship and Immigration Services. 
PLCY has established five performance goals: build departmental policy-making capacity and coordination, and foster the Unity of Effort; mature the office as a mission-oriented, component-focused organization that is responsive to DHS leadership; effectively engage and leverage stakeholders; enhance productivity and effectiveness of policy personnel through appropriate alignment of knowledge, skills, and abilities; and ensure accountability, transparency, and leadership. PLCY officials stated that the office established the performance goals in fiscal year 2015 and they were still in effect as of fiscal year 2018. Homeland Security Crosscutting Missions and Functions As previously discussed, DHS has eight operational components. DHS also has six support components. Although each one has a distinct role to play in helping to secure the homeland, there are operational and support functions that cut across mission areas. For example, nearly every operational component has, as part of its security operations, a need for screening, vetting, and credentialing procedures and risk-targeting mechanisms. Likewise, nearly all operational components have some form of international engagement, deploying staff abroad to help secure the homeland before threats reach U.S. borders. Finally, as shown in figure 2, different aspects of broad mission areas fall under the purview of more than one DHS operational component. Key Departmentwide and Crosscutting Strategic Efforts PLCY is responsible for coordinating three key DHS strategic efforts: the QHSR, the DHS Strategic Plan, and the Resource Planning Guidance. The QHSR is a comprehensive examination of the homeland security strategy of the nation that is to occur every 4 years and include recommendations regarding the long-term strategy and priorities for homeland security of the nation and guidance on the programs, assets, capabilities, budget, policies, and authorities of DHS. The QHSR is to be conducted in consultation with the heads of other federal agencies, key DHS officials (including the Under Secretary, PLCY), and key officials from other relevant governmental and nongovernmental entities. The DHS Strategic Plan describes how DHS can accomplish the missions it identifies in the QHSR report, identifies high-priority mission areas within DHS, and lays the foundation for DHS to accomplish its Unity of Effort Initiative as well as various cross-agency priority goals in the strategic plan, such as cybersecurity. The Resource Planning Guidance describes DHS's annual resource allocation process in order to execute the missions and goals of the QHSR and DHS Strategic Plan. The Resource Planning Guidance contains guidance over a 5-year period and informs several forward-looking reports to Congress, including the annual fiscal year Congressional Budget Justification as well as the Future Years Homeland Security Program Report. PLCY Has Effectively Coordinated Intradepartmental Strategy Efforts, but Ambiguous Roles and Responsibilities Have Limited PLCY's Effectiveness in Coordinating Policy Although PLCY has effectively carried out key coordination functions at the senior level related to strategy, PLCY's ability to lead and coordinate policy has been limited due to ambiguous roles and responsibilities and a lack of predictable, accountable, and repeatable procedures. 
PLCY Has Effectively Conducted Key Coordination Functions at the Senior Level According to our analysis and interviews with operational components, PLCY's efforts to lead and coordinate departmentwide and crosscutting strategies—a key organizational objective—have been effective in providing opportunities for all relevant stakeholders to learn about and contribute to departmentwide or crosscutting strategy development. In this capacity, PLCY routinely serves as the executive agent for the Deputies Management Action Group and the Senior Leaders Council, a role that involves analytical and coordination support. PLCY also provides support for deputy- and principal-level decision making. For example, the Strategy and Policy Executive Steering Committee (S&P ESC) meetings have been used to discuss components' implementation plans for crosscutting strategies, PLCY's requests for information from components for an upcoming strategy, and updates on departmentwide strategic planning initiatives. According to PLCY and operational component officials, PLCY also provides leadership for the Resource Planning Guidance and Winter Studies, both of which help inform departmentwide resource decision making. For example, officials from one operational component stated that PLCY's leadership of the Resource Planning Guidance is a helpful practice for coordination and collaboration on departmentwide or crosscutting strategies. The officials stated that PLCY reaches out to ensure that the component is covering the Secretary's priorities, and this helps the component to ensure that its budget includes them. Furthermore, PLCY develops and coordinates policy options and opinions for the Secretary to present at the National Security Council and other White House-level meetings. For example, PLCY officials told us that, in light of allegations of Russian involvement in using poisonous nerve agents on two civilians in Great Britain, PLCY coordinated the collection of information to develop a policy recommendation for the Secretary to present at a National Security Council meeting. Ambiguity in Roles and Responsibilities and a Lack of Predictable, Repeatable, and Accountable Procedures Have Limited PLCY's Ability to Lead and Coordinate Policy PLCY has encountered challenges leading and coordinating efforts to develop, update, or harmonize policy—also a key organizational objective—because it does not have clearly-defined roles, responsibilities, and mechanisms to implement these responsibilities in a predictable, repeatable, and accountable way. Standards for Internal Control in the Federal Government states that management should establish an organizational structure, assign responsibility, and delegate authority to achieve the entity's objectives. As such, an organization's management should develop an organizational structure with an understanding of the overall responsibilities and assign these responsibilities to discrete units to enable the organization to operate in an efficient and effective manner. An organization's management should also implement control activities through policies. It is important that an organization's management document and define policies and communicate those policies and procedures to personnel, so they can implement control activities for their assigned responsibilities. 
In addition, leading collaboration practices we have identified in our prior work include defining and articulating a common outcome, clarifying roles and responsibilities, and establishing mutually-reinforcing or joint strategies to enhance and sustain collaboration, such as the work that PLCY and the components need to do together to ensure that departmentwide and crosscutting policy is effective for all relevant parties. According to PLCY officials, in general, PLCY is responsible for leading the development of a policy when it crosses multiple components or if there is a national implication, including White House interest in the policy. However, PLCY officials acknowledged that this practice does not always make them the lead and there are no established criteria that define the circumstances under which PLCY (or another organizational unit) should lead development of policies that cut across organizational boundaries. PLCY officials said the lead entity for a policy is often announced in an email from the Secretary’s office, on a case-by-case basis. According to PLCY officials, once components have been assigned responsibility for a policy, they have generally tended to retain it, and PLCY may not have oversight for crosscutting policies that are maintained by operational components. Therefore, there is no established, coordinated system of oversight to periodically monitor the need for policy harmonization, revision, or rescission. In the absence of clear roles and responsibilities, and processes and procedures to support them, PLCY and officials in 5 of the 8 components have encountered challenges in coordinating with each other. Although PLCY and most component officials we interviewed described overall positive experiences in coordinating with each other, we identified multiple instances of (1) confusion about which parties should lead and engage in policy efforts, (2) not engaging components at the right times, (3) incompatible expectations around timelines, and (4) uncertainty about PLCY’s role and the extent to which it can and should identify and drive policy in support of a more cohesive DHS. Confusion about who should lead and engage. Officials from one operational component told us that they were tasked with leading a departmentwide policy development effort they believed was outside their area of responsibility and expertise. Officials in another operational component stated that components sometimes end up coordinating among themselves, but that policy development could be more effective and efficient if PLCY took the role of convener and facilitator to ensure the departmentwide perspective is present and all relevant stakeholders participate. Officials from a third component stated that they spent significant time and resources to develop a policy directly related to their component’s mission. As the component got ready to implement the policy, PLCY became aware of it and asked the component to stop working on the policy, so PLCY could develop a departmentwide policy. According to component officials, while they were supportive of a departmentwide policy, PLCY’s timing delayed implementation of the policy the component had developed and wasted the resources it had invested. Moreover, officials from four operational components told us that sometimes counselors from outside PLCY, such as the Secretary’s office, have led policy efforts that seem like they should be PLCY’s responsibility, which created more confusion about what PLCY’s ongoing role should be. 
PLCY officials agreed that, at times, it has been challenging to define PLCY’s role relative to counselors for the Secretary, and acknowledged that clear guidance to define who is leading which types of policy development and coordination would be helpful. Not engaging components at the right times. Officials from 5 of 8 operational components told us that they had not always been engaged at the right times by PLCY in departmentwide or crosscutting policies that affected their missions. For example, officials from an operational component described a crosscutting policy that had significant implications for some of its key operational resources, but the component was not made aware of the policy until it was about to be presented at the White House. Officials from another component stated that they learned of a new policy after it was in place and had to find significant training and software resources to implement it even though they viewed the policy as unnecessary for their mission. PLCY officials stated that, while they intend to identify all components that should be involved in a policy, there are times when PLCY is unaware a component is developing a policy that affects other components. PLCY officials said they will involve other components when PLCY becomes aware that a component is developing such a policy. PLCY officials stated that it would be helpful to have a process and procedures for cross-component coordination on policies to help guide engagement regardless of who is developing the policy. Incompatible expectations around timelines. Officials at 4 of 8 operational components stated that short timelines from PLCY to provide input and feedback can prevent PLCY from obtaining thoughtful and complete information from components. For example, officials from one component stated that PLCY asked them to perform an analysis that would inform major, departmental decision-making and quickly provide the analysis. Component officials told us that they did not understand why PLCY needed the analysis on such an accelerated timeline, which seemed inappropriate given the level of importance and purpose of the analysis. Officials from another component told us that PLCY had not always provided enough time to provide thoughtful feedback; therefore, component officials were not sure if PLCY really wanted their feedback. Officials from a third component stated that sometimes PLCY did not provide sufficient time for thoughtful input or feedback that had cleared the component’s legal review, so component officials elected to miss PLCY’s deadline and provide late feedback. PLCY officials told us that, frequently, timelines are not within their control, a situation that some component officials also noted during our interviews with them. However, PLCY officials agreed that a documented, predictable, and repeatable process and procedures for policies may help ensure PLCY provides sufficient comment time when in its control and may provide a basis to help negotiate timelines with DHS leadership in other situations. PLCY officials stated that, even with a documented process and procedures, there would still be circumstances when short timelines are unavoidable. Uncertainty about PLCY’s role in driving policy harmonization. 
Policy officials at 6 of 8 operational components told us that they were unsure or not aware of PLCY's role in harmonizing policy across the department, and stated a desire for PLCY to be more involved in harmonizing or enhancing departmentwide and crosscutting policy or for greater clarity about PLCY's responsibility to play this role. As previously discussed, PLCY's policy and strategy efforts fall into four categories—statutory responsibilities, interagency efforts, Secretary's priorities, and self-initiated activities; these activities include efforts to better harmonize policies and strategies. According to PLCY officials, the category with the lowest priority is self-initiated activities. PLCY officials stated that PLCY makes tradeoffs and rarely chooses to work on self-initiated projects over its other three categories of effort. According to the officials, PLCY's work on the other three higher-priority categories is sufficient to ensure that the office is effectively leading, conducting, and coordinating strategy and policy across the department. Given its organizational position and strategic priorities, PLCY is uniquely situated to identify opportunities to better harmonize or enhance departmentwide and crosscutting policy, a role that is in line with its strategic priority to build departmental policymaking capacity and foster Unity of Effort. In the absence of clear articulation of the department's expectations for PLCY in this role, it is difficult for PLCY and DHS leadership to make completely informed and deliberate decisions about the tradeoffs they make across any available resources. Past Efforts to Define and Codify PLCY's Roles and Responsibilities in a Delegation of Authority Remain Incomplete In addition to statutory authority that PLCY received in the NDAA, PLCY officials stated that a separate, clear delegation of authority—a mechanism by which the Secretary delegates responsibilities to other organizational units within DHS—is needed to help address the ambiguity in roles that it has experienced in the past. PLCY officials stated that past efforts to finalize a delegation of authority have stalled during leadership changes and that the initiative has been a lower priority, in part, due to where PLCY is in its maturation process and DHS is in its evolution into a more cohesive department under the Unity of Effort. As of May 2018, the effort had been revived, but it is not clear whether and when DHS will finalize it. According to a senior official in the Office of the Under Secretary for Management, a delegation of authority is important for PLCY. He described the creation of a delegation of authority as a process that does more than simply delegate the Secretary's authority. He noted that defining PLCY's roles and responsibilities in relation to other organizational units presents an opportunity to engage all relevant components and agree on appropriate roles. He said that, earlier in the organizational life of the Office of the Under Secretary for Management, it went through a process like this, which has been vital to its ability to carry out its mission. He said that, now that PLCY has a deputy undersecretary in place, this is a good time to restart the process to develop the delegation of authority. Until the delegation or a similar process clearly and fully articulates PLCY's roles and responsibilities, PLCY and the operational components are likely to continue to experience limitations in collaboration on crosscutting and departmentwide policy. 
PLCY Identifies Workforce Needs during the Annual Budget Cycle, but Could Apply DHS Workforce Planning Guidance to Better Identify and Communicate Resource Needs PLCY determines its workforce needs through the annual budget process, but systematic identification of workforce demand, capacity gaps, and strategies to address them could help ensure that PLCY's workforce aligns with its and DHS's priorities and goals. PLCY Uses the Annual Budget Cycle to Determine Workforce Needs and Requires Flexibility in Staffing To determine its workforce needs each year, PLCY officials told us that, as part of the annual budget cycle, they work with PLCY staff and operational components to determine the scope of activities required for each PLCY area of responsibility and the associated staffing needs. PLCY officials said there are three skill sets needed to carry out the office's responsibilities: policy analysis, social science analysis, and regional affairs analysis. PLCY officials explained that the office's priorities can change rapidly as events occur and the Secretary's and administration's priorities shift. Therefore, according to PLCY officials, their staffing model must be flexible. They said that, rather than a defined system of full-time equivalents with set position types and levels, PLCY officials start with their budget allotment and consider current and potential emerging needs to set position types and levels, which may fluctuate significantly from year to year. In addition, PLCY officials stated that PLCY staff are primarily generalists and, given the versatility in skill sets of their workforce, PLCY has a lot of flexibility to move staff around if there is an emerging need. For example, if there is an emerging law enforcement issue that affects all law enforcement agencies, PLCY may be tasked with developing a policy to ensure the issue is addressed quickly and that the resulting policy is harmonized across the department and with other law enforcement agencies, such as the Department of Justice. PLCY Has Not Used DHS's Workforce Planning Guide to Analyze Workforce Gaps or Communicate Tradeoffs to DHS Management to Ensure Alignment with DHS Priorities While PLCY completes some workforce planning activities as part of its annual budgeting process, PLCY does not systematically address several aspects of the DHS Workforce Planning Guide that may create more efficient operations and greater alignment with DHS priorities. According to the DHS Workforce Planning Guide, workforce planning is a process that ensures the right number of people with the right skills are in the right jobs at the right time for DHS to achieve the mission. This process provides a framework to: align workforce planning to the department's mission and goals; predict, then assess how evolving missions, new processes, or environmental conditions may impact the way that work will be performed at DHS in the future; identify gaps in capacity; develop and implement strategies and action plans to address capacity and capability gaps; and continuously monitor the effectiveness of action plans and modify, as necessary. The DHS Workforce Planning Guide stipulates that an organization's management should not only lead and show support during the workforce planning process, but ensure alignment with the strategic direction of the agency. Moreover, Standards for Internal Control in the Federal Government states that management should use quality information to achieve the entity's objectives. 
For example, management uses an entity's operational processes to make informed decisions and evaluate the entity's performance in achieving key agency objectives. According to PLCY officials, the current staffing paradigm involves shifting the office's staff when new and urgent issues arise from the Secretary or White House, and adding these unexpected tasks to staff's existing responsibilities. However, this means that tradeoffs are made, resulting in some priority items taking longer to address or not getting attention at all. PLCY officials stated that they have been caught off-guard at times by changes in demands placed on PLCY and had to scramble to address the new needs. Additionally, PLCY officials said they have a number of vacancies, which hamper the office's ability to meet certain aspects of its mission. For example, PLCY's Office of Cyber, Infrastructure, and Resilience was created in 2015. According to PLCY officials, PLCY has had some resources to address cyber issues; however, there has not been funding to staff this office and an assistant secretary has not been appointed to lead it. Therefore, PLCY officials stated that PLCY has not been able to address its responsibilities for infrastructure resilience. Similarly, PLCY has limited capacity for risk analysis. A provision of the NDAA provides that PLCY is to: develop and coordinate strategic plans and long-term goals of the department with risk-based analysis and planning to improve operational mission effectiveness, including consultation with the Secretary regarding the quadrennial homeland security review under section 707 [6 U.S.C. § 347]. However, PLCY officials acknowledged that their focus on identifying needs for risk analyses and conducting them has been limited, in part, because DHS disbanded the risk management office. Officials from one component told us that they contribute to a report that PLCY coordinates, called Homeland Security National Risk Characteristics, which is prepared as a precursor to the DHS Strategic Plan. PLCY officials stated that, outside of these foundational documents and some risk-based analyses completed as part of specific policy development efforts, PLCY does not have the capacity to complete any additional risk analysis activities. Although PLCY officials said they conduct some analysis of potential demands as a starting point for how to allocate PLCY's annual staffing budget, these efforts are largely informal and internal and have not resulted in a systematic analysis that provides PLCY and DHS management with the information they need to understand the effects of resource tradeoffs. Also, PLCY officials said they track accomplishments toward PLCY's strategic priorities as part of a weekly meeting and report; however, officials acknowledged they do not analyze what role workforce decisions have played in achieving or not achieving strategic priorities. Moreover, although PLCY officials stated that they have intermittent, in-person, informal communication about resource use, they have not used the principles outlined in the DHS Workforce Planning Guide to systematically identify and communicate workforce demands, capacity gaps, and strategies to address workforce issues. According to PLCY officials, they have not conducted such analysis, in part, because the Secretary's office has not requested it of them or the other DHS offices that are funded in the same part of the DHS budget. 
Regardless of whether the Secretary expects workforce analysis as part of the budgeting process, the DHS Workforce Planning Guide could be used within and outside of the budgeting process to help inform resource decision making throughout the year. PLCY officials stated that at the PLCY Deputy Under Secretary's initiative, they recently began a review of all relevant statutory authorities, which they will map against the current organizational structure and day-to-day operations. The Deputy Under Secretary plans to use the results of the review to enhance PLCY's efficiency and effectiveness, and the results could serve as a foundation for a more holistic and systematic analysis of workforce demand, any capacity gaps, and strategies to address them. Employing workforce planning principles—in particular, systematic identification of workforce demand, capacity gaps, and strategies to address them—consistent with the DHS Workforce Planning Guide could better position PLCY to use its workforce as effectively as possible under uncertain conditions. Moreover, using the DHS guide would help PLCY to systematically communicate information about any workforce gaps to DHS leadership, so there is transparency about how workforce tradeoffs affect PLCY's ability to support DHS goals. Additional External Communication Practices Could Enhance PLCY's Collaboration with DHS Stakeholders As discussed earlier, officials from PLCY and DHS operational components praised existing mechanisms to coordinate and communicate at the senior level, especially about strategy. However, component officials identified opportunities for PLCY to better connect at the staff level to identify and respond to emerging policy and strategy needs. Leading practices for collaboration that we have identified in our prior work state that it is important to ensure that all relevant participants have been included in a collaborative effort, and positive working relationships among participants from different agencies or offices can bridge organizational cultures. These relationships build trust and foster communication, which facilitate collaboration. Also, as previously stated, PLCY has mechanisms like the S&P ESC to communicate and coordinate with operational components and other DHS stakeholders at the senior level (e.g., Senior Executive Service officials). However, PLCY does not have a mechanism to effectively engage in routine communication and collaboration at the staff level (e.g., program and policy specialists working at operational components to oversee or implement policy and strategy functions). Specifically, officials with responsibility for policy and strategy at 6 of 8 operational components told us that they did not have regular contact with or know who to contact at PLCY for questions about policies or strategies, or that the reason they knew who to contact was because of existing working relationships, not because of efforts PLCY had undertaken to facilitate such contacts. In addition, some component officials noted that, when they tried to use the PLCY website to coordinate, they found it to be out of date and lacking sufficient information. PLCY officials acknowledged that the website needs improvement. They stated that the office has developed improved content for the website, but does not have the necessary staff to update the website. According to the officials, the needed staff should be hired soon and improved content should be on the website by the end of summer 2018. 
Although officials at 5 of the 8 operational components we interviewed stated that the quality of PLCY's coordination and collaboration has improved in the past 2 years or so, component officials offered several suggestions to enhance PLCY's coordination and collaboration, especially at the staff level. Among these were suggestions to: conduct routine information-sharing meetings with staff-level officials who have policy and strategy responsibilities at each operational component; clearly articulate points of contact, their contact information, and their portfolios at PLCY as well as at other policy and strategy stakeholders; ensure the PLCY website is up-to-date with contact information for PLCY and components that work in strategy and policy areas, and with relevant information about crosscutting strategy and policy initiatives underway; host a forum—such as an annual conference—to bring together policy and strategy officials from PLCY and DHS components to share ideas and make contacts; and prepare a standard briefing for component officials with strategy and policy responsibilities to help ensure that staff at all levels understand what PLCY does, how it works, and opportunities for engagement on emerging policy and strategy needs or identified harmonization opportunities. For example, officials from one component told us that they would like PLCY officials to have in-person meetings with component staff to discuss what PLCY does, who to contact in PLCY, where to find information about policies and strategies, and other relevant information to ensure a smooth working relationship between the component and PLCY. According to PLCY officials, the office recognizes the value of creating mechanisms to connect staff who work on policy and strategy at all levels in DHS. PLCY officials said they have historically done a better job in coordinating at the senior level, but are interested in expanding opportunities to connect other staff with policy and strategy responsibilities. PLCY officials stated that they are considering creating a working group structure that mirrors existing organizational mechanisms to coordinate at the senior level, but have not taken steps to do so. Routine collaboration among PLCY, operational components, and other DHS offices at the staff level is important to ensure that PLCY is able to carry out its functions under the NDAA, including the effective coordination of policies and strategies. A positive working relationship among these stakeholders can build trust, foster communication, and facilitate collaboration. Such enhanced communication and collaboration across PLCY and among component officials with policy and strategy responsibility could help the department more quickly and completely identify emerging, crosscutting strategy and policy needs and opportunities to enhance policy harmonization. Conclusions PLCY's efforts to lead, conduct, and coordinate departmentwide and crosscutting policies have sometimes been hampered by the lack of clearly-defined roles and responsibilities. In addition, PLCY does not have a consistent process and procedures for its strategy development and policymaking efforts. 
Without a delegation of authority or similar documentation from DHS leadership clearly articulating PLCY’s missions, roles, and responsibilities—along with defined processes and procedures to carry them out in a predictable and repeatable manner—there is continuing risk that confusion and uncertainty about PLCY’s authority, missions, roles, and responsibilities will limit its effectiveness. PLCY employs some workforce planning, but does not systematically apply key principles of the DHS Workforce Planning Guide to help predict workforce demand, and identify any workforce gaps and design strategies to address them. Without this analysis, PLCY faces limitations in ensuring that its workforce is aligned with its and DHS’s priorities and goals. Moreover, the results of this analysis would better position PLCY to communicate to DHS leadership any potential tradeoffs in workforce allocation that would affect PLCY’s ability to meet priorities and goals. PLCY could enhance its use of mechanisms for collaboration and communication with DHS stakeholders at the staff level. Implementation of additional mechanisms at the staff level for regular communication and coordination, including providing up-to-date information to stakeholders about the office, could help PLCY and operational components to better connect in order to identify and address emerging policy and strategy needs. Recommendations for Executive Action We are making the following four recommendations to DHS: The Secretary of Homeland Security should finalize a delegation of authority or similar document that clearly defines PLCY’s mission, roles, and responsibilities relative to DHS’s operational and support components. (Recommendation 1) The Secretary of Homeland Security should create corresponding processes and procedures to help implement the mission, roles, and responsibilities defined in the delegation of authority or similar document to help ensure predictability, repeatability, and accountability in departmentwide and crosscutting strategy and policy efforts. (Recommendation 2) The Under Secretary for Strategy, Policy, and Plans should use the DHS Workforce Planning Guide to help identify and analyze any gaps in PLCY’s workforce, design strategies to address any gaps, and communicate this information to DHS leadership. (Recommendation 3) The Under Secretary for Strategy, Policy, and Plans should enhance the use of collaboration and communication mechanisms to connect with staff in the components with responsibilities for policy and strategy to better identify and address emerging needs. (Recommendation 4) Agency Comments and Our Evaluation We provided a draft of this report for review and comment to DHS. DHS provided written comments, which are reproduced in appendix I. DHS also provided technical comments, which we incorporated, as appropriate. DHS concurred with three of our recommendations and described actions planned to address them. DHS did not concur with one recommendation. Specifically, DHS did not concur with our recommendation that PLCY should use the DHS Workforce Planning Guide to help identify and analyze any gaps in PLCY’s workforce, design strategies to address any gaps, and communicate this information to DHS leadership. The letter described a number of actions, including actions that are also described in the report, which PLCY takes to help ensure alignment of its staff with organizational needs. 
In the letter, PLCY officials pointed to the workforce activities PLCY undertakes as part of the annual budgeting cycle. We acknowledge that the actions described to predict upcoming priorities and resource needs as part of the annual budgeting cycle are in line with the DHS workforce planning principles. However, as we noted, there are opportunities to apply the workforce planning principles outside the annual budgeting cycle to provide greater visibility and awareness of resource tradeoffs to management inside PLCY and in the Secretary’s office. In the letter, PLCY officials made note of the dynamic and changing nature of its operational environment, stating that it often required them to shift resources and priorities on a more frequent or ad hoc basis than many organizations. We acknowledged in the report that PLCY’s operating environment requires it to maintain flexibility in its staffing approach. However, PLCY has a number of important duties, including helping foster Unity of Effort throughout the department and helping to ensure the availability of risk information for departmental decision making, that require longer-term, sustained attention and strategic management. During interviews, PLCY officials acknowledged that striking a balance between these needs has been difficult and at times they have faced significant struggles. The report describes some areas where, during the time we were conducting our work, it was clear that some tasks and functions, such as risk analyses, lacked the resources or focus necessary to ensure they received sustained institutional attention. It is because of PLCY’s dynamic operating environment, coupled with the need for sustained institutional attention to other key responsibilities, that we recommended PLCY undertake workforce planning activities that would help generate better information for PLCY and DHS management to have full visibility and awareness of gaps and resource tradeoffs. Finally, the letter stated that because PLCY is a very small and flat organization, it is able to identify capacity gaps and develop action plans without obtaining all of the data collected through each recommended element, worksheet, form, and template of the model proposed in the DHS Workforce Planning Guide. We acknowledge that it would be counterproductive for PLCY to engage in data collection and analysis that are significantly more elaborate than its planning needs. Nevertheless, we continue to believe that PLCY could use the principles more robustly, outside the annual budgeting process, to help ensure that it identifies and communicates the effect that resource tradeoffs have on its ability to accomplish its multifaceted mission. We are sending copies of this report to the appropriate congressional committees and the Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (404) 679-1875 or CurrieC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in Appendix II. Appendix I: Comments from the Department of Homeland Security Appendix II: GAO Contact and Staff Acknowledgements GAO Contact Staff In addition to the contact named above, Kathryn Godfrey (Assistant Director), Joseph E. 
Dewechter (Analyst-in-Charge), Michelle Loutoo Wilson, Ricki Gaber, Dominick Dale, Thomas Lombardi, Ned Malone, David Alexander, Sarah Veale, and Michael Hansen made key contributions to this report.
Why GAO Did This Study GAO has designated DHS management as high risk because of challenges in building a cohesive department. PLCY supports cohesiveness by, among other things, coordinating departmentwide policy and strategy. In the past, however, questions have been raised about PLCY's efficacy. In December 2016, the NDAA codified PLCY's organizational structure, roles, and responsibilities. GAO was asked to evaluate PLCY's effectiveness. This report addresses the extent to which (1) DHS established an organizational structure and processes and procedures that position PLCY to be effective, (2) DHS and PLCY have ensured alignment of workforce with priorities, and (3) PLCY has engaged relevant component staff to help identify and respond to emerging needs. GAO analyzed the NDAA, documents describing specific responsibilities, and departmentwide policies and strategies. GAO also interviewed officials in PLCY and all eight operational components. What GAO Found According to our analysis and interviews with operational components, the Department of Homeland Security's (DHS) Office of Strategy, Policy, and Plans' (PLCY) organizational structure and efforts to lead and coordinate departmentwide and crosscutting strategies—a key organizational objective–have been effective. For example, PLCY's coordination efforts for a strategy and policy executive steering committee have been successful, particularly for strategies. However, PLCY has encountered challenges leading and coordinating efforts to develop, update, or harmonize policies that affect multiple DHS components. In large part, these challenges are because DHS does not have clearly-defined roles and responsibilities with accompanying processes and procedures to help PLCY lead and coordinate policy in a predictable, repeatable, and accountable manner. Until PLCY's roles and responsibilities for policy are more clearly defined and corresponding processes and procedures are in place, situations where the lack of clarity hampers PLCY's effectiveness in driving policy are likely to continue. Development of a delegation of authority, which involves reaching agreement about PLCY's roles and responsibilities and clearly documenting them, had been underway. However, it stalled due to changes in department leadership. As of May 2018, the effort had been revived, but it is not clear whether and when DHS will finalize it. PLCY does some workforce planning as part of its annual budgeting process, but does not systematically apply key principles of the DHS Workforce Planning Guide to help ensure that PLCY's workforce aligns with its and DHS's priorities and goals. According to PLCY officials, the nature of its mission requires a flexible staffing approach. As such, a portion of the staff functions as generalists who can be assigned to meet the needs of different situations, including unexpected changing priorities due to an emerging need. However, shifting short-term priorities requires tradeoffs, which may divert attention and resources from longer-term priorities. As of June 5, 2018, PLCY also had a number of vacancies in key leadership positions, which further limited attention to certain priorities. According to PLCY officials, PLCY recently began a review to identify the office's authorities in the National Defense Authorization Act for Fiscal Year 2017 (NDAA) and other statutes, compare these authorities to the current organization and operations, and address any workforce capacity gaps. 
Employing workforce planning principles—in particular, systematic identification of workforce demand, capacity gaps, and strategies to address them—consistent with the DHS Workforce Planning Guide could better position PLCY to use its workforce as effectively as possible under uncertain conditions and to communicate effectively with DHS leadership about tradeoffs. Officials from PLCY and DHS operational components praised existing mechanisms to coordinate and communicate at the senior level, especially about strategy, but component officials identified opportunities to better connect PLCY and component staff to improve communication flow about emerging policy and strategy needs. Among the ideas offered by component officials to enhance communication and collaboration were holding routine small-group meetings, creating forums for periodic knowledge sharing, and maintaining accurate and up-to-date contact information for all staff-level stakeholders. What GAO Recommends GAO is making four recommendations. DHS concurred with three recommendations, including that DHS finalize a delegation of authority defining PLCY's roles and responsibilities and develop corresponding processes and procedures. DHS did not concur with a recommendation to apply the DHS Workforce Planning Guide to identify and communicate workforce needs. GAO believes this recommendation is valid as discussed in the report.
Background Federal and state Medicaid spending on long-term care continues to increase; for example it increased from $146 billion in 2013 to $158 billion in 2015. Individuals seeking long-term care generally need care that is, by definition, longer term in nature and more costly than other types of care. Spending on long-term care services provided in home and community settings, including assisted living facilities, exceeds the amount spent on institutional settings such as nursing homes. State Medicaid programs may cover certain medical and non-medical services that assisted living facilities provide; however, the Medicaid statute does not provide for coverage of room and board charges of an assisted living facility. In their federal-state partnership, both CMS and states play important roles in the oversight of Medicaid. CMS is responsible for oversight of state Medicaid programs. To conduct this oversight, CMS issues program requirements in the form of regulations and guidance, approves changes states make to their programs, provides technical assistance to states, collects and reviews required information and data from states and, in some cases, reviews individual state programs. States are responsible for the day-to-day administration of their Medicaid programs, including monitoring and oversight of the different HCBS programs through which they cover assisted living services, within broad federal rules and requirements. Each state is required to identify and designate a single state agency to administer or supervise the administration of its Medicaid program. The state Medicaid agency may partially or fully delegate the administration and oversight of the state’s HCBS programs to another state agency or other entity, such as a state unit on aging, a mental health department, or other state departments or agencies with jurisdiction over a specific population or service. However, the state Medicaid agency is ultimately accountable to the federal government for compliance with the HCBS requirements. Under different authorizing provisions of federal law, states have considerable flexibility to establish multiple HCBS programs including those covering assisted living services. A state Medicaid program can have multiple HCBS programs operating under different federal authorities. CMS is responsible for ensuring that states meet the requirements associated with their HCBS programs under these different authorities. Key to states’ monitoring of the health and welfare of Medicaid beneficiaries is their tracking of, and response to, incidents that may cause harm to a beneficiary’s health or welfare, such as abuse, neglect, or exploitation—commonly referred to as critical incidents. Such monitoring is required for most HCBS programs; however, we previously found that requirements for states related to oversight of the health and welfare of beneficiaries in different types of HCBS programs varied, and recommended that CMS take steps to harmonize those requirements across programs. The most common HCBS programs with the most stringent federal requirements are HCBS waiver programs. These programs serve beneficiaries who are eligible for an institutional level of care; that is, beneficiaries must have needs that rise to the level of care usually provided in a nursing facility, hospital, or other institution. CMS oversees states’ HCBS waiver programs specifically by reviewing and approving applications and reviewing HCBS program reports that states submit. 
HCBS waiver program applications include specific requirements implementing various statutory and regulatory provisions. (See text box below.) One requirement is that states have the necessary safeguards in place to protect the health and welfare of beneficiaries receiving services covered by HCBS waiver programs. For each of their HCBS waiver programs, states must demonstrate to CMS that they are meeting various requirements CMS has established regarding beneficiary health and welfare. The Six Requirements States Must Demonstrate for Home- and Community-Based Services Waiver Programs 1. Administrative authority: The Medicaid agency retains ultimate administrative authority and responsibility for the operation of the waiver program by exercising oversight of the performance of waiver functions by other state and local/regional non-state agencies (if appropriate) and contracted entities. 2. Level of care: The state demonstrates that it implements the processes and instrument(s) specified in its approved waiver for evaluating/re-evaluating an applicant's/waiver participant's level of care consistent with care provided in a hospital, nursing facility, or intermediate care facility. 3. Qualified providers: The state demonstrates that it has designed and implemented an adequate system for assuring that all waiver services are provided by qualified providers. 4. Service plan: The state demonstrates it has designed and implemented an effective system for reviewing the adequacy of service plans for the waiver participants. 5. Health and welfare: The state demonstrates it has designed and implemented an effective system for assuring waiver participant health and welfare. 6. Financial accountability: The state must demonstrate that it has designed and implemented an adequate system for ensuring financial accountability of the waiver program. CMS also provides ongoing oversight of state HCBS programs through annual reports that states must submit for each of their HCBS waiver programs, as well as through renewal reports submitted about two years before an HCBS waiver is scheduled to end. The state reports are intended to provide CMS with information on the operation of state HCBS waiver programs. In contrast to long-term care services provided in nursing facilities, less is known at the federal level about the oversight and quality of care in assisted living facilities. Generally, states establish their own licensing and oversight requirements for assisted living facilities. As a result, the requirements for assisted living facilities and the type and frequency of oversight can vary across states. In contrast, nursing homes must meet a comprehensive set of federal requirements, in addition to state requirements, in order to receive payment for long-term care services for Medicaid and Medicare beneficiaries. CMS contracts with state entities to regularly inspect nursing facilities and investigate complaints to assess whether nursing homes meet these federal quality requirements. Annually, CMS publishes a comprehensive report on nursing homes that serve Medicaid and Medicare beneficiaries, including the extent to which beneficiaries are at risk of harm, based on these investigations and inspections. In addition, CMS publicly reports a summary of each nursing home's quality data using a five-star quality rating based on health inspection results, staffing data, and quality measure data. The goal of this rating system is to help consumers make meaningful distinctions among high- and low-performing nursing homes.
This type of standardized framework for oversight, investigation and inspections, and reporting on quality of care concerns does not exist for assisted living facilities and other types of HCBS providers. States Reported Spending $10 Billion on More than 130 Programs Covering Assisted Living Services in 2014 Forty-Eight States Reported Spending $10 Billion on Assisted Living Services for More than 330,000 Medicaid Beneficiaries in 2014; Spending per Beneficiary Varied Widely by State Forty-eight state Medicaid agencies reported collectively spending about $10 billion in state and federal Medicaid funds for assisted living services in 2014, according to our survey. The other 3 states reported that they did not pay for assisted living services. We estimate that this spending for services provided by assisted living facilities represents 12.4 percent of the $80.6 billion Medicaid spent on HCBS in all settings that year. More than 330,000 Medicaid beneficiaries received assisted living services, based on data reported to us by the 48 states. Nationally, the average spending per beneficiary on assisted living services in the 48 states in 2014 was about $30,000; states provided these HCBS services through fee-for-service and managed care delivery models. Fee-for-service spending comprised 81 percent of total spending on assisted living services, and managed care spending was about 19 percent of the total. The cost per beneficiary reported by surveyed states also varied based on payment type; average per-beneficiary cost was $31,000 for fee-for-service and $27,000 for managed care. About 21 percent of Medicaid assisted living enrollment was for beneficiaries receiving these services under a managed care delivery model. (See table 1.) Average per-beneficiary spending varied significantly across the states. For example, for the nine states with the lowest spending per beneficiary, average Medicaid spending ranged from about $1,700 to about $9,500 per beneficiary. In contrast, in the nine states with the highest per-beneficiary spending, the average spending ranged from about $43,000 to $108,000 per beneficiary. (See figure 1.) For more information on each state's enrollment, total spending, and average per-beneficiary spending on assisted living services, see appendix I. Forty-Eight States Administered More than 130 Programs That Covered Assisted Living Services, Mainly under HCBS Waiver Authority The 48 states that reported covering assisted living services in 2014 said they did so through 132 different programs. The majority of the states, 31 of the 48, reported administering more than one program that covered assisted living services. As illustrated in table 2 below, of the different types of HCBS programs under which states can provide coverage for assisted living services, HCBS waivers were the most common type of program they used. Specifically, 39 states used HCBS waivers, and 69 percent of the programs that provided assisted living services were operated under HCBS waiver authority. (See appendix II for additional details on each state's number of programs by program type and total number of HCBS programs that covered assisted living facility services in 2014.) States Reported Offering Assisted Living Services to Certain Aged and Disabled Beneficiaries, and Most Reported Covering Common Services Almost all of the 48 states that covered assisted living services did so for two groups of Medicaid beneficiaries eligible through their programs.
In 45 of 48 states, aged beneficiaries received services provided by assisted living facilities. Similarly, in 43 of 48 states, physically disabled beneficiaries received services. (See figure 2.) Six types of services were each covered in at least 38 of the 48 states that covered assisted living services. For example, 45 states covered assistance with activities of daily living, such as bathing and dressing; 44 states covered medication administration; and 41 states covered coordination of meals. (See figure 3.) State Approaches for Overseeing Health and Welfare of Beneficiaries in Assisted Living Services Varied, Including Monitoring Incidents of Beneficiary Harm Oversight by State Medicaid Agencies Varied in the Functions Delegated to Other Agencies, the Information Used, and the Actions Taken to Correct Any Identified Problems State Medicaid agency approaches for oversight of assisted living services varied widely in terms of who provided the oversight for their largest programs, according to their responses to our survey. Thirteen of the 48 state Medicaid agencies reported delegating administrative responsibilities, including oversight of beneficiary health and welfare, to other state or local agencies. State Medicaid agencies may delegate the administration of programs to government or other agencies through a written agreement; however, state Medicaid agencies retain the ultimate oversight responsibility for those delegated functions. For example, among the 13 states that delegated HCBS program administration, the administering agencies were those that provided services to the aged, disabled, or both of these populations, such as the states' Departments of Aging. (See text box, below, for examples of states' delegation.) Examples of State Medicaid Agencies' Delegation of Authority for Administration of Home- and Community-based Services' Programs Covering Assisted Living Services Georgia's Elderly & Disabled Waiver Program was operated in 2014 by the Georgia Department of Human Services Division of Aging Services, a separate agency of the state that was not a division/unit of the Medicaid agency. The Georgia Medicaid Agency maintained a formal interagency agreement with the Division of Aging Services, which described, by function, the required deliverables to support compliance and a schedule for delivery of reports. Nebraska's Waiver for Aged and Adults and Children with Disabilities is operated by the state Medicaid agency, the Division of Medicaid and Long Term Care. The majority of services are provided by independent contractors in order to allow service delivery in the rural and frontier areas of the state. The state Medicaid agency contracts with the Area Agencies on Aging, Independent Living Centers, and Early Development Network agencies to perform a variety of operational and administrative functions, including authorizing services and monitoring the delivery of services. States also varied in the types of information they reported reviewing as part of the oversight of assisted living services, and in the extent to which state Medicaid agencies reviewed the information when another agency was responsible for administration. For example, other entities outside the state Medicaid agency—such as the agency delegated to administer an HCBS program, or a contractor that manages provider enrollment—may check to ensure a provider is allowed to deliver services to Medicaid beneficiaries; in such cases, however, the state Medicaid agency might not be aware of the results of such checks.
As illustrated in table 3, in all 48 states the types of information generally reviewed by either the state Medicaid agency, the agency to which administrative responsibilities were delegated, or other agencies were: critical incident reports, the HHS Office of Inspector General's list of excluded providers, patient service plans, and information on concerns about care received directly from patients, relatives, caregivers, or the assisted living facility itself. In many cases, the state Medicaid agency did not review all information sources reviewed by other agencies. For example, although all critical incident reports were reviewed in the 48 states by either the state Medicaid agency, the agency to which administrative responsibilities were delegated, or another agency, in 16 of those states the state Medicaid agency was not involved in those reviews, according to responses to our survey. Instead, the critical incident reports were reviewed by another entity designated responsible for the HCBS program in the state or another state entity with regulatory responsibility over the assisted living facility. Such reviews, including any critical incidents found, may not have been communicated back to the state Medicaid agency, according to responses to our survey. State Medicaid agencies also varied in the extent to which they reported being made aware of, or notified about, enforcement actions taken as a result of concerns with beneficiary care identified by other entities. Various oversight actions may be taken by the state Medicaid agency, the agency delegated to administer an HCBS program, or a state regulatory agency, such as a state agency responsible for licensing and inspecting various types of HCBS providers. When delegated agencies or other licensing agencies take corrective action, the state Medicaid agency may not be aware unless notified by the agencies taking that action. For example, in 23 states, the investigation of potential incidents related to beneficiary health and welfare was delegated to another agency, but in only 6 of these states was the state Medicaid agency always notified of such an investigation, according to our survey. (See table 4 and text box below.) Example of a Collaborative Approach to Monitoring and Ensuring Quality Care Specifically for Assisted Living Facilities In 2009, the Wisconsin Coalition for Collaborative Excellence in Assisted Living was formed to redesign the way quality is ensured and improved for individuals residing in assisted living communities. This public/private coalition uses a collective impact model that brings together the state, the industry, the consumer, and academia to identify and implement agreed-upon approaches designed to improve the outcomes of individuals living in Wisconsin assisted living communities. The core of the coalition's work is the implementation of an association-developed, department-approved comprehensive quality assurance and quality improvement program. State Medicaid Agencies Varied in How They Monitored Incidents of Potential or Actual Harm to Medicaid Beneficiaries Receiving Assisted Living Services For their largest HCBS programs that covered assisted living services, the 48 states varied in how they monitored "critical incidents" that caused actual or potential harm to Medicaid beneficiaries in assisted living facilities.
Specifically, the 48 states varied in their ability to report the number of critical incidents, how they defined incidents, and the extent to which they made information on such incidents readily available to the public. These states varied in whether they could provide us with the number of critical incidents involving beneficiaries for their largest programs covering assisted living services, and among those that could report, the number of incidents varied widely. In 26 of the 48 states, the Medicaid agencies were unable to report, for their largest program covering assisted living services, the number of critical incidents that had occurred in assisted living facilities in 2014. The remaining 22 states reported a total of 22,921 critical incidents involving Medicaid beneficiaries in their largest programs covering assisted living services. The number of critical incidents reported in these states ranged from 1 to 8,900. For six of these states, the number of critical incidents reported was more than 1,000. (See text box below for examples of selected state processes for managing critical incidents.) Selected States' Processes for Managing Beneficiary Harm or Potential Harm in Assisted Living Facilities Georgia: According to state officials, in 2014 there was no centralized or comprehensive system for capturing and tracking the data on actual and potential violations. State officials acknowledged that the lack of a centralized system prevents the Division of Community Health from tracking the status of each problem. Nebraska: According to state officials, Nebraska's Adult Protective Services operates an electronic system that coordinates across state social service programs. When Adult Protective Services initiates an investigation of reported harm to an assisted living resident, the state Medicaid agency is automatically notified. Reasons state Medicaid agencies reported for being unable to provide us with the number of critical incidents included limitations in the data or data systems for tracking them. Nine states reported an inability to track incidents by provider type, and thus could not distinguish critical incidents in assisted living facilities from those involving other providers of home- and community-based services. States also cited the lack of a system to collect critical incidents (9 states) and a reporting system that could not identify whether a resident was a Medicaid beneficiary (5 states). Even among the 32 states where the state Medicaid agencies reported reviewing information about critical incidents, 20 were unable to provide the actual number of critical incidents that occurred in assisted living facilities. State Medicaid agencies' definitions of critical incidents also varied. As illustrated in figure 4, all 48 states cited physical assault, emotional abuse, and sexual assault or abuse as critical incidents in their largest programs providing assisted living services in 2014. However, for other types of incidents, several states did not identify the incident as critical, including discharge and eviction from the facility (not a critical incident in 24 states), medication errors (not a critical incident in 7 states), and unauthorized use of seclusion (not a critical incident in 6 states). For other serious incidents, a relatively small number of states did not identify the incident as critical, such as unexplained death (not a critical incident in 3 states) and missing beneficiaries (not a critical incident in 2 states).
See appendix IV for a full list of the beneficiary-related incidents and the number of states that identify each as critical. Although half of the 48 states that covered assisted living services did not consider discharges or evictions to be critical incidents, according to state responses to our survey, 42 states offered certain protections related to involuntary discharge of Medicaid residents who live in assisted living facilities. The majority of protections consisted of a lease agreement requirement that applied to other housing contracts in the state, such as providing residents with eviction notices. Other protections included an appeals process (10 states) and a requirement for the facility to find an alternative location for the resident (10 states). State Medicaid agencies also varied in whether they made information on critical incidents and other key information readily available to the public. (See table 5.) Beneficiaries seeking care in an assisted living facility may want to know the number of critical incidents related to a particular facility. Through our survey, we found that states differed in the health and welfare information they made available to the public. For example, 34 of the 48 states reported that they made critical incident information available to the public by phone, website, or in person, and the remaining 14 states did not have such information available at all. Although all 48 states had information in some form on which assisted living facilities accepted Medicaid beneficiaries, 8 states could not provide this information by phone and 22 states could not provide the information in person. CMS Has Taken Steps to Improve Oversight of the Health and Welfare of Medicaid Beneficiaries in Assisted Living and Other Community Settings, but Gaps Remain In recent years, CMS has taken steps to improve oversight of beneficiary health and welfare in HCBS programs by adding new HCBS waiver application requirements for state monitoring of beneficiary health and welfare. CMS requires state waiver applications to include specific requirements that implement various statutory and regulatory provisions, including a provision that states assure that they will safeguard the health and welfare of Medicaid beneficiaries. In March 2014, CMS added unexplained death to the events that states must be able to identify and address on an ongoing basis, as part of their efforts to prevent instances of abuse, neglect, and exploitation, and added four new requirements for states to protect beneficiary health and welfare. (See table 6.) In its guidance implementing the 2014 requirements, CMS noted that state associations and state representatives' work groups had agreed that "health and welfare is one of the most important assurances to track, and requires more extensive tracking to benefit the individuals receiving services, for instance by using data to prevent future incidents." As a condition for approval of their HCBS waiver applications, for each of the requirements states must identify and agree with CMS on the type of information they will collect to provide as evidence that they will meet the requirements. However, according to CMS officials, each state Medicaid agency has wide discretion over the information it will collect and report to demonstrate that it is meeting the health and welfare requirements and protecting beneficiaries.
Although CMS added the additional requirements in 2014 for safeguarding beneficiary health and welfare, the agency generally did not change requirements for how it oversees state monitoring efforts once HCBS waivers are approved. We found a number of limitations in CMS's oversight of approved HCBS waivers that undermine the agency's ability to effectively monitor state oversight of HCBS waivers. These limitations include: unclear guidance on what states should identify and report annually related to any identified program deficiencies; lack of requirements on states to regularly provide CMS information on critical incidents; and CMS's inconsistent enforcement of the requirement that states submit annual reports. Unclear guidance on what states should identify and report annually related to any identified program deficiencies. Federal law requires states to provide CMS with information annually on an HCBS waiver's impact on (1) the type, amount, and cost of services provided and (2) the health and welfare of Medicaid beneficiaries receiving waiver services. CMS reporting requirements give states latitude to determine what to report as health and welfare deficiencies found through state monitoring of their HCBS programs. With respect to health and welfare, CMS's State Medicaid Manual directs states, when preparing their annual reports, to "check the appropriate boxes regarding the impact of the waiver on the health and welfare" of beneficiaries and to describe relevant information. States are required to provide a brief description of the state process for monitoring beneficiary safeguards, use check boxes to indicate that beneficiary health and welfare safeguards have been met, and identify whether deficiencies were detected during the monitoring process. If states determine that deficiencies were identified through monitoring, states are required to "provide a summary of the significant areas where deficiencies were detected" and an explanation of the actions taken to address deficiencies and ensure the deficiencies do not recur. CMS's written instructions for completing the HCBS annual report do not provide further guidance regarding reporting of deficiencies. For example, the reporting instructions do not describe or identify (1) what states are supposed to report as deficiencies, (2) how they are to identify which deficiencies are most significant, and (3) the extent to which states need to explain the steps taken to ensure that deficiencies do not recur. The lack of clarity is inconsistent with federal internal control standards, in particular, the need for federal agencies to have processes that identify information needed to achieve objectives and address risk. Without clear instructions as to what states must report, states' annual reports may not identify deficiencies with states' HCBS waiver programs that may affect the health and welfare of beneficiaries. States may determine that issues or problems they identified through monitoring do not represent reportable deficiencies and therefore may not report those deficiencies to CMS, increasing the risk that problems are not elevated to CMS's attention. In the case of one of the selected states we reviewed, no problems were included on the annual reports submitted to CMS between 2011 and 2015. However, when CMS completed its review in the fourth year of the state's waiver—for purposes of renewing the waiver—it determined the state was not assuring beneficiary health and welfare.
CMS found that the information the state submitted for purposes of renewal suggested a "pervasive failure" by the state to assure the health and welfare of beneficiaries receiving services, including assisted living services. In particular, CMS noted the state provided insufficient information regarding the number of unexpected or suspicious beneficiary deaths. CMS concluded that the state failed to demonstrate that it had effective systems and processes for ensuring the health and welfare of beneficiaries. Lack of requirements on states to annually provide CMS information on critical incidents. Despite the importance of state critical incident management and reporting systems to protecting the health and welfare of beneficiaries, CMS lacks written requirements that states provide information needed for the agency's oversight of state monitoring of critical incidents. According to CMS, a critical element of effective state oversight is the operation of data systems that support the identification of trends and patterns in the occurrence of critical incidents to identify needed improvements. Such a system is also consistent with federal internal control standards, which specify, in particular, the need for federal agencies to have processes that identify information needed to achieve objectives and address risk. CMS requires states to operate a critical incident reporting system. On their waiver applications, states must check a box indicating they operate a system and must also describe their system—including who must report and when, and what must be reported. Despite this requirement for states to have critical incident reporting systems, CMS does not require states to report to CMS any data from these systems on critical incidents as part of their required annual reports. Specifically, states are not required to include, in their annual reports, the number of critical incidents reported or substantiated that involve Medicaid beneficiaries. As a result, CMS does not have a method to confirm what states describe about their critical incident management systems (a required component of states' waiver applications) or to assess the capabilities of states' systems. For example, CMS cannot confirm whether the state systems can report incidents by location or type of residential provider, such as assisted living facilities; the type and severity of critical incidents that occurred; and the number of incidents that involved Medicaid beneficiaries. Without annual critical incident reporting, CMS may be at risk of (1) not having adequate evidence that states are meeting CMS requirements to have an effective critical incident management and reporting system and (2) being unaware of problems with states' abilities to identify, track, and address critical incidents involving Medicaid beneficiaries. Our prior work has shown that the lack of explicit reporting requirements on critical incidents not only affects HCBS waiver programs but also affects other types of Medicaid long-term services programs. Specifically, in a November 2016 report, we found that CMS requirements for states to report on their critical incident monitoring systems for the HCBS waiver program were more stringent than those for other types of HCBS programs, potentially leaving those other programs at even greater risk. We recommended that CMS take steps to harmonize requirements across different types of HCBS programs.
HHS concurred with the recommendation, stating that it would seek input from states, stakeholders, and the public regarding harmonizing requirements across programs. In an August 2017 report, we found similar issues in critical incident reporting requirements for other types of long-term services programs, particularly those used to provide HCBS and other long-term services under managed care. We found that CMS was not always requiring states that contracted with managed care organizations to provide long-term services and supports to report to CMS sufficient information on critical incidents and other key areas needed to monitor beneficiary access and quality. We recommended that CMS take steps to identify and obtain key information needed to better oversee states' efforts to monitor beneficiary access to quality services in their managed long-term services and supports programs. HHS concurred with this recommendation and stated that the agency would take this recommendation into account as part of an ongoing review of its 2016 Medicaid managed care rule. We continue to believe that the implementation of our prior recommendations is needed to help improve CMS oversight of states' monitoring of beneficiary safety. CMS's inconsistent enforcement of the requirement that states submit annual reports. States must prepare and submit an annual report for each HCBS waiver as a condition of waiver approval. According to CMS guidance, the agency's review of the annual report is part of the ongoing oversight of HCBS waiver programs, and not submitting an annual report jeopardizes a state's renewal of its HCBS waiver programs. However, some states have not been timely in submitting the required annual reports for their HCBS waivers. A review of 2013 HCBS annual reports by a CMS contractor, published in 2016, found that annual reports were missing for 29 HCBS waivers and that multiple years of annual reports were missing for 8 waivers. In 2014, CMS adopted new strategies to ensure compliance with HCBS waiver requirements, including the requirement that states submit annual reports on a timely basis. These strategies include withholding federal funding, placing a moratorium on enrollment in the waiver, or taking other actions the agency determines necessary. CMS officials reported that the agency had not used these new strategies with states that were delinquent in submitting their annual reports. Officials said they were in the process of reviewing how to implement these new strategies in the case of one state; however, as of August 2017, officials had not finalized a decision. CMS's ability to provide effective oversight of state programs and protect beneficiary health and welfare is undermined when the agency does not enforce the annual reporting requirement and does not receive the required annual waiver reports. Conclusions Effective state and federal oversight is necessary to ensure that the health and welfare of Medicaid beneficiaries receiving assisted living services are protected, especially given the particular vulnerability of many of these beneficiaries to abuse, neglect, or exploitation. CMS has taken steps to strengthen beneficiary health and welfare protections in states' HCBS waiver programs, the most common type of program that covers assisted living services and one that serves the most vulnerable beneficiaries.
In particular, CMS now has multiple requirements for states to safeguard beneficiaries' health and welfare, including requirements to operate an effective critical incident management and reporting system to identify, investigate, and address incidents of beneficiary abuse, neglect, exploitation, and unexplained death. However, CMS's ability to effectively monitor how well states are assuring beneficiary health and welfare is limited by gaps in state reporting to CMS. CMS has not provided clear guidance to states on what information to include in annual reports on deficiencies they identify. As a result, CMS lacks assurance that it is receiving consistent, complete, and relevant information on deficiencies that is needed to oversee beneficiary health and welfare. The lack of clear guidance on the reporting of deficiencies may result in delayed recognition of problems that may affect beneficiary health and welfare. Further, for years, states have been required to check a box attesting that they operate a critical incident management system, but have not always been required to report information on incidents of potential or actual harm to beneficiaries. Given the increasing prevalence of assisted living facilities as a provider of services to Medicaid beneficiaries, it is unclear why more than half of the states responding to our survey could not provide us with information on the number of critical incidents that occurred in these facilities in their states. Reporting data from their critical incident systems, such as the number of incidents, the type and severity of the incidents, or the location or type of facility in which the incident occurred, would provide evidence that an effective system is in place, provide information on the extent to which beneficiaries are subject to actual or potential harm, and allow for tracking trends over time. Finally, CMS has not ensured that all states submit annual reports on their HCBS waiver programs as required. Without improvements to state reporting, CMS cannot ensure states are meeting their commitments to protect the health and welfare of Medicaid beneficiaries receiving assisted living services, potentially jeopardizing their care. Recommendations for Executive Action We are making the following three recommendations to CMS: The Administrator of CMS should provide guidance and clarify requirements regarding the monitoring and reporting of deficiencies that states using HCBS waivers are required to report on their annual reports. (Recommendation 1) The Administrator of CMS should establish standard Medicaid reporting requirements for all states to annually report key information on critical incidents, considering, at a minimum, the type of critical incidents involving Medicaid beneficiaries, and the type of residential facilities, including assisted living facilities, where critical incidents occurred. (Recommendation 2) The Administrator of CMS should ensure that all states submit annual reports for HCBS waivers on time as required. (Recommendation 3) Agency Comments and Our Evaluation We provided a draft of this report to HHS for review and comment. HHS provided written comments, which are reproduced in appendix V. The department also provided technical comments, which we incorporated as appropriate. In its written comments, the department concurred with two of our three recommendations, specifically, that CMS will clarify requirements for state reporting of program deficiencies and ensure that all states submit required annual reports on time.
HHS did not explicitly agree or disagree with our remaining recommendation, to require all states to report information on critical incidents to CMS annually. The department noted that it has established a workgroup to learn more about states' health and welfare systems and that it will use the results of this workgroup to determine which additional reporting requirements would be beneficial. The workgroup's review will continue through calendar year 2018. In technical comments, HHS indicated that after the workgroup's review is complete it will consider annual reporting of critical incidents. We believe establishing the workgroup is a positive first step towards improving oversight and state reporting and encourage HHS to require annual reporting on critical incidents when developing additional reporting requirements. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of CMS, the Administrator of the Administration for Community Living, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or at iritanik@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Appendix I: State Reported Enrollment and Spending on Assisted Living Services (state-by-state table not reproduced here) Appendix II: State Reported Home- and Community-Based Services (HCBS) Programs Covering Assisted Living Services (state-by-state table not reproduced here) Appendix III: Information Regarding Medicaid Beneficiaries' Access to Assisted Living Services Our survey of state Medicaid agencies regarding coverage, spending, enrollment, and oversight of assisted living services in 2014 obtained information on challenges for Medicaid beneficiaries in accessing assisted living services in their states. States provided information related to factors that create challenges for Medicaid beneficiaries' ability to access and receive assisted living services and the extent to which states had policies to help beneficiaries with the cost of room and board. A number of states in our survey cited common factors as creating the greatest challenges to a beneficiary's ability to access assisted living services, including the number of assisted living facilities willing to accept Medicaid beneficiaries (13 states, or 27 percent of the 48 states); program enrollment caps (9 states, or 19 percent); beneficiaries' inability to pay for assisted living facility room and board, which Medicaid typically does not cover (9 states, or 19 percent); and the low rates the state Medicaid program paid assisted living facilities (8 states, or 17 percent). A number of states reported that they had policies to assist Medicaid beneficiaries with the costs of room and board charged by assisted living facilities, which Medicaid does not typically cover.
Two common policies, cited by at least half of the states, were aimed at limiting how much assisted living facilities could charge Medicaid beneficiaries for room and board. For example, 30 of 48 states limited the amount facilities could charge for room and board to the amount of income certain beneficiaries receive as Supplemental Security Income. The other commonly cited policies focused on providing financial assistance to the beneficiaries to defray the room and board costs. (See table 9.) Appendix IV: Events That States Defined as Critical Incidents Appendix V: Comments from the Department of Health and Human Services Appendix VI: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Tim Bushfield and Christine Brudevold (Assistant Directors), Jennie Apter, Shirin Hormozi, Anne Hopewell, Kelsey Kreider, Perry Parsons, Vikki Porter, and Jennifer Whitworth made key contributions to this report.
Why GAO Did This Study The number of individuals receiving long-term care services from Medicaid in community residential settings is expected to grow. These settings, which include assisted living facilities, provide a range of services that allow aged and disabled beneficiaries, who might otherwise require nursing home care, to remain in the community. State Medicaid programs and CMS, the federal agency responsible for overseeing the state programs, share responsibility for ensuring that beneficiaries' health and welfare is protected. GAO was asked to examine state and federal oversight of assisted living services in Medicaid. This report (1) describes state spending on and coverage of these services, (2) describes how state Medicaid agencies oversee the health and welfare of beneficiaries in these settings, and (3) examines the extent to which CMS oversees state Medicaid agency monitoring of assisted living services. GAO surveyed all state Medicaid agencies and interviewed officials in a nongeneralizable sample of three states with varied oversight processes for their assisted living programs. GAO reviewed regulations and guidance, and interviewed CMS officials. What GAO Found State Medicaid agencies in 48 states that covered assisted living services reported spending more than $10 billion (federal and state) on assisted living services in 2014. These 48 states reported covering these services for more than 330,000 beneficiaries through more than 130 different programs. Most programs were operated under Medicaid waivers that allow states to target certain populations, limit enrollment, or restrict services to certain geographic areas. With respect to oversight of their largest assisted living programs, state Medicaid agencies reported varied approaches to overseeing beneficiary health and welfare, particularly in how they monitored critical incidents involving beneficiaries receiving assisted living services. State Medicaid agencies are required to protect beneficiary health and welfare and operate systems to monitor for critical incidents—cases of potential or actual harm to beneficiaries such as abuse, neglect, or exploitation. Twenty-six state Medicaid agencies could not report to GAO the number of critical incidents that occurred in assisted living facilities, citing reasons including the inability to track incidents by provider type (9 states), lack of a system to collect critical incidents (9 states), and lack of a system that could identify Medicaid beneficiaries (5 states). State Medicaid agencies varied in what types of critical incidents they monitored. All states identified physical, emotional, or sexual abuse as a critical incident. A number of states did not identify other incidents that may indicate potential harm or neglect, such as medication errors (7 states) and unexplained death (3 states). State Medicaid agencies varied in whether they made information on critical incidents and other key information available to the public. Thirty-four states made critical incident information available to the public by phone, website, or in person, while another 14 states did not have such information available at all. Oversight of state monitoring of assisted living services by the Centers for Medicare & Medicaid Services (CMS), an agency within the Department of Health and Human Services (HHS), is limited by gaps in state reporting.
States are required to annually report to CMS information on deficiencies affecting beneficiary health and welfare for the most common type of program used to provide assisted living services. However, states have latitude in what they consider a deficiency. States also must describe their systems for monitoring critical incidents, but CMS does not require states to annually report data from their systems. Under federal internal control standards, agencies should have processes to identify information needed to achieve objectives and address risk. Without clear guidance on reportable deficiencies and no requirement to report critical incidents, CMS may be unaware of problems. For example, CMS found, after an in-depth review in one selected state seeking to renew its program, that the state lacked an effective system for assuring beneficiary health and welfare; among other things, the state had reported insufficient information on the number of unexpected or suspicious beneficiary deaths. The state had not reported any deficiencies in annual reports submitted to CMS in 5 prior years. What GAO Recommends GAO recommendations to CMS include clarifying state requirements for reporting program deficiencies and requiring annual reporting of critical incidents. HHS concurred with GAO's recommendation to clarify deficiency reporting and stated that it would consider annual reporting requirements for critical incidents after completing an ongoing review.
gao_GAO-18-245
gao_GAO-18-245_0
Background Federal Banking Supervision The purpose of federal banking supervision is to help ensure that banks throughout the financial system are operating in a safe and sound manner and are complying with banking laws and regulations in the provision of financial services. Banks in the United States are supervised by one of the following three federal regulators: FDIC supervises all FDIC-insured state-chartered banks that are not members of the Federal Reserve System, as well as insured state savings associations and insured state-chartered branches of foreign banks. The Federal Reserve supervises commercial banks that are state-chartered and members of the Federal Reserve System. OCC supervises federally chartered national banks and savings associations (also known as federal thrifts). FDIC, the Federal Reserve, and OCC are required to conduct a full-scope, on-site examination of each of their supervised banks at least once during each 12-month period. The regulators may extend the examination interval to 18 months, generally for banks and thrifts that have less than $1 billion in total assets and that meet certain conditions, such as having satisfactory ratings, being well capitalized, and not being subject to a formal enforcement action. As part of a full-scope examination, examiners review a bank's risk exposure within a number of components using the Uniform Financial Institutions Rating System, which also is referred to as the CAMELS rating system (capital adequacy, asset quality, management, earnings, liquidity, and sensitivity to market risk). Evaluations of CAMELS components consider a bank's size and sophistication, the nature and complexity of its activities, and its risk profile. The end result of a full-scope, on-site examination is a report of examination, which includes the CAMELS ratings and other findings and is provided to the bank's management and board of directors. A report of examination may include deficiencies or other issues that examiners found and that a bank is expected to address within specific time frames. Such issues generally are called supervisory recommendations by FDIC, supervisory findings by the Federal Reserve, and supervisory concerns by OCC. For purposes of this report, we collectively refer to such issues as supervisory concerns. Supervisory concerns may be designed to correct practices that deviate from sound risk management principles or that do not comply with laws and regulations. Supervisory concerns that involve more significant issues are brought to the attention of a bank's board of directors and senior management in the report of examination as matters requiring immediate attention (MRIA) or matters requiring attention (MRA) under the Federal Reserve's policies, matters requiring board attention (MRBA) under FDIC's policies, and MRAs under OCC's policies. If a bank were to fail to address a supervisory concern, its regulator could subject the bank to enhanced supervision, downgrade of a component or composite rating, or other supervisory actions, such as informal or formal enforcement actions. CRE Lending and Associated Risks Under their 2006 guidance, regulators define CRE loans to include construction loans, loans to finance CRE that are not secured by CRE, loans secured by multifamily property, and loans secured by nonfarm, nonresidential property in which the primary source of repayment derives from the rental income associated with the property or the proceeds of the sale, refinancing, or permanent financing of the property.
CRE loans in which the primary source of repayment is not the property itself are called owner-occupied loans and can include loans to businesses for working capital purposes that use real estate as collateral. For example, a line of credit for a business's operating expenses might be secured in part by commercial property, such as an office. Construction and land development (CLD) loans generally are considered to be the riskiest class of CRE, due to their long development times and because they can include properties (such as housing developments or retail space in a shopping mall) that are built before having firm commitments from buyers or lessees. In addition, by the time the construction phase is completed, market demand may have fallen, putting downward pressure on sales prices or rents—making this type of loan riskier. Regulatory Guidance on CRE Concentrations and Risk Management Practices Based on concerns about the increase in CRE concentrations at community banks and the risks associated with such concentrations, FDIC, the Federal Reserve, and OCC jointly issued guidance in December 2006 on CRE concentrations and sound risk management practices. The guidance described the regulators' expectations for sound risk management practices for banks with concentrations in CRE loans. Specifically, the guidance identified seven key elements, or internal control areas, that a bank's risk management practices should address to identify, monitor, and control its CRE concentration risk (see fig. 1). The 2006 CRE guidance also sets forth three criteria to identify banks with CRE loan concentrations that could be subject to greater supervisory scrutiny. According to the guidance, a bank that has experienced rapid growth in CRE lending, has notable exposure to a specific type of CRE, or is approaching or exceeds the following supervisory criteria may be identified for further supervisory analysis of the level and nature of its CRE concentration risk: CLD concentration threshold: CLD loans represent 100 percent or more of a bank's total capital. Total CRE concentration threshold: Total nonowner-occupied CRE loans (including CLD loans) represent 300 percent or more of a bank's total capital and total CRE lending increased by 50 percent or more during the previous 36 months. According to the guidance, the CLD and CRE thresholds do not constitute limits on a bank's CRE lending activity but rather serve as high-level indicators to identify banks potentially exposed to CRE concentration risk. In 2011, we reported on how the federal banking regulators had responded to the potential risks of growing CRE concentrations at community banks, including by jointly issuing the 2006 CRE concentration guidance. We recommended that the regulators enhance or supplement the 2006 CRE guidance and take steps to better ensure that such guidance is consistently applied. The regulators have taken steps to address our recommendation. Out of the approximately 5,900 banks that had a CRE loan portfolio as of the end of June 2017, a total of 504 banks exceeded either 100 percent in CLD loans as a percentage of total risk-based capital, or 300 percent in CRE loans as a percentage of total risk-based capital combined with 50 percent or more CRE portfolio growth during the previous 36 months. Of these 504 banks, a total of 69 banks exceeded both the CLD criterion and the total CRE criterion (including the growth component).
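To make the two screening thresholds concrete, the following minimal sketch checks a bank's ratios against the 2006 criteria. It is illustrative only: the function name, input names, and sample dollar amounts are hypothetical and are not drawn from the regulators' guidance or examination tools.

```python
def check_cre_concentration(cld_loans: float,
                            total_cre_loans: float,
                            total_risk_based_capital: float,
                            total_cre_loans_36_months_ago: float) -> dict:
    """Flag whether a bank meets the 2006 guidance's supervisory criteria.

    Per the guidance, these thresholds are indicators for further supervisory
    analysis, not limits on CRE lending. Inputs are dollar amounts, and
    total_cre_loans means total nonowner-occupied CRE loans, including CLD loans.
    """
    cld_ratio = cld_loans / total_risk_based_capital
    cre_ratio = total_cre_loans / total_risk_based_capital
    cre_growth = (total_cre_loans - total_cre_loans_36_months_ago) / total_cre_loans_36_months_ago

    return {
        # CLD loans at 100 percent or more of capital
        "cld_threshold": cld_ratio >= 1.00,
        # Total CRE at 300 percent or more of capital AND growth of
        # 50 percent or more over the previous 36 months
        "total_cre_threshold": cre_ratio >= 3.00 and cre_growth >= 0.50,
    }


# Hypothetical bank: $120 million in CLD loans, $350 million in total CRE loans,
# $100 million in total risk-based capital, and $200 million in CRE 36 months ago.
print(check_cre_concentration(120e6, 350e6, 100e6, 200e6))
# -> {'cld_threshold': True, 'total_cre_threshold': True}
```

In this hypothetical case the bank would meet both criteria (CLD at 120 percent of capital; total CRE at 350 percent of capital with 75 percent growth) and, under the guidance, could be identified for further supervisory analysis of its concentration risk.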
In December 2015, federal banking regulators issued a joint statement to remind banks of the 2006 regulatory guidance on prudent risk management practices for CRE lending activity through economic cycles. The regulators noted, among other trends, that many banks’ CRE concentration levels had been rising. According to the statement, regulators would continue to pay special attention to potential risks associated with CRE lending during 2016. Specifically, the regulators stated that when conducting examinations that include a review of CRE lending activities, they would focus on banks’ implementation of the prudent principles in the 2006 CRE guidance and other applicable guidance relative to identifying, measuring, monitoring, and managing concentration risk in CRE lending activities. According to officials from FDIC, the Federal Reserve, and OCC, their agencies use a variety of formal and informal processes to monitor the condition of banks and identify risks, including CRE concentration risk. For example, The Federal Reserve has a National Risk Council and FDIC and OCC have National Risk Committees that meet routinely to identify and evaluate risks facing banks and are supported by a number of other committees or other groups. FDIC officials told us that analysis done by FDIC’s Regional Risk Committees identified growth in CRE concentrations in 2015 and brought the issue to the National Risk Committee’s attention. OCC began actively monitoring CRE loan growth in the middle of 2014 and began focusing on CRE concentration risk management during bank examinations in 2015. OCC officials also stated that CRE concentration risk has been a key risk issue for the agency’s National Risk Committee since early 2016. Federal Reserve officials told us that the agency, including the Federal Reserve banks, began to monitor bank CRE concentrations more closely around mid-2013 after identifying an increase in CRE concentrations. According to FDIC, Federal Reserve, and OCC officials, they met together in early 2015 to discuss CRE lending growth and the rise in bank CRE loan concentrations and held subsequent meetings throughout the year, in part to discuss policy options for helping to ensure that banks were appropriately managing their CRE concentration risks. One of the outcomes of such interagency coordination was the December 2015 joint statement on CRE concentrations. Credit and Other Risks Related to Bank CRE Lending Have Increased over the Past Several Years Although the CRE sector has recovered since the 2007–2009 financial crisis, our trend and econometric analyses generally indicate that credit and other risks related to bank CRE lending have increased over the past several years. Based on indicators of CRE market conditions and loan performance, the CRE sector has recovered from the 2007–2009 financial crisis. For example, spending on CRE construction projects—a source of demand for bank financing—has rebounded. Vacancy rates for apartments, office buildings, and other CRE properties have declined. Similarly, as shown in figure 2, delinquency and charge-off rates on bank CRE loans have fallen from their post-crisis peaks and are at or below their lowest levels since 2002. Although these rates provide information on the current performance of bank CRE loans, they provide little or no information about potential future risks faced by banks. 
For example, high-risk loans made to less creditworthy borrowers could perform well when property markets and the economy are doing well but may perform poorly when property markets or the economy begin to slow. At the same time, our analyses of other market, underwriting, and lending data and forecasts from predictive econometric models we developed suggest that banks' credit and concentration risks related to their CRE lending have increased. As shown in figure 3, according to a Federal Reserve survey, banks lowered their CRE loan underwriting standards—terms and conditions under which banks extend loans—after the financial crisis, but more banks have begun to tighten their underwriting standards since late 2015. In general, tightening underwriting standards may indicate that loan officers are reevaluating the degree of risk in CRE markets served by banks. According to Federal Reserve data, a larger share of banks has tightened underwriting standards on multifamily properties, such as apartments. CRE property prices, particularly for multifamily properties, have increased rapidly in recent years, and CRE property valuations have similarly increased. For example, as shown in figure 4, capitalization rates (the ratio of income generated by a property to the property's price) on CRE properties have trended downward since around 2010—indicating that borrowers (i.e., property owners) may be earning less of a return on their CRE properties. Capitalization rates can be indicative of expected future price changes—for example, low capitalization rates may reflect expectations of future price increases, but can also be driven by investor sentiment not associated with fundamental aspects of properties. In addition, as shown in figure 5, the number of banks with concentrated portfolios in CLD or total CRE loans has been gradually increasing since around 2014. Greater concentrations in a particular lending sector (e.g., commercial real estate, residential real estate, or business lending) leave banks more vulnerable to a sectoral downturn, all else equal. To further assess risk in bank CRE lending, we developed and estimated several predictive models of aggregate losses on bank CRE loans. The models incorporate measures of CRE property prices, bank lending, and underwriting standards. The models generally found that, historically, higher future losses are predicted when CRE lending and prices are simultaneously high relative to gross domestic product, and when banks are tightening underwriting standards. Based largely on the simultaneous increase in bank CRE lending and CRE prices observed over the last several years, these models suggest that credit risk has increased, though it remains lower than the level of risk associated with the 2007–2009 financial crisis. (A simplified sketch illustrating these indicators appears below.) As we noted earlier, high property valuations and substantial increases in lending can simultaneously weaken collateral protections and indicate lower borrower quality, both of which can raise the risk of losses should the economy or CRE sector weaken. (See app. II for additional information on our models.) Regulators Examined Risk Management Practices of Banks with CRE Concentrations We found that regulators generally subjected banks with relatively high concentrations in CRE loans to greater supervisory scrutiny in comparison to banks with relatively lower concentrations in CRE loans in our review of 54 examinations for 40 banks conducted from 2013 through 2016.
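Before turning to the examination results, the sketch referenced above is shown here. The actual loss models are econometric and are described in app. II of the report; this is only a minimal illustration, with hypothetical inputs and arbitrary weights, of the capitalization-rate definition and of the intuition that risk tends to rise when CRE lending and prices expand together and when more banks tighten underwriting standards. The function and variable names are not drawn from GAO's or the regulators' materials.

```python
# Illustrative only, not GAO's econometric model. All inputs and weights are hypothetical.

def cap_rate(net_operating_income: float, property_price: float) -> float:
    """Capitalization rate: income generated by a property divided by its price."""
    return net_operating_income / property_price


def toy_cre_risk_signal(cre_lending_to_gdp_growth: float,
                        cre_price_growth: float,
                        net_share_tightening: float) -> float:
    """Rough signal that rises when lending and prices grow simultaneously and
    when a larger net share of banks reports tightening underwriting standards."""
    joint_expansion = max(cre_lending_to_gdp_growth, 0.0) * max(cre_price_growth, 0.0)
    return joint_expansion + 0.5 * max(net_share_tightening, 0.0)  # weights are arbitrary


# A property earning $60,000 a year and priced at $1.2 million has a 5.0 percent cap rate.
print(f"cap rate: {cap_rate(60_000, 1_200_000):.1%}")
# Hypothetical inputs: 4 percent growth in CRE lending relative to GDP,
# 8 percent CRE price growth, and a 10 percent net share of banks tightening.
print(f"risk signal: {toy_cre_risk_signal(0.04, 0.08, 0.10):.4f}")
```

In the report's models, the analogous relationships are estimated from historical data on aggregate CRE losses rather than assumed, as described in app. II.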
In all of these examinations, the regulators specifically assessed whether each bank had adequate risk management practices and capital for managing its CRE concentration risk and generally found that the banks had adequate risk management practices and capital. In a few examinations, regulators differed in how they addressed supervisory concerns about a bank’s CRE-related risk management practices. Regulators Examined Whether Banks with Relatively High CRE Concentrations Had Adequate Practices and Capital to Manage Their CRE Concentration Risk In our review of a nongeneralizable sample of 54 examinations conducted from 2013 through 2016, we found that FDIC, Federal Reserve, and OCC subjected banks with relatively high concentrations in CRE loans to greater supervisory scrutiny. In both their 2006 CRE guidance and 2015 CRE statement, the regulators indicated that banks with relatively high CLD or total CRE concentrations should maintain risk management practices commensurate with the level and nature of their concentration risk. The 2006 CRE guidance recognized that the sophistication of a bank’s CRE risk management practices depends on, among other things, the level and nature of its CRE concentrations and associated risk. As noted earlier, the guidance notes that a bank’s risk management practices should address seven internal control areas: (1) board and management oversight; (2) portfolio management; (3) management information systems; (4) market analysis; (5) credit underwriting standards; (6) portfolio stress testing and sensitivity analysis; and (7) credit risk review function. Based on our analyses, we found that the 2006 CRE guidance’s risk management framework is adequately designed to help ensure that banks effectively identify, measure, monitor, and control their CRE concentration risk. For example, the guidance is consistent with credit and concentration risk principles issued by international standard- setting bodies. Of the 54 reports of examination that we reviewed, 41 of them covered banks whose CLD or total CRE concentrations exceeded the CLD concentration threshold, total CRE concentration threshold, or both thresholds set forth in the 2006 guidance. In all of these examinations, we found that FDIC, Federal Reserve, and OCC examiners generally assessed whether each bank had implemented adequate risk management practices to manage their concentration risk. As shown in figure 6, in 26 of the 41 examinations, FDIC, Federal Reserve, and OCC examiners did not find any weaknesses in the banks’ CRE risk management practices across the seven internal control areas, but did find weaknesses in the remaining 15 examinations. In 15 of the 41 examinations we reviewed, FDIC, Federal Reserve, and OCC examiners found the banks had CRE-related risk management weaknesses in at least one of the seven internal control areas. Examiners most frequently found risk management weakness in three internal control areas: board and management oversight (11 instances), management information systems (8 instances), and stress testing (7 instances). To a slightly lesser extent, examiners found weaknesses in portfolio management, credit underwriting standards, and credit risk review function. Examiners communicated their supervisory concerns to these 15 banks in their reports of examinations. In 12 of the examinations, examiners included MRAs, MRBAs, or MRIAs in their reports of examination that directed the banks to correct their risk management weaknesses. 
In the other three examinations, examiners included recommendations or other notes in their reports of examination that generally directed the banks to correct their risk management weaknesses. Consistent with the 2006 CRE guidance, we found that examiners generally did not use the CLD or total CRE concentration thresholds as limits on bank CRE lending. With two exceptions, examiners did not direct banks that exceeded the CLD or CRE threshold to reduce their concentrations but rather focused on ensuring that the banks’ risk management practices were commensurate with the nature and level of their concentration risk. In the two exceptions, examiners found the banks’ practices and capital inadequate for managing their CLD or CRE concentration risk and directed the banks to reduce their concentrations and improve their risk management practices. We found that FDIC, Federal Reserve, and OCC examiners varied in the extent to which they documented—in the reports of examination and supporting workpapers—the scope of their review of banks’ CRE-related risk management practices and findings. For example, we were not always able to determine whether examiners found a bank’s practices adequate in one or more of the seven internal control areas based on our review of the report of examination and, if available, supporting workpapers. According to the regulators, reports of examinations are used primarily to document practices found to be inadequate and not practices found to be adequate. Moreover, the regulators told us that their examiners recently have been required to use a CRE examination module to document their assessment and findings of banks with concentrations exceeding the CLD or CRE threshold. Capital and Concentration Risk In the 41 examinations we reviewed where banks exceed one of the concentration thresholds, FDIC, Federal Reserve, and OCC examiners assessed whether the banks generally had capital commensurate with their CRE concentration risk. In 34 of the examinations, examiners determined that the banks’ capital levels were adequate for managing their CLD or total CRE concentration risk. In 7 of the examinations, examiners determined that the banks’ capital levels were inadequate. For six of the seven banks, examiners directed the banks in the reports of examination to reduce or manage their CRE concentrations in light of inadequate capital. In the case of one bank, examiners required the bank to comply with a previous formal enforcement action that addressed the need for the bank to adhere to its board-approved capital plan. Review of CRE-Related Risk Management Practices in Subsequent Examination Cycles For banks with relatively high CLD or total CRE concentrations, we found that Federal Reserve and OCC examiners assessed the banks’ CRE- related risk management practices in subsequent examinations. In our review of 41 examinations of banks that exceeded the CLD or CRE threshold, 26 of them covered two examination cycles of 13 banks conducted from 2013 through 2016. We found that examiners assessed the banks’ practices for managing their concentration risk in both examinations. In 14 examinations (covering 7 banks), examiners found that the banks had adequate risk management practices in both examinations. In six examinations (covering three banks), examiners found aspects of the banks’ risk management practices to be inadequate in their 2013 or 2014 examination and noted their supervisory concerns in the reports of examination. 
In the subsequent examinations, the examiners found that the banks had adequately addressed the previously identified risk management weaknesses. In six examinations (covering three banks), examiners found the banks’ practices for managing their CRE concentration risk to be adequate in the 2013 or 2014 examinations but inadequate in the subsequent examinations. The examiners issued the banks MRAs or MRIAs or took an informal enforcement action. Regulators Generally Did Not Examine CRE-Related Risk Management Practices of Banks with Concentrations below the CLD or Total CRE Threshold For banks with concentrations below the CLD or total CRE threshold, we found that regulators generally did not examine the banks’ CRE-related risk management practices. Thirteen of the 54 examinations we reviewed covered banks that did not exceed the CLD or CRE thresholds. Although the banks did not exceed either threshold, OCC examiners assessed the banks’ CRE-related risk management practices in 3 of the examinations. In 1 examination, examiners determined that the bank’s CRE-related risk management practices were adequate. The other 2 examinations covered subsequent cycle examinations of the same bank. In the first examination, examiners found that the bank had adequate practices for managing risk associated with its CRE loans but directed the bank through an MRA to incorporate stress testing of the loan portfolio into its monitoring. In the subsequent examination, the examiners found that the bank had addressed the MRA. In the other 10 examinations, FDIC, Federal Reserve, and OCC examiners did not mention in the report of examination the banks’ practices for managing the risk associated with their CRE loans. FDIC, Federal Reserve, and OCC officials told us that examiners use their professional judgment in determining whether to review a bank’s CRE-related risk management practices if the bank’s concentration is below the CLD and CRE threshold. This approach is consistent with the overall risk-based supervisory process used by the regulators, which focuses examiner resources on assessing bank management’s ability to identify and control risks. For example, FDIC’s examination guidelines note that examiners should focus their resources on a bank’s highest risk areas when assessing risk management programs, financial conditions, and internal controls. According to the guidance, the exercise of examiner judgment to determine the scope and depth of review in each functional area is crucial to the success of the risk-focused supervisory process. Regulators Differed in How They Addressed a Few Supervisory Concerns about Banks’ CRE-Related Risk Management Practices In a few examinations, we found differences across regulators in how they addressed supervisory concerns about banks’ CRE-related risk management practices because of differences in the regulators’ policies. In our nongeneralizable sample of 54 examinations, Federal Reserve, FDIC, and OCC examiners included CRE-related supervisory concerns, such as recommendations, MRAs, or MRBAs, in 22 of the reports of examinations. Although the regulators have policies for identifying and communicating supervisory concerns, their policies use different criteria. For example, OCC’s policies instruct examiners to use MRAs to describe practices that a bank must implement or correct to address a deficiency and not to use MRAs to require enhancements to bank practices that meet acceptable standards. 
However, the Federal Reserve's and FDIC's policies do not expressly include such criteria. Consistent with their policies, OCC examiners included MRAs in the reports of examination that we reviewed only when they found a bank's CRE-related risk management practices to be inadequate. In contrast, in 2 reports of examination, we found that FDIC examiners did not find the banks' CRE-related risk management practices to be inadequate but included MRBAs to direct the banks to enhance or sustain certain CRE-related risk management practices. Similarly, in 1 report of examination, Federal Reserve examiners found that the bank's risk management practices and capital were adequate for its CRE concentrations but included an MRA to require the bank to enhance its capital plan to include concentration risk considerations.

FDIC, Federal Reserve, and OCC Have Recently Taken Formal Enforcement Actions against Banks for Not Adequately Managing Their CRE Concentration Risk

In addition to their examinations, federal banking regulators have taken informal and formal enforcement actions against banks for not adequately managing their CRE concentration risk. In general, initial consideration and determination of whether informal or formal action is required usually results from examination findings. Unlike informal enforcement actions, formal enforcement actions are published or publicly available. From 2013 through 2016, FDIC, the Federal Reserve, and OCC took formal enforcement actions against banks for not adequately managing risks related to their CRE concentrations, including those outlined in the jointly issued 2006 CRE guidance. FDIC took 22 formal enforcement actions against banks for matters related to their CRE concentrations during this period. The Federal Reserve took 2 formal enforcement actions against banks for matters related to their risk management of CRE lending. OCC took 11 formal enforcement actions against banks for matters related to their CRE concentrations during this same period. The majority of these formal enforcement actions discussed the 2006 CRE guidance and directed the banks to improve their practices for managing their CRE concentration risk. For example, in a number of formal enforcement actions, the regulators ordered the banks to revise their written concentration risk management programs for identifying, monitoring, and controlling risks associated with concentrations of credit, consistent with the 2006 CRE guidance.

Agency Comments

We provided a draft of this report to FDIC, the Federal Reserve, and OCC for review and comment. The agencies provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and FDIC, the Federal Reserve, and OCC. This report will also be available at no charge on our website at http://www.gao.gov. Should you or your staff have questions concerning this report, please contact me at (202) 512-8678 or evansl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII.
Appendix I: Objectives, Scope, and Methodology Our objectives in this report were to examine: (1) trends in the commercial real estate (CRE) lending markets, including changes in the level of credit and concentration risk in the markets, and (2) actions federal banking regulators took through their examinations to help ensure that banks with CRE concentrations are effectively managing the related risks. To examine trends in the CRE lending markets, we reviewed academic literature and prior GAO work and interviewed officials from the federal banking regulators and private data providers. Specifically, we interviewed officials at the Board of Governors of the Federal Reserve System (Federal Reserve), the Federal Deposit Insurance Corporation (FDIC), and the Office of the Comptroller of the Currency (OCC) to help identify potential indicators of risk in CRE markets. To further inform our assessment of risk, we reviewed prior GAO work on the lessons learned from prior banking crises and the use of early warning models for monitoring the financial system. We also reviewed academic research on early warning models of banking and real estate-related crises. To report trends and assess risk, we reviewed and analyzed a range of data that we considered to be reflective of various aspects of risk in CRE lending markets. Specifically, we reviewed and analyzed commercial property vacancy data from REIS (a private commercial real estate data provider); commercial property construction data from the U.S. Census Bureau; data on delinquencies and charge-offs on bank CRE loans from the Federal Reserve; data on commercial property prices and capitalization rates from Real Capital Analytics (a private commercial real estate data provider); FDIC data on bank CRE lending; and Federal Reserve data on underwriting standards. We evaluated trends in these data and used a subset of these data to estimate several predictive models of aggregate losses on bank CRE loans. (See app. II for more information on our predictive models.) To examine actions taken by federal regulators to help ensure that banks with high CRE concentrations are effectively managing the related risks, we reviewed and analyzed their relevant guidance and regulations on bank CRE lending, examination policies and procedures (e.g., examination manuals and modules), studies and other publications on risks in the banking industry, and formal enforcement actions taken from 2013 through 2016 for CRE-related matters. In addition, we analyzed Consolidated Reports of Condition and Income data from SNL Financial for the period from 2011 through 2016 to calculate banks’ construction and land development (CLD) and CRE concentrations during the period. Specifically, we used the concentration formulas in the 2006 CRE concentration guidance (jointly issued by the federal banking regulators) to calculate banks’ CLD and CRE concentrations and identify banks whose CRE concentrations exceeded, in full or in part, the guidance’s CRE concentration thresholds during part or all of the time frame. Based on whether the banks’ CRE concentrations exceeded the thresholds and other criteria discussed below, we selected a nongeneralizable sample of 40 banks overseen by FDIC, the Federal Reserve, or OCC. For the banks in our sample, we requested from the regulators copies of the reports of examination and, if available, related workpapers prepared by the regulators based on their full-scope examinations of the banks done from 2013 through 2014, and from 2015 through 2016. 
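As an illustration of the concentration screen described above, the sketch below shows how a bank's CLD and total CRE concentration ratios might be computed from call report-style fields and compared with the 2006 guidance's criteria. The field names are hypothetical, and the numeric criteria (CLD loans of 100 percent or more of total capital, or total CRE loans of 300 percent or more of total capital combined with growth of 50 percent or more over the prior 36 months) are taken from the 2006 interagency guidance itself rather than from this report; the guidance treats them as screening criteria, not lending limits.

```python
from dataclasses import dataclass

# Screening criteria stated in the 2006 interagency CRE guidance (shown here
# for illustration; they are criteria for heightened scrutiny, not limits).
CLD_THRESHOLD = 1.00         # CLD loans >= 100% of total capital
CRE_THRESHOLD = 3.00         # total CRE loans >= 300% of total capital...
CRE_GROWTH_THRESHOLD = 0.50  # ...combined with >= 50% CRE growth over 36 months

@dataclass
class BankReport:
    """Hypothetical call report-style fields for one bank."""
    total_capital: float
    cld_loans: float              # construction and land development loans
    total_cre_loans: float        # total CRE loans per the guidance's definition
    cre_loans_36_months_ago: float

def concentration_flags(b: BankReport) -> dict:
    """Compute concentration ratios and flag whether either criterion is met."""
    cld_ratio = b.cld_loans / b.total_capital
    cre_ratio = b.total_cre_loans / b.total_capital
    cre_growth = (b.total_cre_loans - b.cre_loans_36_months_ago) / b.cre_loans_36_months_ago
    return {
        "cld_ratio": cld_ratio,
        "cre_ratio": cre_ratio,
        "exceeds_cld_criterion": cld_ratio >= CLD_THRESHOLD,
        "exceeds_cre_criterion": cre_ratio >= CRE_THRESHOLD and cre_growth >= CRE_GROWTH_THRESHOLD,
    }

# Example: a bank with $100 million of capital, $120 million of CLD loans, and
# $350 million of total CRE loans that grew 60 percent over three years would
# be flagged under both criteria.
print(concentration_flags(BankReport(100e6, 120e6, 350e6, 350e6 / 1.6)))
```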
In addition to using banks' CRE concentrations as a basis to select examinations, we judgmentally selected a nonprobability sample of banks based on the following criteria:

Total asset size: We considered the size of the banks based on their total assets and selected banks from each of the following three ranges: (1) banks with $1 billion or more in total assets, (2) banks with $100 million or more but less than $1 billion in total assets, and (3) banks with less than $100 million in total assets.

Primary regulator: We considered the primary regulator of the banks and selected a sample of 40 banks that resulted in a total of 20 examinations to review from each regulator.

Geographic distribution: We selected banks to ensure that at least one bank was from each of the four regions of the U.S. Census and each of the nine divisions within those regions.

Based on the 40 banks we selected, we reviewed and analyzed 54 reports of examination and, if available, the related workpapers. We analyzed the examinations using criteria or other requirements specified in the 2006 CRE guidance jointly issued by the regulators and their examination policies and procedures. We did not review six examinations of banks supervised by the Federal Reserve. We also interviewed officials from FDIC, the Federal Reserve, and OCC, and from a national banking association about bank CRE lending and applicable CRE guidance and requirements. For the data we analyzed under both of our objectives, we took a number of steps to assess the reliability of the data, including interviewing data providers; corroborating trends across multiple data sources; reviewing related documentation; inspecting data for missing values, outliers, or other errors; and reviewing relevant prior GAO work. We determined that these data were sufficiently reliable for our reporting objectives. We conducted this performance audit from January 2017 to March 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: GAO Predictive Models of Aggregate Losses on Bank Commercial Real Estate Loans

We developed and estimated several models of aggregate losses on bank commercial real estate (CRE) loans. These models attempt to predict future aggregate charge-offs using contemporary indicators of potential risks. We incorporated indicators of risk based on the cross-country research literature on early warning models of banking risk and prior GAO work on identifying early warning models as tools that could assist financial regulators in assessing risk. One study summarized the overall intuition for models of this class in the following way: "imbalances manifest themselves in the coexistence of unusually rapid cumulative growth in private sector credit and asset prices." Our results were consistent with this concept and extend the aggregate early warning model literature to a sectoral model. As such, our models incorporate measures of CRE property prices, bank lending volumes, and bank loan underwriting standards.
The models predict charge-offs 2–3 years into the future (the dependent variable is the average charge-off rate for 8 through 11 quarters into the future), using commercial bank charge-off rates from the Board of Governors of the Federal Reserve System (Federal Reserve), first quarter 1991 to second quarter 2017. (See below for an illustrative regression equation for one of these models.) We began with two model variations, one based on the levels of key variables and the other based on their growth rates, using the following independent variables, respectively: “Level” model: Level of CRE prices to gross domestic product (GDP), level of bank CRE lending to GDP, the interaction of the level of CRE prices and lending, and the net percentage of banks tightening underwriting standards on CRE loans. “Growth” model: Growth rate of CRE prices over the last year, growth rate of bank CRE lending over the last year, interaction of price and lending growth, and the net percentage of banks tightening underwriting standards on CRE loans. By inspection, the model based on levels also captured key aspects of the evolution of aggregate losses on bank CRE loans in recent decades—for example, low charge-offs prior to the crisis, the rapid increase during crisis, and very low charge-offs in recent years. In this model higher losses are predicted by tightening underwriting standards, and the interaction of (i.e., simultaneous increase in) the level of CRE prices and the level of CRE lending. The bulk of the explanatory power of the model appears to come from the interaction of the level of CRE prices and the level of CRE lending—consistent with Borio and Drehmann’s view that the coexistence of rapidly increasing credit and prices is associated with greater risk. These results are also consistent with a more general theory, for example, that periods of economic stability induce greater risk-taking over time, bidding up asset prices and loosening underwriting standards until ultimately increased valuations become unsustainable, prices fall, and borrowers begin to default. We estimated a number of additional models for robustness, to determine if goodness-of-fit and forecasts could be improved markedly, and to assess the degree of forecast uncertainty. For example we estimated a model with a censored dependent variable and used information criteria to select models that combined elements from our initially separate models based on growth rates and levels as well as a model that includes current charge-offs. In figure 7, we report the general trend in expected future charge-offs as well as convey forecast uncertainty based on differences in the forecasts of three of these models. In figure 8, we convey forecast uncertainty based on the 75 percent confidence interval for a combined model that we selected based on information criteria. Implicit in this exercise is the assumption that the data-generating process is reasonably stable—as a result, structural change associated with new financial products, new risk management tools, and new legal and regulatory frameworks could reduce the stability of the data- generating process. We interpret our results and forecasts in light of these potential limitations. Specifically, we do not interpret model results as concrete, precise predictions of aggregate commercial real estate losses but rather as an additional, general indication of the degree of risk in bank CRE lending. 
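The illustrative regression equation referenced above is not reproduced in this excerpt. Based on the variables listed for the "level" model, one plausible way to write it, offered only as a sketch and not as the equation actually estimated, is:

```latex
\overline{CO}_{t+8,\,t+11} = \beta_0
  + \beta_1 \frac{P_t}{GDP_t}
  + \beta_2 \frac{L_t}{GDP_t}
  + \beta_3 \left( \frac{P_t}{GDP_t} \cdot \frac{L_t}{GDP_t} \right)
  + \beta_4 U_t + \varepsilon_t
```

where the dependent variable is the average charge-off rate 8 through 11 quarters ahead, P is a measure of CRE prices, L is bank CRE lending, GDP is gross domestic product, and U is the net percentage of banks tightening underwriting standards. The interaction term corresponds to the source of most of the model's explanatory power noted above.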
We mitigate risks associated with estimating this type of model with appropriate diagnostics and out-of-sample testing, and by developing the model in the context of the well-established early warning literature. That said, some inevitable limitations remain, including the potential omission of important risk factors and other approximations associated with our specification (e.g., our choice of a linear functional form). In addition, diagnostics for detecting nonstationary time series are imperfect, especially with small sample sizes, which may inflate our measures of statistical significance and traditional goodness-of-fit measures like r-squared. These biases may be present, however, in models that still generate useful predictions. In this "small data" context there is also a risk of fitting (or over-fitting) the model to predict a particular credit event—though, again, this risk is mitigated somewhat in the context of the broad cross-country early warning literature and the use of out-of-sample testing.

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Richard Tsuhara (Assistant Director), Tarek Mahmassani (Analyst in Charge), Abigail Brown, Tarik Carter, M'Baye Diagne, Michael Hoffman, Risto Laboski, Marc Molino, Jessica Sandler, Jennifer Schwartz, and Andrew Stavisky made significant contributions to this report.
Why GAO Did This Study In 2006, federal banking regulators jointly issued guidance that described their expectations for sound risk management practices for banks with CRE concentrations. The guidance includes two CRE thresholds that regulators use to identify banks that are potentially exposed to significant CRE concentration risk and could be subject to greater supervisory scrutiny. Concentrations in CRE loans at U.S. banks have been steadily increasing—raising safety and soundness concerns. In December 2015, the regulators jointly issued a public statement to remind banks of the 2006 CRE guidance. In light of the joint 2015 statement and GAO's ongoing monitoring of regulatory efforts to identify and respond to emerging threats to the banking system, GAO examined (1) trends in the CRE lending market, including changes in risk, and (2) actions taken by regulators to help ensure that banks with CRE concentrations are effectively managing the related risks. To address these issues, GAO analyzed CRE-related data; reviewed agency policies and guidance; and reviewed a nongeneralizable sample of 54 bank examinations conducted from 2013 through 2016 based on the banks' CRE concentrations, total assets, primary regulator, and geographic location. GAO also interviewed officials from the federal banking regulators. What GAO Found While the commercial real estate (CRE) sector has recovered since the 2007–2009 financial crisis, GAO's trend and econometric analyses generally indicate that risk in CRE lending by banks has increased over the past several years. Since the early 2000s, community banks have tended toward providing CRE loans more than other kinds of loans. Indicators of CRE market conditions and loan performance have been improving since 2011. At the same time, GAO's analyses of changes in CRE underwriting standards, property prices, and other data suggest that credit and concentration risks have increased in bank CRE lending. For example, the number of banks with relatively high CRE concentrations—measured by the ratio of a bank's CRE loans to its total capital—has been increasing. In addition, commercial property prices have been increasing rapidly, and property valuations also have risen in recent years. Similarly, GAO's predictive econometric models of CRE loan performance suggest that risk has increased, based largely on the simultaneous increase in bank CRE lending and CRE prices observed over the last several years, but is lower than the level associated with the 2007–2009 financial crisis. GAO found that federal banking regulators subjected banks with relatively high CRE concentrations to greater supervisory scrutiny based on its review of a nongeneralizable sample of 54 bank examinations covering 40 banks done by the Federal Deposit Insurance Corporation, Board of Governors of the Federal Reserve System, and Office of the Comptroller of the Currency from 2013 through 2016. Of the 54 examinations that GAO reviewed, 41 of them covered banks with relatively high CRE concentrations. In all of these examinations, regulators examined whether the banks had adequate risk management practices and capital to manage their CRE concentration risk. In 26 of the 41 examinations, regulators did not find any risk management weaknesses. However, in 15 of the 41 examinations, regulators found the banks had weaknesses in one or more risk management areas, such as board and management oversight, management information systems, or underwriting. 
The regulators generally communicated their findings to the banks in the reports of examination and directed the banks to correct their risk management weaknesses.
Background The United States has many international agreements that require treaty partners to provide certain information to IRS, which can help prevent the use of foreign bank accounts to facilitate tax evasion. FATCA goes much further, requiring FFIs to report more detailed information to IRS about their U.S. customers annually. These provisions are important developments in efforts to combat tax evasion by U.S. persons holding investments in offshore accounts. FATCA generally requires certain taxpayers to report foreign financial accounts and other specified foreign financial assets whose aggregate value exceeds specified thresholds to IRS on Form 8938. These taxpayers must report these assets and income generated from such assets to IRS with their tax return on Form 8938. These thresholds vary by filing status—such as single or married filing jointly—and by domestic or foreign residency. FATCA also promotes third-party reporting of foreign financial assets by requiring a withholding agent to withhold 30 percent on certain payments to an FFI unless the FFI or the jurisdiction in which the FFI is located has entered into an agreement with the United States to report certain account information of their U.S. customers. Under such an agreement, participating FFIs report detailed information to IRS annually about accounts held by their U.S. customers using an IRS Form 8966, FATCA Report (Form 8966). According to IRS, FATCA improves visibility into taxable income from foreign sources, and enhances the agency’s ability to identify and pursue taxpayer noncompliance. For example, FATCA allows IRS to compare information reported by FFIs on Forms 8966 to information reported by U.S. persons on Forms 8938. According to IRS, this comparison can be used to ensure taxpayers and FFIs are properly reporting foreign financial assets and income from international investments. This type of comparison is a common IRS enforcement technique. For example, IRS can directly compare information it receives from financial institutions’ IRS Form 1099-INT, Interest Income, against a tax return to determine if the taxpayer reported income generated from interest earned. To facilitate FATCA implementation for FFIs operating in jurisdictions with laws that would prohibit FFIs from complying with the terms of the FFI agreement, Treasury developed two alternative intergovernmental agreements (IGA)—Model 1 and Model 2—to facilitate the effective and efficient implementation of FATCA by removing partner jurisdictions’ legal impediments to comply with FATCA reporting requirements, and reducing burdens on FFIs located in partner jurisdictions. FFIs from countries with Model 1 IGAs report information on U.S. persons’ accounts to their respective host country tax authorities (HCTAs). The HCTAs, in turn, compile the information from FFIs and transmit it to IRS. In contrast, FFIs from countries with Model 2 IGAs, or countries treated as not having an IGA in effect, directly report information on U.S. persons’ accounts to IRS. Separate from the FATCA requirements, regulations implementing the Bank Secrecy Act of 1970 (BSA) also impose a separate self-reporting requirement for foreign accounts. Specifically, certain taxpayers and residents are required to file an FBAR with FinCEN annually if they have financial interest or signature or other authority over one or more foreign financial accounts with a total of more than $10,000, regardless of whether they reside within or outside the United States. 
Federal, state, and local law enforcement agencies can use information from these reports to combat financial crimes, including terrorist financing and tax evasion. Appendix IV provides a comparison of Form 8938 and FBAR reporting requirements. Figure 1 depicts the flow of foreign financial account information from U.S. persons and FFIs to IRS and FinCEN through the FATCA and FBAR reporting processes. FATCA Data Limitations and Lack of a Comprehensive Strategy Have Hampered IRS Efforts to Increase Compliance Incomplete and Inaccurate Reporting of Taxpayer Identification Numbers by FFIs Has Limited IRS’s Efforts to Match Account Information for Compliance Purposes As part of the FATCA reporting requirements, IRS collects information on financial accounts through forms and reports submitted by both taxpayers and FFIs. As part of this effort, IRS requires taxpayers to identify their TINs on Forms 8938 they submit. IRS also requires participating FFIs to report the TINs of each account holder who is a specified U.S. person on Forms 8966. IRS intends to use reported TINs to link Form 8938 data filed by taxpayers to Form 8966 data filed by the FFIs to ensure that taxpayers and FFIs are properly reporting foreign financial assets. However, IRS often could not link account information collected from FFIs to the account’s owner because of incorrect or missing TINs. In July 2018, the Treasury Inspector General for Tax Administration (TIGTA) found that almost half of new Forms 8966 filed by FFIs did not include a TIN or included an invalid TIN. A consulting firm working with FFIs to implement FATCA reporting requirements told us that FFIs encountered significant challenges obtaining accurate TINs from U.S. persons as part of the self-certification process. For instance, FFIs encountered situations where U.S. persons provided incomplete or inaccurate TINs—such as Social Security Numbers (SSN) with less than nine digits—on forms used to self-certify their status as U.S. persons. FFIs also encountered situations where U.S. persons may not have obtained TINs or were unwilling to provide them to FFIs. Additionally, banking associations told us that it has taken time, effort, and expense for FFIs to report TINs, as they had to upgrade computer systems to collect and record TINs from U.S. customers. Finally, Treasury told us that jurisdictions that have an IGA with the United States but no legal requirement to collect TINs are not in compliance with the requirements of the IGA. Treasury and IRS determined that some FFIs reporting from countries with Model 1 IGAs needed additional time to implement procedures to obtain and report required U.S. TINs for preexisting accounts that are U.S. reportable accounts. Consequently, IRS provided a transition period, through the end of 2019, for compliance with the TIN requirements for FFIs under Model 1 IGAs. Specifically, in September 2017, IRS issued a notice modifying procedures for FFIs reporting from countries with Model 1 IGAs to become compliant with TIN reporting requirements for preexisting accounts. For calendar years 2017-2019, IRS will not determine that certain FFIs in countries with Model 1 IGAs are significantly noncompliant with their obligations under the IGA solely as a result of a failure to report U.S. TINs associated with the FFI’s U.S. 
reportable accounts, providing they (1) obtain and report the date of birth of each account holder and controlling person whose TIN is not reported, (2) make annual requests for missing TINs from each account holder, and (3) search electronically searchable data maintained by such FFIs for missing required U.S. TINs before reporting information that relates to calendar year 2017 to a partner jurisdiction. As a result, even without any further extensions, calendar year 2020 is the earliest IRS will be enforcing requirements for FFIs from countries with Model 1 IGAs to provide accurate and complete information on U.S. account holders’ TINs to IRS. Without valid TINs on Forms 8966 submitted by FFIs, according to IRS officials, IRS faces significant hurdles in matching accounts reported by FFIs to those reported by individual tax filers on their Forms 8938. As a result, IRS must rely on information such as names, dates of birth, and addresses that the filers and/or FFIs may not consistently report. Without data that can be reliably matched between Forms 8938 and 8966, IRS’s ability to identify taxpayers not reporting accurate or complete information on specified foreign financial assets is hindered, interfering with its ability to enforce compliance with FATCA reporting requirements, and ensure taxpayers are paying taxes on income generated from such assets. In July 2018, TIGTA reported that IRS lacked success in matching FFI and individual taxpayer data because reports FFIs filed did not include or included invalid TINs. This, in turn, affected IRS’s ability to identify and enforce requirements for individual taxpayers. TIGTA recommended, among other things, that IRS initiate compliance efforts to address and correct missing or invalid TINs on Form 8966 filings from FFIs from countries with Model 2 IGAs or without any IGAs with the United States. IRS management said it disagreed with this recommendation because a system to ensure validation of every TIN upon submission of a Form 8966 would be cost prohibitive. However, IRS management said that IRS would address invalid TINs as they are uncovered on other compliance efforts, such as initiating development of a data product to automate risk assessments across the FATCA filing population. IRS also said it continues efforts to systematically match Form 8966 and Form 8938 data to identify nonfilers and underreporting related to U.S. holders of foreign accounts. However, IRS management told us they are waiting until they have a full set of data, including TINs, before doing analysis to develop a compliance strategy. According to TIGTA, IRS management believed that having the FFI’s Global Intermediary Identification Number (GIIN) on Form 8938, which is filed by the taxpayer, would help with matching records. However, Form 8938 instructions identify that the field is optional for taxpayers to complete. TIGTA recommended that to reduce taxpayer burden in obtaining GIINs from FFIs, IRS add guidance to Form 8938 instructions to inform taxpayers on how to use the FFI List Search and Download Tool on the IRS’s website to obtain an FFI’s GIIN. IRS agreed with this recommendation. However, even if an individual taxpayer provided GIINs, IRS may continue to have difficulty matching accounts with U.S. taxpayers if the TIN and name of the account holder reported on the Form 8966 do not match the TIN and name of the taxpayer on the Form 8938. 
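To make the matching difficulty concrete, the following is a minimal sketch, under assumed record layouts, of the kind of two-step linkage at issue: records are matched first on a valid nine-digit TIN and then, as a fallback, on other identifiers such as normalized name and date of birth. The field names and normalization rules are illustrative assumptions and do not represent IRS's actual matching logic.

```python
import re

def normalize_tin(tin: str) -> str:
    """Strip separators and return digits only."""
    return re.sub(r"\D", "", tin or "")

def valid_tin(tin: str) -> bool:
    """Treat a TIN as valid here only if it contains exactly nine digits."""
    return len(normalize_tin(tin)) == 9

def normalize_name(name: str) -> str:
    """Crude normalization: uppercase, drop punctuation, collapse spaces."""
    return " ".join(re.sub(r"[^A-Za-z ]", "", name or "").upper().split())

def match_accounts(form_8966_records, form_8938_filers):
    """Link FFI-reported accounts (Form 8966) to taxpayer filings (Form 8938).

    Both inputs are lists of dicts with assumed 'tin', 'name', and 'dob' keys.
    Returns (matched_pairs, unmatched_accounts).
    """
    by_tin = {normalize_tin(f["tin"]): f for f in form_8938_filers if valid_tin(f["tin"])}
    by_name_dob = {(normalize_name(f["name"]), f["dob"]): f for f in form_8938_filers}

    matched, unmatched = [], []
    for acct in form_8966_records:
        tin = normalize_tin(acct.get("tin", ""))
        key = (normalize_name(acct.get("name", "")), acct.get("dob"))
        if valid_tin(tin) and tin in by_tin:
            matched.append((acct, by_tin[tin]))        # primary: exact TIN match
        elif key in by_name_dob:
            matched.append((acct, by_name_dob[key]))   # fallback: name and date of birth
        else:
            unmatched.append(acct)                     # candidate for compliance follow-up
    return matched, unmatched
```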
IRS officials said they are aware of these difficulties and have attempted to match Forms 8938 and 8966 based on other criteria such as dates of birth. In its response to our draft report, IRS said that all financial institutions and foreign tax authorities that file required account information receive a notification listing administrative and other minor errors contained in their reporting. According to IRS, its Large Business and International division follows up with foreign tax authorities regarding these errors to ensure the tax authorities are working with financial institutions to correct these errors in compliance with the countries’ IGAs. IRS added it has initiated a campaign addressing FFIs that do not meet their compliance responsibilities with respect to account opening requirements. Additionally, IRS drafted a risk acceptance form and tool addressing risks in implementing FATCA compliance and business process capabilities. This risk assessment focused on the limitations IRS faces due to budget constraints, but did not address the specific risks it faces from not receiving complete and valid TINs on U.S. account holders. We previously reported that risk management could help stakeholders make decisions about assessing risk, allocating resources, and taking actions under conditions of uncertainty. Key management practices for risk management we identified from our prior work include identifying, analyzing, and prioritizing risks; developing a mitigation plan to address identified risks; implementing the plan; and monitoring, reporting, and controlling risks. Without developing a risk mitigation plan to address risks IRS faces from not receiving complete and valid TINs moving forward, IRS may lose opportunities to adjust its compliance programs to better identify U.S. persons who are not fully reporting specified foreign financial assets as required under FATCA. IRS Databases Lack Consistent and Complete FATCA and Taxpayer Data Useful for Compliance Enforcement and Research Several IRS databases store data collected from individuals’ electronic and paper filings of Form 8938 and/or elements of parent individual tax returns to which the Form 8938 is attached—the filer’s country of residence and filing status—used to determine specified reporting thresholds for Form 8938 filers. Additionally, data from these databases and other sources are transferred downstream to IRS’s Compliance Data Warehouse (CDW)—a database used for research and analytical purposes. We extracted data from copies of Individual Return Transaction File (IRTF) and Modernized Tax Return Database (MTRDB) data copied into CDW to obtain information reported on Forms 8938 and relevant information from parent tax returns, such as filing status and filers’ country of residence. We found that IRTF and MTRDB had inconsistent and incomplete data. For example, neither database had consistent and complete information on foreign financial account and other asset information submitted by Form 8938 filers. While IRS officials told us that IRTF is the authoritative source for filers of Form 8938, it does not store account and other asset information submitted on Forms 8938. Additionally, IRS officials said MTRDB is not designed to store information submitted on paper filings of Forms 8938 and parent tax returns. Officials from IRS’s Research, Applied Analytics and Statistics (RAAS) division also noted that CDW did not have reliable information from Form 8938 paper filings. 
Because of the lack of foreign financial asset information from such filings, we could not report complete information on assets reported by Form 8938 filers. Further, IRS does not provide instructions to CDW users on how to extract appropriate data from CDW—such as data copied from IRTF and MTRDB—leading to confusion on which databases to use for extracting Form 8938 and relevant parent tax return data. For example, five distinct tables within CDW are required to identify the TIN, parent form, filing status, country of residence, and amount of foreign assets accurately. Without clear explanations of how data in each of these tables relate to each other and to the underlying filings, errors could be introduced into CDW users’ analyses of foreign asset information. Standards for Internal Control in the Federal Government notes that management should use quality information to achieve the entity’s objectives. One attribute of this principle includes processing data into quality information that is appropriate, current, complete, accurate, accessible, and provided on a timely basis. Additionally, the Internal Revenue Manual states that IRS needs to measure taxpayer compliance so that customer-focused programs and services can be enhanced or developed so that compliance information and tools can be improved. According to IRS officials, IRS researchers have been taking additional steps to obtain and review Form 8938 and parent tax return data stored in the Integrated Production Model (IPM) database. They said IPM is the only database that contains complete data from individuals’ electronic and paper filings of Forms 8938 and relevant elements of parent tax returns. IRS officials said that RAAS has been working with IRS’s information technology (IT) division to obtain read-only access to IPM, and import Forms 8938 and 8966 data from IPM into CDW for analysis. However, as of February 2019, this effort has been delayed due to budget constraints. In its response to our draft report, IRS said that obtaining read-only access would require a new technical process and plans to continue working with IT on the feasibility and timeframe for enabling this access. Enabling access to consistent and complete Form 8938 and parent tax return data for compliance staff and researchers from RAAS and other IRS business units would help IRS strengthen its efforts to enforce compliance with FATCA reporting requirements and conduct research to bolster enforcement efforts. However, such efforts may be hampered until IRS can ensure readily available access to such data. IRS Stopped Pursuing a Comprehensive Plan to Leverage FATCA Data to Improve Taxpayer Compliance We previously recommended that IRS develop a broad strategy, including a timeline and performance measures, for how IRS intends to use FATCA information to improve tax compliance. IRS agreed with this recommendation and developed a strategy for FATCA in July 2013. IRS updated the strategy in 2016 by creating the FATCA Compliance Roadmap as a comprehensive plan to articulate IRS’s priorities to facilitate compliance with FATCA reporting requirements. The roadmap also provided an overview of compliance activities used solely for enforcing FATCA reporting requirements or enhancing existing compliance efforts. However, in July 2018, TIGTA reported that IRS had not updated the FATCA Compliance Roadmap since 2016, and had taken limited or no action on a majority of the planned activities outlined in it. 
We also found that IRS had not yet evaluated the effects of FATCA, including the effects on voluntary tax compliance. IRS documentation states that only 7 of 31 capabilities outlined in the FATCA Compliance Roadmap were delivered due to funding constraints. As of October 2018, IRS has stopped using the FATCA Compliance Roadmap and has not developed a revised comprehensive plan to manage efforts to leverage FATCA data to improve taxpayer compliance. According to IRS officials, IRS moved away from updating broad strategy documents, such as the FATCA Compliance Roadmap, to focus on individual compliance campaigns. These include a campaign to match individual tax filers to the reports from FFIs, and another campaign to identify FFIs with FATCA reporting requirements who are not meeting all of their obligations. According to what IRS told us, with the passage of time and as FATCA is becoming more integrated into agency operations, it has moved from updating the broad strategy documents focused on FATCA to working on compliance campaigns that incorporate FATCA into overall tax administration. Additionally, IRS and outside researchers plan to study the role of enforcement in driving overall patterns in reporting offshore assets and income generated from such assets. Though IRS maintains that FATCA is more integrated into its operations, TIGTA’s 2018 report concluded that IRS was still unprepared to enforce compliance with FATCA in part because it took limited or no action on the majority of planned activities outlined in the FATCA Compliance Roadmap. Documenting a framework for using FATCA reporting requirements to improve taxpayer compliance and measure their effect is consistent with three steps we found leading public sector organizations take to increase the accountability of their initiatives: (1) define clear missions and desired outcomes; (2) measure performance to gauge progress; and (3) use performance information as a basis for decision-making. We also previously reported that it is important for IRS to use a documented framework that defines a clear strategy, timeline, and plans for assessment. Having such a framework in place can help IRS better allocate resources and avoid unnecessary costs resulting from not having the necessary or appropriate data available to execute its objectives. In light of the challenges IRS faces to collect, manage, and use FATCA data to improve compliance in a resource-constrained environment, employing a comprehensive plan would help IRS maximize the use of collected data and better leverage individual campaigns to increase taxpayer compliance. Without such a plan, IRS’s ability to collect and leverage data collected under FATCA for compliance enforcement and other purposes is constrained. Analysis of FBAR and FATCA Data Provide Insights, Including the Possibility that Tens of Thousands of Forms 8938 May Have Been Filed Unnecessarily More than 900,000 Individual FBAR Filers Reported about $1.5 Trillion or More in Foreign Accounts in Both Calendar Years 2015 and 2016 We could not report on total values of foreign financial assets on Forms 8938 in tax years 2015 and 2016. However, we could provide a range of total maximum account values reported on FBARs during the same period. Specifically, we determined that more than 900,000 individuals filed FBARs in calendar years 2015 and 2016, and declared total maximum values of accounts ranging from about $1.5 trillion to more than $2 trillion each year. 
A little more than one in five—or about 21.7 percent—of the approximately 404,800 Forms 8938 filed with IRS in tax year 2016 were filed by U.S. persons living abroad, with the other 78.3 percent filed by U.S. persons living in the United States. Table 1 shows that a higher proportion of Form 8938 filings from U.S. persons living abroad for tax year 2016 were filed on paper (43.3 percent) than Form 8938 filings from U.S. persons living in the United States during the same period (14.7 percent). We extracted these data from IRTF, which IRS officials said is the authoritative source for filers of Form 8938. However, we could not report complete information on foreign financial assets reported by Form 8938 filers because such data are incomplete; as noted above, the IRS databases we used to extract Form 8938 data—IRTF and MTRDB—do not include asset information reported on paper filings of Forms 8938.

Tens of Thousands of Forms 8938 May Have Been Filed Unnecessarily in Tax Year 2016

Of the approximately 404,800 Forms 8938 filed by individuals for tax year 2016—the most recent data available—we could access information on residency of filers and reported foreign financial assets from about 277,600 Forms 8938 that did not indicate that foreign financial assets and values were declared on other forms besides the Form 8938. Of this subset, more than one quarter—or about 73,500—reported foreign financial assets in amounts that indicate the Form 8938 may have been filed unnecessarily, since they reported specified foreign financial assets with aggregate values at or below reporting thresholds as of the last day of the tax year. Based on available Form 8938 data from tax year 2016, table 2 shows that about 61,900 filings from U.S. persons living in the United States and about 11,600 filings from U.S. persons living abroad during the same tax year reported specified foreign financial assets with aggregate values at or below end-of-tax-year thresholds. These totals likely understate the total number of Forms 8938 that U.S. persons may have filed unnecessarily in tax year 2016; due to data limitations, these totals exclude Forms 8938 without asset information stored in IRS's databases, including most Forms 8938 filed on paper and Forms 8938 where filers identified that they declared foreign financial assets on other forms besides the Form 8938. There is no clear explanation as to why some U.S. persons may have filed Forms 8938 unnecessarily. However, we identified a number of potential reasons from focus groups and other interviews with stakeholder groups. In focus groups we conducted, participants expressed confusion about IRS's instructions for completing the Form 8938 and information provided on IRS's website. In the instructions for completing Form 8938, IRS described the specific types of foreign financial assets that are to be reported on Form 8938, and the asset value thresholds that must be met for required reporting, depending on the location of residence and filing status of the taxpayer. IRS also posted responses to frequently asked questions on meeting FATCA reporting requirements on its website, and established a separate page on its website comparing foreign financial assets that must be reported on Form 8938 and/or FBAR. Nonetheless, focus group participants reported confusion on whether and how to report investment and retirement accounts and compulsory savings plans managed by their country of residence.
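The counts of potentially unnecessary filings above rest on a simple end-of-year threshold test. A minimal sketch of that test is shown below; the dollar amounts are the end-of-tax-year thresholds published in the IRS instructions for Form 8938 (which also set higher "any time during the year" thresholds not modeled here), and the category labels and function names are illustrative.

```python
# End-of-tax-year reporting thresholds from the IRS instructions for Form 8938
# (shown for illustration; the instructions also set higher "any time during
# the year" thresholds that this sketch does not model).
END_OF_YEAR_THRESHOLDS = {
    ("single_or_separate", "united_states"): 50_000,
    ("married_filing_jointly", "united_states"): 100_000,
    ("single_or_separate", "abroad"): 200_000,
    ("married_filing_jointly", "abroad"): 400_000,
}

def form_8938_required(filing_status: str, residence: str,
                       aggregate_value_end_of_year: float) -> bool:
    """Return True if the end-of-year aggregate value exceeds the filer's threshold."""
    threshold = END_OF_YEAR_THRESHOLDS[(filing_status, residence)]
    return aggregate_value_end_of_year > threshold

# Example: a married couple filing jointly from abroad and reporting $150,000
# in specified foreign financial assets at year end falls below the $400,000
# end-of-year threshold, the kind of filing the analysis above flags as
# potentially unnecessary (absent assets exceeding the "any time" threshold).
print(form_8938_required("married_filing_jointly", "abroad", 150_000))  # False
```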
In a meeting we convened with an organization representing tax attorneys, they told us taxpayers are unsure about what account values to report on the Form 8938. Tax practitioners participating in another focus group added that they filed Forms 8938 regardless of the aggregate value of the assets because it was too cumbersome for them to identify whether the assets exceeded reporting thresholds as of the end of the year or at any time during the year. IRS officials also cited a number of possible reasons why U.S. persons may be filing Forms 8938 unnecessarily. For example, it may be easier for U.S. persons to report all specified foreign financial assets they hold on the Form 8938, rather than determine whether the value of such assets met applicable thresholds. IRS officials also said that U.S. persons might complete a Form 8938 for reasons besides meeting tax-filing requirements, such as providing evidence of assets for a loan application. IRS’s Taxpayer Bill of Rights states that taxpayers are entitled to clear explanations of the laws and IRS procedures in all tax forms, instructions, publications, notices, and correspondence. Furthermore, one of IRS’s strategic goals is to empower taxpayers by making it easier for them to understand and meet their filing, reporting, and payment obligations. IRS officials said they hosted sessions for tax practitioners at IRS Nationwide Tax Forums to address FATCA reporting requirements. However, they said IRS has not taken direct steps to identify or implement actions to further clarify instructions and related guidance on IRS’s website for completing Form 8938, such as information on which foreign financial assets to report, how to calculate asset values, and determine whether such values exceed required reporting thresholds. Additionally, IRS officials said they have not conducted additional outreach to educate taxpayers on required reporting thresholds under FATCA, or notify Form 8938 filers of instances where aggregate values of specified foreign financial assets reported on Forms 8938 were below reporting thresholds. IRS officials said they have not made efforts to determine whether there is a pattern of unnecessary Form 8938 filings that they could address. Rather, they said they believed resources should be devoted to FATCA implementation in general. However, as shown above, we have identified many tens of thousands of instances where U.S. persons may have filed Forms 8938 unnecessarily. Without assessing factors contributing to unnecessary Form 8938 reporting—and identifying or implementing actions to further clarify and educate taxpayers on FATCA reporting requirements—IRS is missing opportunities to help taxpayers understand their filing and reporting obligations and minimize their compliance burdens while properly meeting their tax obligations. Additionally, IRS may be missing opportunities to reduce costs in processing forms that taxpayers did not need to file. Different Laws Established Overlapping Foreign Financial Asset Reporting Requirements and Compounded Taxpayer Compliance Burden Because of overlapping statutory reporting requirements, IRS and FinCEN—both bureaus within Treasury—collect duplicative foreign financial asset data using two different forms (Form 8938 and FBAR). Our evaluation and management guide for fragmentation, overlap, and duplication states that overlap occurs when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. 
Table 3 shows that individuals required to report foreign financial assets on Form 8938, in many cases, also must meet FBAR reporting requirements. For example, specified individuals with foreign financial accounts exceeding $50,000 in aggregate value on the last day of the tax year must file both Form 8938 and FBAR if such values exceed the minimum Form 8938 thresholds; these thresholds depend on the filing status and address of specified individuals. Table 3 also shows that, in many cases, specified interests in foreign financial assets as defined in Form 8938 instructions are the same as the financial interest in such assets under FBAR. Further, as noted in table 3, the overlapping requirements lead to IRS and FinCEN collecting the same information on certain types of foreign financial assets. For example, both Form 8938 and FBAR collect information on foreign financial accounts for which a person has signature authority and a financial interest in the account. Form 8938 and FBAR also both collect duplicative information on several other types of foreign financial assets, such as foreign mutual funds and accounts at a foreign financial institution that include foreign stock or securities. Overlapping reporting requirements result in most Form 8938 filers also filing an FBAR during the same reporting year. Table 4 shows that close to 75 percent of Form 8938 filers in tax years 2015 and 2016 also filed an FBAR for the same year using the same TIN. Overlapping requirements to file both Form 8938 and FBAR increase the compliance burden on U.S. persons and add complexity that can create confusion, potentially resulting in inaccurate or unnecessary reporting. Focus group participants in all five countries included in our study affirmed that U.S. persons experienced confusion and frustration with having to report duplicative foreign financial asset information on both forms. Focus group participants and others we interviewed also noted that U.S. persons incurred additional financial costs to complete and file both Form 8938 and FBAR. For instance, one tax practitioner in Canada charged about $190 to report four to five accounts on an FBAR, in addition to about $540 for a basic tax return package. An accounting firm based in Japan typically charged between $300 and $800 to complete a Form 8938 and between $150 and $500 to complete an FBAR, depending on the number of accounts reported on the forms. Revisions to regulations implementing BSA proposed by FinCEN may also increase the number of duplicative foreign financial accounts reported on Form 8938 and FBAR. Currently, U.S. persons must report detailed information on all foreign financial accounts on Form 8938 if the value of such accounts and other specified foreign financial assets reaches applicable reporting thresholds. In contrast, U.S. persons are generally exempted from reporting detailed account information on FBARs if they report having signature or other authority over 25 or more foreign financial accounts. FinCEN's proposed revisions to BSA regulations would eliminate the exemption, requiring U.S. persons to report detailed information on all foreign financial accounts in which they have a financial interest if the value of such accounts exceeds FBAR's $10,000 reporting threshold. FinCEN estimated that it will receive account information for the first time on about 5.4 million foreign financial accounts if it finalizes the proposed revisions.
In turn, these revisions may lead to increased filings of duplicative asset data on both Form 8938 and FBAR, as U.S. persons may have to report detailed information on all foreign financial accounts using both forms. U.S. persons also face exposure to two different penalty regimes for any failures in accurately and completely reporting foreign financial asset information to two bureaus within Treasury—IRS and FinCEN. Officials from one organization representing U.S. persons living abroad said penalties due to failure to report certain accounts on one or both forms can be significant, even if little or no taxes are owed on those accounts. The duplicative reporting of foreign financial asset data on two different forms also creates additional costs to the government to process and store the same or similar information twice, and enforce reporting compliance with both requirements. In 2012, we recommended that Treasury direct the Office of Tax Policy, IRS, and FinCEN to determine whether the benefits of implementing a less duplicative reporting process exceed the costs and, if so, implement that process. Treasury did not implement our recommendation. While we continue to believe that the agencies should have considered whether less duplicative reporting could have been implemented, we do recognize that FATCA and FBAR were enacted under two different statutes to serve different purposes. As mentioned above, according to IRS, FATCA improves visibility into taxable income from foreign sources and enhances the agency’s ability to identify and pursue taxpayer noncompliance. In contrast, the information reported on the FBAR is collected to identify money laundering and other financial crimes; law enforcement agencies can use BSA information—including information collected from FBARs— to aid regulatory and criminal investigations. Additionally, data collected from Form 8938 and FBAR are used in different systems for use by different bureaus within Treasury. Fully addressing issues stemming from overlapping reporting requirements and the resulting collection of duplicative information—while at the same time ensuring that such information can be used for tax compliance and law enforcement purposes—can only be done by modifying the statutes governing the requirements. Further, IRS and FinCEN have varying degrees of access to foreign financial asset information collected from Form 8938 and FBAR to enforce tax compliance and financial crime laws. FATCA was enacted, in part, to improve visibility into taxable income from foreign sources. However, information provided on Forms 8938 is taxpayer return information protected by section 6103 of the Internal Revenue Code (IRC), which generally prohibits IRS from disclosing information provided on Forms 8938. IRS can share return information with other government agencies and others when it is allowed by statute. For example, under section 6103, IRS may disclose return information related to taxes imposed under the IRC—such as self-employment income tax, Social Security and Medicare tax and income tax withholding—to the Social Security Administration (SSA) as needed to carry out its responsibilities under the Social Security Act. However, according to FinCEN officials, FinCEN, law enforcement, and regulators often cannot access information submitted on Forms 8938. 
While section 6103 provides other exceptions to disclosure prohibitions—such as allowing IRS to share return information with law enforcement agencies for investigation and prosecution of nontax criminal laws—such information is generally only accessible pursuant to a court order. As noted above, information reported on the FBAR can be used by law enforcement agencies to aid regulatory and criminal investigations. This includes IRS, which has been delegated responsibility from FinCEN to enforce compliance with FBAR reporting requirements. IRS has used FBAR information in addressing taxpayer noncompliance with reporting and paying taxes on foreign assets and income. For example, taxpayers accepted into one of IRS’s offshore voluntary disclosure programs must have filed amended or late FBARs as part of their program applications. Investigators from IRS’s Criminal Investigation division generally reviewed applications to determine if the taxpayer has made a complete and truthful disclosure. IRS examiners can also use information from case files of program participants—such as information disclosed on FBARs— to identify new groups of taxpayers suspected of hiding income offshore. IRS can then choose to continue offering offshore programs and encourage these newly identified groups of taxpayers, as well as all taxpayers with unreported offshore accounts, to disclose their accounts voluntarily. In addition to eliminating overlapping reporting requirements, harmonizing statutes governing foreign financial asset reporting and use of information collected on such assets to make such statutes fully consistent could yield additional benefits to both IRS and the law enforcement community. Specifically, and as shown in appendix IV, there are specified foreign financial assets reported on Form 8938—such as foreign hedge funds and foreign private equity funds—that are not required to be reported on an FBAR. In contrast, there are other specified foreign financial assets reported on an FBAR—such as indirect interests in foreign financial assets through an entity—that are not required to be reported on Form 8938. Without congressional action to address overlap in foreign financial asset reporting requirements, IRS and FinCEN will neither be able to coordinate efforts to collect and use foreign financial asset information, nor reduce unnecessary burdens faced by U.S. persons in reporting duplicative foreign financial asset information. FFIs Face Overlapping Foreign Account Reporting Systems, but Alignment Would Entail Significant Changes in Law Two reporting systems for sharing foreign account information from foreign financial institutions are in operation globally—FATCA and the Common Reporting Standard (CRS). According to officials from banking associations and a consulting firm, FFIs in the countries where we examined FATCA implementation encountered challenges implementing and now maintaining two overlapping reporting systems for collecting and transmitting account information to other countries for a seemingly similar purpose, and collecting sufficient information from customers to ensure they meet the requirements of both systems. As noted above, we previously identified overlap as occurring when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. 
According to an IRS official, collecting account information under FATCA ushered in an era of greater transparency; as noted above, FATCA's passage sought to reduce tax evasion by creating greater transparency and accountability with respect to offshore accounts and other assets held by U.S. taxpayers. When FATCA was first introduced, there was no international platform to share account information between countries. The United States and other countries worked together to reach an agreement on the electronic formatting that would be used to share the information. Other countries' tax authorities became more interested in understanding the financial assets held abroad by their residents through an exchange of account information among themselves. In response, the Organisation for Economic Co-operation and Development (OECD) established the CRS reporting system for automatic exchange of information among member countries. According to the OECD, CRS was developed with a view to maximizing efficiency and reducing costs for financial institutions. Thus, CRS drew extensively on the intergovernmental approach used to implement FATCA reporting requirements for FFIs. Countries participating in CRS exchange account information with each other using OECD's Common Transmission System, which was modeled on FATCA's International Data Exchange System. Figure 2 depicts the flow of account information between countries under FATCA and CRS. CRS reporting requirements are in many ways similar to FATCA, including required reporting of the account holder's name and address, taxpayer identification number, account number, account balance, and income and sales proceeds. However, the requirements differ in significant ways. The biggest differences in requirements are driven by the nature of the U.S. tax system. The United States, like many countries, generally taxes citizens and resident aliens on their worldwide income regardless of where that income is earned. However, the United States differs from other countries because it generally subjects U.S. citizens who reside abroad to U.S. taxation in the same manner as U.S. residents. In contrast to U.S. policy, most other countries do not tax their citizens if they reside in a country other than their country of citizenship. Further, IGAs implementing FATCA require FFIs to report the foreign-held accounts of U.S. citizens and residents—including resident aliens—while CRS requires financial institutions in jurisdictions participating in CRS to report on almost all accounts held by nonresidents of the reporting country. Appendix V provides more detailed information on differences in reporting requirements, due diligence requirements, and definitions under FATCA and CRS. These differences in tax systems drive variations in due diligence procedures between FATCA and CRS. For example, FATCA aims to identify whether an account holder at a foreign institution is a U.S. person based on citizenship and tax residency information. In contrast, CRS aims to identify the tax residency of all account holders of a financial institution, and does not consider citizenship. Due to the multilateral nature of CRS, if an account holder is determined on the basis of the due diligence procedures to have residency in two or more countries, information would be exchanged with all jurisdictions in which the account holder is determined to be a resident for tax purposes. Under CRS rules, information about foreign accounts held by a U.S.
citizen with a tax residence abroad would not be reported to IRS, but rather to the jurisdiction in which they were a resident for tax purposes. Because the United States taxes the worldwide income of U.S. citizens, CRS rules would need to require identification of account holders' citizenship in member countries where they are residents if FATCA were to be aligned with CRS. Table 5 shows a comparison of individuals reported to IRS under FATCA and hypothetically under CRS. Treasury and IRS, as part of their 2017-2018 Priority Guidance Plan, are considering modifying certain elements of the existing FATCA regulations. For instance, Treasury and IRS are considering coordinating certain documentation requirements for participating FFIs with the requirements under IGAs. In December 2018, Treasury and IRS also proposed regulations intended, in part, to reduce the burdens of FATCA. The proposed regulations included a clarification of the definition of an investment entity that is similar to the guidance published by OECD interpreting the definition of a "managed by" investment entity under CRS. If the United States wanted to adopt CRS, some of the key differences between FATCA and CRS—as outlined above and in appendix V—could be aligned through regulation while others would require legislation. According to Treasury officials, to align FATCA and CRS, Congress would need to revise statutes to: provide for the collection of information for accounts that residents of partner jurisdictions maintain at U.S. financial institutions; require certain U.S. financial institutions to report the account balance (including, in the case of a cash value insurance contract or annuity contract, the cash value or surrender value) for all financial accounts maintained at a U.S. office and held by foreign residents; expand the current reporting required with respect to U.S. source income paid to accounts held by foreign residents to include similar non-U.S. source payments; require financial institutions to report the gross proceeds from the sale or redemption of property held in, or with respect to, a financial account; and require financial institutions to report information with respect to financial accounts held by certain passive entities with substantial foreign owners. While FATCA and CRS could be better aligned to some extent, anything short of the United States fully adopting CRS would not fully eliminate the burdens of overlapping requirements that FFIs must currently meet under the two different systems. While having the United States adopt the CRS reporting system in lieu of FATCA could benefit FFIs that may otherwise have to operate two overlapping reporting systems, it would result in no additional benefit to IRS in terms of obtaining information on U.S. accounts. It could also generate additional costs and reporting burdens for U.S. financial institutions that would need to implement systems to meet CRS requirements. The extent of these costs is unknown. Further, adoption of CRS would create the circumstance where foreign accounts held by U.S. citizens with a tax residence in a partner jurisdiction—including U.S. citizens who have a U.S. tax obligation—would not be reported to IRS. Agencies Coordinated Efforts to Address Challenges U.S. Persons Living Abroad Encountered from FATCA Implementation, but Opportunities Exist to Enhance Collaboration Some U.S.
Persons Living Abroad Encountered Reduced Access to Financial Services Due in Part to Costs and Risks FFIs Faced from Implementing FATCA Tax practitioners and others we interviewed said that U.S. persons living abroad—whether or not they are required to complete a Form 8938—risk being denied access to foreign financial services. U.S. persons and tax practitioners located in four of the five countries where we conducted focus groups and interviews reported that some U.S. persons and U.S.- owned businesses encountered difficulties opening bank accounts with FFIs after FATCA was enacted, with some FFIs closing U.S. persons’ existing accounts or denying them opportunities to open new accounts. One focus group participant, for example, said that the financial institution closed down all accounts including business checking, savings, and money market accounts after FATCA was implemented, requiring this individual to find a local resident who could co-sign on a new account. Costs FFIs would incur from implementing FATCA were cited as a significant factor in increasing barriers faced by U.S. persons in accessing foreign financial services. Officials from one organization representing tax attorneys said that as a result of costs associated with FATCA implementation, FFIs have found it less burdensome to close accounts of U.S. persons or require the accounts to be moved to a Securities and Exchange Commission registered affiliate than comply with FATCA. Tax practitioners and an official from a bankers association added that because FFIs may gain only small margins of profit from U.S. persons, FFIs may believe it is too troublesome to do business with them. Additionally, officials from a foreign government agency told us that because FATCA is expensive for FFIs to continue implementing, banks in their country might charge U.S. persons seeking access to financial services additional fees to account for FATCA implementation costs. Tax practitioners, consultants working with FFIs to implement FATCA reporting requirements, and the National Taxpayer Advocate told us that FFIs with smaller asset sizes such as smaller trust companies were more prone to decline business with U.S persons. Officials from an advocacy group representing U.S. persons living abroad told the National Taxpayer Advocate that some smaller banks declined U.S. persons as customers as a business decision, believing it would cost more for them to comply with FATCA reporting requirements than maintain U.S. expatriates’ accounts. Banking associations we interviewed said that decisions made by FFIs on whether to accept U.S. persons as customers also depends on the overall risks and benefits of taking on individual U.S. persons, shaped in part from risks in not meeting FATCA reporting requirements. Representatives of a banking association and an advocacy group told us that some FFIs decided to avoid doing business with U.S. persons after they became concerned about potential penalties for failure to comply—either willfully or in error—with FATCA reporting requirements. One banking association added that such errors could affect other aspects of FFIs’ relationships with the U.S. government, such as nonprosecution agreements made with the U.S. Department of Justice. Officials from one consulting firm that helped FFIs meet FATCA reporting requirements added that FFIs’ determination of risk depends on many layers, such as the value of clients’ assets or the country in which clients reside or possess citizenship. 
After FATCA’s implementation, according to officials from the consulting firm, FFIs decided to turn away U.S. persons in some cases because the benefits of doing business with U.S. persons were less than the potential risks. For example, if a U.S. person only maintained a payroll account, the FFI may determine it would not make enough money to account for risks in incorrectly identifying the status of the customer as a U.S. or non-U.S. person. However, focus group participants from two countries said that FFIs may agree to accept U.S persons as customers if they have higher account balances that offset risks from FATCA reporting requirements. One focus group participant, for instance, said banks in his country will do business with a U.S. person if he or she has more than $500,000 in assets. Additionally, U.S. persons and tax practitioners we interviewed said that other factors such as language barriers and U.S. regulations designed to prevent money laundering may also inhibit U.S. persons’ access to brokerage accounts while overseas. Form 8938 Reporting Requirements for Individuals with Signature Authority on and Financial Interest in Accounts May Have Contributed to Employment and Promotion Denials Overseas Focus group participants and others we interviewed said that Form 8938 reporting requirements contributed to denials of employment and promotion opportunities for U.S. persons living abroad. Treasury officials noted that requirements imposed by FATCA do not directly hinder U.S. persons from gaining employment or promotion opportunities overseas. However, focus group participants, a consulting firm, and a foreign government agency noted that foreign-owned companies and nonprofit organizations such as churches did not want to hire or promote U.S. persons because they wanted to avoid exposing information to the U.S. government on their organizations’ accounts and client trust accounts where the U.S. person would have signature authority. As noted above, a U.S. person is generally required to report on the Form 8938 foreign financial accounts for which the person has signature authority if he or she has a financial interest in the account. Focus group participants and others noted that such requirements have adversely affected the ability of U.S. persons to serve on a corporate board or in a nonprofit organization, or maintain business relationships. Treasury and Department of Commerce officials stationed in one country included in our review added that FATCA implementation has played a role in dissuading foreign-owned corporations in some Asian countries from considering U.S. persons for corporate leadership positions such as directorships. This is in part because FATCA has triggered additional paperwork burden and operating costs for onboarding U.S. employees since they have had to help them meet Form 8938 reporting requirements. Two advocacy groups representing U.S. persons living abroad added that it is also harder for U.S.-based companies to justify relocating U.S. persons overseas and paying for such relocations since they also have had to help their U.S. employees meet Form 8938 reporting requirements in addition to meeting other tax filing requirements. U.S. Persons Living Abroad Encounter Challenges Obtaining Social Security Numbers Necessary to Meet U.S. Tax Obligations and Obtain Financial Services U.S. embassy documents indicate there was increased demand for Social Security Numbers (SSN) since FATCA’s passage in 2010, driven in part by U.S. 
citizens applying for an SSN to gain access to foreign financial services or resolve outstanding U.S. tax obligations before completing renunciation. However, officials from two organizations representing Americans living abroad cited significant challenges faced by some U.S. persons living abroad in obtaining SSNs required to meet their U.S. tax obligations or obtain financial services. U.S. persons living abroad might not possess an SSN because their parents did not obtain one for them when they were minors. Often, this may have been because the parents left the United States when the child was young. State officials also said that U.S. citizens applying for U.S. passports while overseas frequently forget their SSNs or do not know if their parents ever applied for an SSN on their behalf. Officials from organizations representing U.S. persons living abroad added that without an SSN, these persons are unable to claim refunds or other tax benefits when filing their tax returns, or participate in IRS programs to voluntarily disclose previously unreported tax liabilities and assets. Additionally, some might be unable to gain or maintain access to financial accounts or other assets in their countries of residence without an SSN. According to these officials and tax practitioners we interviewed, U.S. persons living abroad face greater challenges in obtaining SSNs than those living in the United States. For instance, they faced difficulties obtaining documentation from the United States that the Social Security Administration (SSA) requires with SSN applications; traveling to Social Security offices and U.S. embassies or consulates to certify documents or submit applications in person; and receiving valid SSNs from SSA in a timely manner to file tax returns or participate in offshore disclosure programs. SSA officials also identified several challenges U.S. persons experience when applying for an SSN from abroad. For instance, SSA officials said that efforts to authenticate documents submitted with SSN applications can cause delays for U.S. persons living abroad in obtaining an SSN. Additionally, SSN applicants living abroad face significantly longer wait times than applicants living in the United States once their applications are processed. According to SSA officials, after an application is processed, it can take 3 to 6 months, depending on the country's mail service, for an individual to receive a Social Security card. This is significantly longer than the 2-week period it takes SSN applicants to receive a card after mailing in their applications from within the United States. FATCA Implementation Contributed to Increased Renunciations of U.S. Citizenship, but the Extent of the Effect Is Unclear According to Department of State (State) data, the annual number of approvals of requests for renunciation of U.S. citizenship increased nearly 178 percent during a 6-year period, from 1,601 in 2011—the year after FATCA was enacted—to 4,449 in 2016, the most recent year for which full data on renunciations were available. According to U.S. embassy documents and information provided by focus group participants and interviewees across all the countries we examined, FATCA was the reason or a contributing factor in some of these decisions and the resulting increase in total renunciations. Specific effects of FATCA implementation contributing to decisions to renounce U.S.
citizenship included reduced access to foreign financial services and employment or promotion opportunities in a foreign-owned company—as identified above from our document reviews, focus groups, and interviews—and burdens in meeting FATCA reporting requirements. However, the extent to which FATCA implementation contributed to increased renunciations is unclear. State officials said that data are unavailable to determine the extent to which these renunciation decisions were the direct result of FATCA because State has no legal obligation to collect information on the motivation behind renunciation of citizenship. Treasury, State, and SSA Initially Collaborated to Remedy FATCA-Related Issues for U.S. Persons Abroad, but Problems Persist without Cross-Agency Efforts to Address Them In response to concerns about the availability of foreign financial services, Treasury implemented regulations that allow certain low-risk local FFIs to be deemed compliant with FATCA, but only if the FFIs do not implement policies or practices that discriminate against opening or maintaining accounts for specified U.S. persons. Treasury and State also previously established joint strategies to address these challenges. For instance, Treasury and State developed guidance on FATCA that was posted on embassy websites to educate U.S. persons and others. Additionally, Treasury and State officials conducted outreach events and workshops through U.S. embassies and American chambers of commerce worldwide to provide information on FATCA and other tax filing requirements. According to State officials, the U.S. embassies in at least two countries—Switzerland and France—also worked with foreign officials and/or FFIs to increase access to financial services for U.S. citizens residing in those countries. For instance, Treasury and State officials reached agreements with FFIs in Switzerland to provide a wider range of financial services to U.S. persons. Similarly, in 2017, SSA and State implemented an interagency agreement to streamline processes for providing SSNs to U.S. persons living abroad following FATCA's implementation in 2010. SSA officials said they are also in discussions with State on improving SSA's website to include more transparent, specific information for overseas SSN applicants about SSA documentation requirements. Tax practitioners, advocacy groups, and Treasury officials we interviewed said FFIs have become more willing to accept U.S. persons as customers compared to when FATCA was enacted in 2010. However, U.S. persons living abroad continue to face issues gaining access to foreign financial services. For example, according to a September 2018 letter sent by the Chair of the Finance Committee of the Netherlands House of Representatives to a member of Congress, U.S. citizens born outside the United States who have never lived, studied, or worked in the United States are effectively being denied access to financial services in the Netherlands. Focus group participants added that some banks will reject U.S. clients or charge heavy fees for them to open an account.
Agencies have ongoing efforts to address FATCA-related issues, as listed below, but some are ad hoc, fragmented, or otherwise not part of a broader effort between Treasury and other agencies such as State or SSA to use ongoing collaborative mechanisms to monitor and share information on such issues and to jointly develop and implement steps to address them: Treasury officials said they are participating in discussions with FFIs to address residual issues with access to foreign financial services. However, they said they have not involved other agencies in these discussions. IRS officials, in response to concerns from the French government, said they are developing a program to help streamline foreign asset-related tax compliance requirements for a small group of U.S.-born citizens who have been French residents most of their lives without an SSN, and—according to State officials—did not wish to take the necessary steps to renounce their citizenship. However, no effort has been made to address these issues more broadly. State encouraged U.S. citizens to alert the nearest U.S. embassy to any practices they encounter with regard to the provision of financial services. State documents noted that some Americans have been turned away by banks or required to meet a higher deposit threshold in part because of FATCA reporting requirements. State documentation also noted that there have been cases of U.S. citizens with existing bank accounts who have been asked to close them. However, State documentation we reviewed does not highlight collaborative efforts currently underway with Treasury or other agencies to address banking access issues U.S. persons living abroad are presently encountering worldwide. As described above, SSA and State streamlined processes and policies for U.S. persons abroad seeking to obtain SSNs. However, SSA officials said they have not been involved in any ongoing efforts involving Treasury to identify systemic issues and related solutions involving SSNs for the purposes of tax compliance and citizenship renunciations. Treasury officials said they spoke with SSA officials about problems U.S. persons living abroad face in obtaining SSNs, but SSA believed that cycle times for processing SSN applications submitted by U.S. persons living abroad were not significantly greater than for applications submitted by U.S. persons living in the United States, although mailing times could vary significantly and take up to 3 to 6 months. We have previously identified key practices to enhance and sustain interagency collaboration, including defining and articulating a common outcome, establishing mutually reinforcing or joint strategies, and developing mechanisms to monitor, evaluate, and report on results. One goal in IRS's strategic plan is to collaborate with external partners proactively to improve tax administration, while objectives in SSA's strategic plan include improving service delivery and expanding service delivery options. Additionally, according to State's Bureau of Consular Affairs website, one of State's key priorities is to protect the interests of U.S. citizens overseas, such as by ensuring responsive and efficient provision of consular services. As noted above, there are a host of ongoing issues and challenges for U.S. persons living abroad from implementation of FATCA, such as loss of access to foreign financial services, denial of employment and promotion opportunities overseas, and difficulty obtaining SSNs from abroad.
However, Treasury currently lacks a collaborative mechanism to coordinate efforts with other agencies to address these issues, and Treasury officials said they do not plan to establish one. Without effective collaborative mechanisms to monitor and share information and implement cross-agency solutions, future efforts to address such issues will continue to be fragmented and less effective than they otherwise could be. Conclusions In enacting FATCA, Congress sought to reduce tax evasion by creating greater transparency and accountability over offshore assets held by U.S. taxpayers. Because of FATCA, IRS receives information on foreign financial assets from hundreds of thousands of filers annually. IRS could use this information to help ensure taxpayers holding offshore assets report and pay taxes owed on income generated from such assets. However, to take full advantage of the information, IRS must address key challenges. Specifically, Taxpayer Identification Numbers (TIN) reported by FFIs are often inaccurate or incomplete, which makes it difficult for IRS to match information reported by FFIs to individual taxpayers. As such, IRS must develop a plan to mitigate the risks that these data issues pose to agency efforts to identify and combat taxpayer noncompliance. Lack of consistent, complete, and readily available Form 8938 and related parent individual tax return data also affects IRS's compliance activities, making it more difficult for IRS business units to extract and analyze FATCA data to improve tax compliance efforts and reduce tax revenue loss from income generated from offshore assets. At the same time, IRS has stopped following the FATCA Compliance Roadmap it developed in 2016 because, according to IRS officials, IRS moved away from updating broad strategy documents to focus on individual compliance campaigns. However, in light of the challenges IRS continues to face in fully integrating FATCA information into its compliance programs, IRS will not maximize the use of such information unless it employs a comprehensive plan that enables it to better leverage individual compliance campaigns to improve taxpayer compliance. Our analysis of available data indicates that many of the Forms 8938 filed in tax year 2016 may have been filed unnecessarily. Factors that are contributing to this unnecessary reporting are unclear. While IRS has provided instructions and guidance on its website for completing Form 8938, focus group participants and tax practitioners reported confusion about whether and how to report investments in foreign accounts. Taking steps to identify and address factors contributing to unnecessary Form 8938 reporting would help reduce taxpayer burden and reduce processing costs for IRS. Reporting requirements for foreign financial assets under FATCA overlap with reporting requirements under FBAR. These overlapping requirements—implemented under two different statutes—have resulted in most taxpayers who file Forms 8938 also filing FBARs with FinCEN. Duplicative filings on foreign financial assets cause confusion, frustration, and compliance burdens for taxpayers. Duplicative filings also increase costs to the government to process and store the same or similar information. Modifying the statutes governing the requirements can fully address the issues outlined above, and can allow for the use of FATCA information for prevention and detection of financial crimes.
This is similar to other statutory allowances for IRS to disclose return information for other purposes, such as for determining Social Security income tax withholding. Lastly, FATCA has created challenges for some U.S. persons living abroad that go beyond increasing their tax compliance burdens. Some U.S. persons living abroad are still facing issues accessing financial services and employment and obtaining SSNs. Treasury, State, and SSA have taken some steps to address these issues both separately and in coordination with each other. However, Treasury, as the agency ultimately responsible for effective administration of FATCA, currently lacks a collaborative mechanism with State and SSA to address ongoing issues. Establishing a formal means to collaboratively address burdens faced by Americans abroad from FATCA can help agencies develop effective solutions to mitigate such burdens. Matter for Congressional Consideration We are making the following matter for congressional consideration: Congress should consider amending the Internal Revenue Code, Bank Secrecy Act of 1970, and other statutes, as needed, to address overlap in foreign financial asset reporting requirements for the purposes of tax compliance and the detection and prevention of financial crimes, such as by aligning the types of assets to be reported and asset reporting thresholds, and ensuring appropriate access to the reported information. Recommendations for Executive Action We are making the following four recommendations to IRS: The Commissioner of Internal Revenue should develop a plan to mitigate risks with compliance activities due to the lack of accurate and complete TINs of U.S. account holders collected from FFIs. (Recommendation 1) The Commissioner of Internal Revenue should ensure that appropriate business units conducting compliance enforcement and research have access to consistent and complete data collected from individuals' electronic and paper filings of Form 8938 and elements of parent individual tax returns. As part of this effort, the Commissioner should ensure that IRS provides clear guidance to the business units for accessing such data in IRS's Compliance Data Warehouse. (Recommendation 2) The Commissioner of Internal Revenue should employ a comprehensive plan for managing efforts to leverage FATCA data in agency compliance efforts. The plan should document and track activities over time to: ensure individuals and FFIs comply with FATCA reporting requirements; assess and mitigate data quality risks from FFIs; improve the quality, management, and accessibility of FATCA data for compliance, research, and other purposes; and establish, monitor, and evaluate compliance efforts involving FATCA data intended to improve voluntary compliance and address noncompliance with FATCA reporting requirements. (Recommendation 3) The Commissioner of Internal Revenue should assess factors contributing to unnecessary Form 8938 reporting and take steps, as appropriate, to address the issue. Depending on the results of the assessment, potential options may include: identifying and implementing steps to further clarify IRS Form 8938 instructions and related guidance on IRS's website about determining what foreign financial assets to report and how to calculate and report asset values subject to reporting thresholds; and conducting additional outreach to educate taxpayers on required reporting thresholds, including notifying taxpayers who may have unnecessarily filed an IRS Form 8938 to reduce such filings.
(Recommendation 4) We are also making the following recommendation to Treasury: The Secretary of the Treasury should lead efforts, in coordination with the Secretary of State and Commissioner of Social Security, to establish a formal means to collaboratively address ongoing issues— including issues accessing financial services and employment and obtaining SSNs—that U.S. persons living abroad encounter from implementation of FATCA reporting requirements. (Recommendation 5) We are also making the following recommendation to State: The Secretary of State, in coordination with the Secretary of the Treasury and Commissioner of Social Security, should establish a formal means to collaboratively address ongoing issues—including issues accessing financial services and employment and obtaining SSNs—that U.S. persons living abroad encounter from implementation of FATCA reporting requirements. (Recommendation 6) We are also making the following recommendation to SSA: The Commissioner of Social Security, in coordination with the Secretaries of State and Treasury, should establish a formal means to collaboratively address ongoing issues—including issues accessing financial services and employment and obtaining SSNs—that U.S. persons living abroad encounter from implementation of FATCA reporting requirements. (Recommendation 7) Agency Comments and our Evaluation We provided a draft of this report to the Secretaries of State and the Treasury, Commissioner of Internal Revenue, and Acting Commissioner of Social Security. IRS provided written comments that are summarized below and reprinted in appendix VI. IRS did not state whether it agreed or disagreed with our four recommendations but otherwise provided responses. Regarding our recommendation to develop a plan to mitigate risks with compliance activities due to the lack of accurate and complete TINs of U.S. account holders collected from FFIs (recommendation 1), IRS reiterated that it provided a transition period, through the end of 2019, for compliance with the TIN requirements for FFIs in countries with Model 1 IGAs with the United States. IRS also said that it continued to make progress on improving FATCA filing compliance, citing efforts such as initiating a campaign addressing FFIs that do not meet their compliance responsibilities. While these efforts may help IRS obtain more accurate and complete information from financial accounts, IRS did not specify how it will mitigate the ongoing hurdles it faces in matching accounts reported by FFIs without valid TINs to accounts reported by individual tax filers and ensure compliance. Regarding our recommendation that appropriate business units have access to consistent and complete data collected from Forms 8938 and tax returns filed by individuals (recommendation 2), IRS reiterated that RAAS has been working to obtain read-only access to the IPM database but that limited budgetary resources are delaying implementation. Enabling access to consistent and complete Form 8938 and tax return data would help IRS better target compliance initiatives and leverage limited available enforcement resources. While IRS continues to work on enabling access to IPM, it could still provide clear guidance to its business units for accessing Form 8938 and tax return data in IRS’s Compliance Data Warehouse, as we recommended. 
Regarding our recommendation to employ a comprehensive plan for managing efforts to leverage FATCA data in agency compliance efforts (recommendation 3), IRS said the resources that would be required to develop a comprehensive plan would be better spent on enforcement activities. While implementing enforcement activities could increase compliance with FATCA reporting requirements, it risks not maximizing the value of such efforts without a comprehensive plan to manage and address the myriad of challenges discussed in this report. Further, it is our belief that IRS’s failure to execute the FATCA roadmap is not justification for abandoning a strategic approach going forward. Regarding our recommendation to assess factors contributing to unnecessary Form 8938 reporting and take appropriate steps to address the issue (recommendation 4), IRS said it will continue to observe filings of Form 8938 and, to the extent that there are unnecessary filings, assess options to inform account holders to reduce reporting and filing burdens followed by appropriate steps to implement any selected options. Our analysis of available data indicates that many Forms 8938 may have been filed unnecessarily. Implementing our recommendation reduces the risk that taxpayers file—and IRS processes—forms unnecessarily. Treasury provided written comments but did not state whether it agreed or disagreed with our recommendation that it lead efforts, in coordination with State and SSA, to establish a formal means to collaboratively address ongoing issues that U.S. persons living abroad encounter from implementation of FATCA reporting requirements (recommendation 5). Treasury said it will work collaboratively with State and SSA to answer questions that Americans abroad have regarding their tax obligations and, where appropriate, to direct U.S. citizens to resources that will help them understand the procedures applied by SSA to apply for an SSN. However, Treasury said it is not the appropriate agency to lead coordination efforts involving foreign employment issues and issues regarding access to foreign financial services and obtaining SSNs. As we noted above, Treasury is ultimately responsible for effective administration of FATCA. As such, it is in a better position than State or SSA to adjust regulations and guidance implementing FATCA to address burdens FFIs and foreign employers face from FATCA implementation while ensuring tax compliance. Additionally, Treasury has an interest in helping U.S. persons receive valid SSNs from SSA in a timely manner to meet their tax obligations. Treasury’s written response is reprinted in appendix VII. State and SSA also provided written comments in which they concurred with our recommendations to establish a formal means to address collaboratively together with Treasury ongoing issues that U.S. persons living abroad encounter with FATCA (recommendations 6 and 7). State and SSA’s written comments are reprinted in appendices VIII and IX, respectively. Treasury, State, and SSA provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretaries of State and the Treasury, Commissioner of Internal Revenue, Acting Commissioner of Social Security, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or mctiguej@gao.gov. 
Contact points for our offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who contributed to this report are listed in appendix X. Appendix I: Objectives, Scope, and Methodology The objectives of this report are to (1) assess the Internal Revenue Service’s (IRS) efforts to use information collected under the Foreign Account Tax Compliance Act (FATCA) to improve taxpayer compliance; (2) examine available foreign financial asset reports submitted by U.S. persons, including submissions that were below required filing thresholds; (3) examine the extent to which the Department of the Treasury (Treasury) administers overlapping reporting requirements on foreign financial assets; (4) describe similarities and differences between FATCA and Common Reporting Standard (CRS) reporting requirements; and (5) examine the effects of FATCA implementation that are unique to U.S. persons living abroad. For our first objective, we reviewed Treasury Inspector General for Tax Administration reports and collected information from Treasury and IRS to summarize efforts to collect complete and valid Taxpayer Identification Numbers (TIN) from foreign financial institutions (FFI). We identified criteria from our prior work identifying key practices for risk management. The key practices are derived from the Software Engineering Institute’s Capability Maturity Model® Integration for Development and Office of Management and Budget guidance. We applied these criteria to assess steps IRS has taken to manage risks in not receiving complete and valid TIN information from FFIs. We also applied criteria from our prior work on use of documented frameworks to IRS documentation on FATCA compliance activities to determine the extent to which IRS implemented a comprehensive plan to maximize use of collected data to enforce compliance with FATCA. For our second objective, we identified total maximum account values reported by individual filers of Financial Crimes Enforcement Network (FinCEN) Form 114s (commonly known as the Report of Foreign Bank and Financial Accounts, or FBAR) in calendar years 2015 and 2016. See appendix III for more details on our methodology to evaluate these data. We also summarized the numbers of IRS Forms 8938, Statement of Specified Foreign Financial Assets (Form 8938) filed in tax year 2016, accounting for the data limitations described below. We also identified Forms 8938 filed in tax year 2016—the most recent year for which data were available—with available residency and asset information that reported specified foreign financial assets with aggregate values at or below end-of-year tax thresholds, which vary depending on the location of residence and filing status of such filers. For our third objective, we reviewed IRS and FinCEN documentation, and applied criteria from Fragmentation, Overlap, and Duplication: An Evaluation and Management Guide to identify the extent to which IRS and FinCEN were engaged in overlapping activities, and collecting duplicative information on foreign financial assets held by U.S. persons. We assessed the extent to which individual filers who submitted a Form 8938 in 2015 and 2016 also submitted an FBAR for the same year by determining the number and percentage of Forms 8938 with TINs that also match the TIN listed on the corresponding FBAR for the same year. 
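To illustrate this matching step, the following is a minimal sketch in Python; the TIN values are hypothetical, and the actual analysis was performed against extracts from IRS and FinCEN systems rather than simple lists.

# Hypothetical extracts of TINs reported on Forms 8938 and FBARs for one year.
form_8938_tins = ["111-11-1111", "222-22-2222", "333-33-3333", "444-44-4444"]
fbar_tins = {"222-22-2222", "333-33-3333", "555-55-5555"}

# Count Form 8938 filers whose TIN also appears on an FBAR for the same year.
matched = [tin for tin in form_8938_tins if tin in fbar_tins]
share = len(matched) / len(form_8938_tins)
print(f"{share:.0%} of Form 8938 filers also filed an FBAR for the same year")  # prints 50%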
For the three objectives described above, we assessed the reliability of data submitted on Forms 8938 filed by individuals for tax years 2015 and 2016, the most recent data available. These data were extracted from IRS’s Individual Return Transaction File (IRTF) and Modernized Tax Return Database (MTRDB) through IRS’s Compliance Data Warehouse (CDW). We also assessed the reliability of data from FBARs for calendar years 2015 and 2016 by (1) reviewing documentation about the data and the systems that produced them; (2) conducting electronic tests, such as identifying data with significant numbers of missing Form 8938 or FBAR records, or values of foreign financial assets reported outside an expected range; (3) tracing selections or random samples of data to source documents; and (4) interviewing IRS and FinCEN officials knowledgeable about the data. We also reviewed Form 8938 and relevant parent tax return data stored in IRS databases to determine whether IRS management is using quality information collected from Forms 8938 to achieve its objectives, as defined in our Standards for Internal Control in the Federal Government. We determined that data extracted from IRTF on characteristics of Form 8938 filers and from FBAR filings was sufficiently reliable for our purposes, subject to caveats identified in this report. However, we determined we could not obtain complete data on foreign financial assets reported on Forms 8938 filed on paper. For our fourth objective, we reviewed model international agreements and other documentation, and interviewed officials from Treasury, IRS, and the Organisation for Economic Co-operation and Development to compare and contrast FATCA and CRS reporting requirements. We also used the collected information to identify what changes, if any, the United States and other countries could implement to align FATCA and CRS reporting requirements. For our fifth objective, we collected documentation and conducted focus groups and semi-structured interviews with 21 U.S. persons living abroad that were subject to FATCA reporting requirements. We also conducted focus groups and interviews with tax practitioners, banking and CPA organizations, government agencies, advocacy groups representing Americans living abroad, and other organizations from the United States and five other countries (Canada, Japan, Singapore, Switzerland, and the United Kingdom). We selected these countries based on geography, relatively high numbers of U.S. expatriates and Form 8938 filers, tax information sharing agreements, and other tax treaties with the United States. The findings from the focus groups and interviews are not generalizable to other U.S. persons, tax practitioners or organizations, but were selected to represent the viewpoints of U.S. persons, FFIs, and host country tax authorities required to transmit information on foreign accounts and other specified foreign financial assets to IRS. We conducted a thematic analysis of the focus groups and interviews, and reviewed cables from U.S. embassies to identify the unique effects of FATCA implementation on U.S. persons living abroad. We collected documentation from and interviewed Treasury, IRS, Department of State, and Social Security Administration officials on steps to monitor and mitigate such effects. We also identified criteria from our prior work on key practices to enhance and sustain interagency collaboration and mechanisms to facilitate coordination. We applied the criteria to agencies’ collaborative efforts addressing issues U.S. 
persons living abroad faced from FATCA's implementation, and identified the extent to which agencies established effective collaborative mechanisms to identify, assess, and implement cross-agency solutions to such issues. We conducted this performance audit from August 2017 to April 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: IRS Data Management Systems Storing Data from IRS Forms 8938 and Related Elements of Individual Tax Returns The following IRS databases store data collected from individuals' electronic and paper filings of Forms 8938 and/or elements of individual parent tax returns—the filer's address and filing status—used to determine specified reporting thresholds for Form 8938 filers: Individual Master File (IMF), which serves as IRS's system for processing individual taxpayer account data. Using this system, accounts are updated, taxes are assessed, and refunds are generated as required during each tax-filing period. Individual Returns Transaction File (IRTF), which stores edited, transcribed, and error-corrected data from the Form 1040 series and related forms for the current processing year and two prior years. Modernized Tax Return Database (MTRDB), which serves as the official repository of all electronic returns processed through IRS's Modernized e-File system. Tax return data are stored immediately after returns are processed. International Compliance Management Model (ICMM)-FATCA International Returns (ICMM-FIR), which collects, parses, and stores data from incoming form reports, such as Forms 8938 and 8966, into the FATCA Database (FDB), which serves as the repository where ICMM-FIR stores data and from which downstream applications can pull data. Integrated Production Model (IPM), which is a downstream data repository that houses IMF data, information returns, and other data. According to IRS officials, data from IPM are consolidated and made available to a variety of downstream, security-certified systems for use in conducting analysis, case selection, and report preparation. Additionally, data from these and other IRS databases are copied periodically to IRS's Compliance Data Warehouse (CDW), which captures data from multiple production systems and organizes the data in a way that is conducive to analysis. Table 6 highlights several problems with the consistency and completeness of Form 8938 and relevant parent tax return data stored across the listed databases. Inconsistent and incomplete data on address and filing status of Form 8938 filers: As noted above, elements of parent tax returns—specifically the filer's country of residence and filing status—are used to determine specified reporting thresholds for Form 8938 filers. However, IRTF and MTRDB have inconsistent and incomplete data on addresses linked to Form 8938 filers, and report inconsistent numbers of Forms 8938 filed from a U.S. residence. For example, the variable identified as containing data on foreign countries of residence in IRTF shows approximately 8,100 foreign filers in tax years 2015 and 2016, whereas a similar variable in MTRDB shows approximately 89,000 foreign filers for those same years.
Additionally, FDB does not contain country codes from paper filings of Form 8938. ICMM-FIR stores information from some elements of parent tax returns—such as TINs and document locator numbers. According to IRS officials, however, ICMM-FIR lacks data on country codes and filing status of Form 8938 filers. IRS officials said that ICMM-FIR was not designed or intended to store data on Form 8938 filers; rather, it was designed to be a database for use in comparing Form 8938 and 8966 data. In general, IRS officials indicated that they would like to adjust the way ICMM-FIR stores data, but that would require modifying the way the database was established. Incomplete data on assets reported on Forms 8938: MTRDB contains detailed information on specified foreign financial assets submitted on electronic filings of Form 8938 and the country code from which the Form 8938 was filed. IRS officials said it is not designed to store information submitted on paper filings of Forms 8938 and parent tax returns. IRS officials said that while IMF processes information transcribed from individual income tax returns, there is no requirement to cross-reference information from the tax return with information submitted with an accompanying Form 8938. Additionally, while IRS officials told us that IRTF is the authoritative source for filers of Form 8938, it does not store account and other asset information submitted on Forms 8938. When asked whether there is any move to store account and other asset information collected from Forms 8938 into IRTF, IRS officials said that decisions on what returns or portions of returns are transcribed are subject to resource constraints and are prioritized from year to year. Appendix III: Methodology and Detailed Information on 2015 and 2016 Individual FBAR Filings Table 7 shows that more than 900,000 individuals filed Financial Crimes Enforcement Network (FinCEN) Form 114s (commonly known as the Report of Foreign Bank and Financial Accounts, or FBAR) in calendar years 2015 and 2016, and declared total maximum values of accounts ranging from about $1.5 trillion to more than $2 trillion each year. We are providing a range of estimates because we found a large number of filings made potentially in error. In some cases, for instance, FBAR filers reported more than $100 trillion in foreign financial accounts. We assume many of these filings were likely made in error, but have only limited means to determine which filings have errors and which filings have accurate information. Because we cannot independently verify the accuracy of all self-reported FBAR data, we decided to present a range of data with (1) a lower bound discarding all FBAR filings reporting total values of reported foreign financial accounts at or above $1 billion; and (2) an upper bound discarding all filings reporting total values of such accounts at or above $5 billion. Table 2 excludes amended and duplicated FBAR filings. This table also excludes FBAR filings that reported a financial interest in 25 or more financial accounts, but reported total maximum account values of $0 from parts II and III of the FBAR. Although we identified problems with the data, we determined they were reliable enough to provide an estimated range of asset values to report the scale of foreign financial accounts held by U.S. persons. Table 8 shows a detailed breakdown of 2015 and 2016 FBAR filings by residence and categories of total maximum account values reported on the FBARs. 
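To illustrate the range-estimation approach described above, the following is a minimal sketch in Python. It assumes FBAR filings have been extracted into simple records with a total maximum account value field; the field names and sample values are hypothetical and do not reflect FinCEN's actual data layout.

```python
# Illustrative sketch of the range-estimation approach: compute lower- and
# upper-bound totals by discarding filings whose reported total maximum
# account value is at or above a chosen cutoff. Data are hypothetical.

LOWER_BOUND_CUTOFF = 1_000_000_000   # discard filings at or above $1 billion
UPPER_BOUND_CUTOFF = 5_000_000_000   # discard filings at or above $5 billion

def bounded_total(filings, cutoff):
    """Sum reported account values, excluding filings at or above the cutoff."""
    return sum(f["total_max_account_value"]
               for f in filings
               if f["total_max_account_value"] < cutoff)

def fbar_value_range(filings):
    """Return (lower_bound, upper_bound) totals for self-reported FBAR values."""
    return (bounded_total(filings, LOWER_BOUND_CUTOFF),
            bounded_total(filings, UPPER_BOUND_CUTOFF))

# Example with hypothetical filings (values in dollars)
filings = [
    {"filer_id": "A", "total_max_account_value": 250_000},
    {"filer_id": "B", "total_max_account_value": 3_200_000_000},    # kept only in the upper bound
    {"filer_id": "C", "total_max_account_value": 120_000_000_000},  # likely erroneous; excluded from both
]
low, high = fbar_value_range(filings)
print(f"Estimated total: ${low:,} to ${high:,}")
```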
Appendix IV: Detailed Comparison of Individual Foreign Financial Asset Reporting Requirements 26 U.S.C. § 6038D; 26 C.F.R. §§ 1.6038D-1 to 1.6038D-8. 26 C.F.R. § 1.6038D-2. Filers in this category include those who identify as single, married filing separately, “head of household,” or “qualifying widow(er).” Includes maximum value of specified foreign financial assets (Form 8938) or maximum value of financial accounts maintained by a financial institution physically located in a foreign country (FBAR). Under FATCA, any income, gains, losses, deductions, credits, gross proceeds, or distributions from holding or disposing of the account are or would be required to be reported, included, or otherwise reflected on a person’s income tax return. Under FBAR reporting requirements, a person has signature or other authority if he or she has the authority (alone or in conjunction with another) to control the disposition of money, funds or other assets held in a financial account by direct communication (whether in writing or otherwise) to the person with whom the financial account is maintained. The account itself is subject to reporting, but the contents of the account do not have to be separately reported. Appendix V: Key Differences between FATCA Intergovernmental Agreements and CRS Appendix VI: Comments from the Internal Revenue Service Appendix VII: Comments from the Department of the Treasury Appendix VIII: Comments from the Department of State Appendix IX: Comments from the Social Security Administration Appendix X: GAO Contact and Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Brian James (Assistant Director), Mark Ryan (Analyst-in-Charge), Ariana Graham, George Guttman, Krista Loose, Daniel Mahoney, Cynthia Saunders, A.J. Stephens, and Elwood White made key contributions to this report. Michael John Bechetti, Ted Burik, and Jacqueline Chapin also provided assistance.
Why GAO Did This Study Concerns over efforts by U.S. taxpayers to use offshore accounts to hide income or evade taxes contributed to the passage of FATCA in 2010, which sought to create greater transparency and accountability over offshore assets held by U.S. taxpayers. House Report 114-624 included a provision for GAO to evaluate FATCA implementation and determine the effects of FATCA on U.S. citizens living abroad. GAO—among other things—(1) assessed IRS's efforts to use FATCA-related information to improve taxpayer compliance; (2) examined the extent to which Treasury administers overlapping reporting requirements on financial assets held overseas; and (3) examined the effects of FATCA implementation unique to U.S. persons living abroad. GAO reviewed applicable documentation; analyzed tax data; and interviewed officials from IRS, other federal agencies and organizations, selected tax practitioners, and more than 20 U.S. persons living overseas. What GAO Found Data quality and management issues have limited the effectiveness of the Internal Revenue Service's (IRS) efforts to improve taxpayer compliance using foreign financial asset data collected under the Foreign Account Tax Compliance Act (FATCA). Specifically, IRS has had difficulties matching the information reported by foreign financial institutions (FFI) with U.S. taxpayers' tax filings due to missing or inaccurate Taxpayer Identification Numbers provided by FFIs. Further, IRS lacks access to consistent and complete data on foreign financial assets and other data reported in tax filings by U.S. persons, in part, because some IRS databases do not store foreign asset data reported from paper filings. IRS has also stopped pursuing a comprehensive plan to leverage FATCA data to improve taxpayer compliance because, according to IRS officials, IRS moved away from updating broad strategy documents to focus on individual compliance campaigns. Ensuring access to consistent and complete data collected from U.S. persons—and employing a plan to leverage such data—would help IRS better leverage such campaigns and increase taxpayer compliance. Due to overlapping statutory reporting requirements, IRS and the Financial Crimes Enforcement Network (FinCEN)—both within the Department of the Treasury (Treasury)—collect duplicative foreign financial account and other asset information from U.S. persons. Consequently, in tax years 2015 and 2016, close to 75 percent of U.S. persons who reported information on foreign accounts and other assets on their tax returns also filed a separate form with FinCEN. The overlapping requirements increase the compliance burden on U.S. persons and add complexity that can create confusion, potentially resulting in inaccurate or unnecessary reporting. Modifying the statutes governing the requirements to allow for the sharing of FATCA information for the prevention and detection of financial crimes would eliminate the need for duplicative reporting. This is similar to other statutory allowances for IRS to disclose return information for other purposes, such as for determining Social Security income tax withholding. According to documents GAO reviewed, and focus groups and interviews GAO conducted, FFIs closed some U.S. persons' existing accounts or denied them opportunities to open new accounts after FATCA was enacted due to increased costs, and risks they pose under FATCA reporting requirements. According to Department of State (State) data, annual approvals of renunciations of U.S. 
citizenship increased from 1,601 to 4,449—or nearly 178 percent—from 2011 through 2016, attributable in part to the difficulties cited above. Treasury previously established joint strategies with State to address challenges U.S. persons faced in accessing foreign financial services. However, it lacks a collaborative mechanism to coordinate efforts with other agencies to address ongoing challenges in accessing such services or obtaining Social Security Numbers. Implementation of a formal means to collaboratively address burdens faced by Americans abroad from FATCA can help federal agencies develop more effective solutions to mitigate such burdens by monitoring and sharing information on such issues, and jointly developing and implementing steps to address them. What GAO Recommends GAO is making one matter for congressional consideration to address overlap in foreign asset reporting requirements. GAO is making seven recommendations to IRS and other agencies to enhance IRS's ability to leverage FATCA data to enforce compliance, address unnecessary reporting, and better collaborate to mitigate burdens on U.S. persons living abroad. State and Social Security Administration agreed with GAO's recommendations. Treasury and IRS neither agreed nor disagreed with GAO's recommendations.
Background Authorization of Secret Service Protection during Presidential Campaigns During the 2016 presidential campaign, a Secret Service detail was to be activated once a candidate for the Office of the President or Vice President requested protection, met the requirements for major candidate status (e.g., entered at least 10 state primaries), and received authorization by the Secretary of Homeland Security after consultation with an advisory committee. Under the direction of the Secretary of Homeland Security, the Secret Service is authorized to provide protection for spouses of major presidential and vice presidential candidates within 120 days of the general presidential election. There is no statute that addresses the protection of candidates’ children during the campaign. During the 2016 presidential campaign, the Secret Service provided protection for certain children of candidates at the request of the President. According to Secret Service officials, the Secret Service has historically provided protection for individuals not specifically identified in statute when directed by the President. In connection with the 2016 presidential campaign, the Secret Service provided protection for 12 individuals—4 presidential candidates, 2 vice presidential candidates, and 6 of the candidates’ family members. Figure 1 below shows the dates of protection through Election Day, November 8, 2016. Role of the Secret Service in Providing Presidential Campaign Protection Secret Service protective operations have evolved over the years. Originally, protection involved special agents serving as bodyguards. Protection now includes not only special agents in close proximity to the protected individual, but also advance security surveys of locations to be visited, coordination with state and local law enforcement entities, and analysis of present and future threats. Site surveys and threat assessments help the Secret Service determine the resources and assets needed to accompany each candidate and other individuals protected during the presidential campaign. These resources and assets, among other things, generally include: special agents who provide 24/7 protection while on detail; advance teams who provide site security; Explosive Ordnance Disposal and other technical support personnel (e.g., counter-surveillance and counter sniper personnel); magnetometer screening capabilities; and protective intelligence personnel who investigate threats. Travel Laws and Regulations Federal law provides for agencies to pay for or reimburse transportation and lodging expenses for their employees when they are traveling on official business. It further directs the General Services Administration (GSA) to issue regulations governing this travel. The FTR issued by GSA is applicable to Secret Service special agents’ transportation and use of hotel rooms when traveling during presidential campaigns to protect candidates and their family members. Transportation. According to the FTR, coach-class service is to be utilized unless an agency determines that an exception is warranted. For example, an exception may be granted to allow a special agent to use business class accommodations when the protected individual is doing the same and security demands warrant it. In the case of presidential campaign travel, the Secret Service may also accompany protected individuals aboard chartered aircraft. The Secret Service reimburses campaign committees for the seats occupied by its special agents. 
In 1977, we were asked to review the Secret Service’s reimbursement method, and in the resulting decision we stated that GAO did not object to the method used by the Secret Service as long as it was used consistently and the amount reimbursed did not exceed the first-class airfare. Lodging and other use of hotel rooms. The Secret Service utilizes hotel rooms for various purposes when protecting a candidate. The purpose of the room dictates the authority the Secret Service relies on to authorize payment and the related requirements. Hotel rooms used exclusively for special agent overnight sleeping facilities are governed by the FTR. The FTR allows agencies to pay for lodging based on per diem allowances set by GSA for the applicable location and date or the actual expenses of the travel. Actual expense allowance, which can be in excess of the per diem rate, is permitted for a variety of reasons, such as costs escalating due to special events (e.g., sporting events or disasters) or because of mission requirements. However, the maximum amount that an employee may be reimbursed under the actual expense allowance method is limited to 300 percent of the applicable per diem rate. The Secret Service also utilizes hotel rooms for operational purposes. For example, the Secret Service may use a room as a command center or reserve rooms adjacent to the protected individual to better secure the individual. In addition, to meet operational security demands, the Secret Service may require a certain number of special agents to stay in the particular hotel in which the protected individual is staying and within a certain proximity to the individual. The legal authorities the Secret Service relies on to pay for these kinds of rooms do not limit how much the agency can pay. Secret Service’s 2016 Presidential Campaign Travel Expenses Totaled Approximately $58 Million, Including $17.1 Million in Reimbursements to the Campaign Committees The Secret Service’s travel expenses for the 12 individuals protected during the 2016 presidential campaign totaled approximately $58 million, according to our analysis of Secret Service data. Travel expenses included airfare, vehicle rentals, hotel rooms, meals and incidental expenses, and baggage charges for special agents accompanying protected individuals. The $58 million in travel expenses was used by the Secret Service to support 3,236 travel stops made by the 12 protected individuals throughout the presidential campaign. The breakdown of these expenses and number of travel stops by campaign committee and protected individual are shown in figure 2 below. Of the $58 million the Secret Service incurred in 2016 presidential campaign travel expenses, $17.1 million was for reimbursements to the 4 campaign committees for 2,548 chartered aircraft flights. In the case of campaign travel, Secret Service special agents often fly with protected individuals on aircraft chartered by the campaign committees. The Secret Service reimburses the campaign committees for the number of seats occupied by special agents on board each charter flight. Figure 3 below shows the amount and number of flights for which the Secret Service reimbursed each of the campaign committees. 
Secret Service Did Not Always Follow its Travel Policies, Resulting in Overpayments of at Least an Estimated $3.9 Million Secret Service Generally Followed its Policies and Applicable Regulations for Lodging Payments during the 2016 Presidential Campaign for the Trips Reviewed We reviewed special agents’ lodging expenses while accompanying individuals protected during the 2016 presidential campaign on 40 randomly selected overnight trips. Our review found that (1) for most trips—30 of 40—the documented hotel expenses were within GSA per diem lodging rates, (2) the Secret Service generally followed its policy of requiring a lodging variance (i.e., waiver) for any hotel rooms exceeding the GSA lodging rate for that location, and (3) the Secret Service did not exceed the maximum amount allowed for lodging for these trips. The Secret Service required field offices responsible for booking hotel rooms to request and submit a waiver for any room that may exceed the designated GSA lodging rate by any amount. Our review of the receipts for hotel room expenses incurred by the Secret Service found that each trip involved multiple special agents staying in multiple rooms. Specifically, of the 40 trips we reviewed, 30 included hotel rooms that were within GSA lodging rates and 9 included hotel stays exceeding the GSA lodging rate. The Secret Service was unable to locate a hotel bill for 1 trip, and we therefore were unable to determine the rate paid for that trip. In accordance with Secret Service policy, special agents submitted waivers to the agency’s Logistics Resource Center (LRC) for all 9 hotel stays exceeding the GSA lodging rate. According to LRC officials, before approving a waiver, they generally wanted to know how many alternative hotels were contacted, whether any hotels were available at or below the GSA lodging rate, and whether staying at a hotel at or below the GSA lodging rate would incur additional expenses that would negate the savings. For example, if a rental vehicle would be required, use and parking of the vehicle may have resulted in total costs that exceeded the price of the more expensive hotel. According to LRC officials, in order to spend travel money judiciously, some special agents stayed at hotels near the protected individual’s hotel that had rates at or closer to the GSA lodging rate. Under the FTR’s actual expense reimbursement method, agencies may pay up to 300 percent of the applicable total GSA per diem allowance—the GSA-established rates for (1) lodging and (2) meals and incidental expenses—for an employee’s daily expenses. However, the agency is to subtract any allowance granted for meals and incidental expenses from the total, with the remainder being available for lodging. DHS and Secret Service policy, however, restricts the 300 percent actual expense allowance for lodging to 300 percent of the GSA lodging rate only. Consistent with DHS and Secret Service policy, none of the hotel rates paid exclusively for lodging in the 40 trips we reviewed exceeded 300 percent of the applicable GSA lodging rate. As a result, we determined that the Secret Service’s expenditures for lodging for the trips we reviewed were consistent with its policies and applicable regulations. 
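The lodging checks described in this section can be summarized in a short sketch. This is a simplified illustration, assuming the applicable GSA lodging and meals and incidental expense (M&IE) rates are already known for the location and date; the function names and example figures are hypothetical, not the agency's actual review tool.

```python
# Minimal sketch of the lodging checks described above. Rates and room
# costs are illustrative; real values come from GSA's per diem tables.

def ftr_actual_expense_lodging_cap(gsa_lodging_rate, gsa_mie_rate):
    """FTR method: up to 300% of the total per diem (lodging + M&IE),
    minus the M&IE allowance, is available for lodging."""
    return 3.0 * (gsa_lodging_rate + gsa_mie_rate) - gsa_mie_rate

def dhs_lodging_cap(gsa_lodging_rate):
    """DHS/Secret Service policy: lodging capped at 300% of the GSA lodging rate only."""
    return 3.0 * gsa_lodging_rate

def review_room(nightly_rate, gsa_lodging_rate):
    """Classify a room charge against the GSA rate and the DHS/Secret Service cap."""
    if nightly_rate <= gsa_lodging_rate:
        return "within GSA lodging rate"
    if nightly_rate <= dhs_lodging_cap(gsa_lodging_rate):
        return "exceeds GSA rate: waiver required, but within the 300 percent cap"
    return "exceeds 300 percent of the GSA lodging rate: not allowable under DHS policy"

# Example: GSA lodging rate of $150 and M&IE rate of $60 for the location/date
print(ftr_actual_expense_lodging_cap(150, 60))  # 570.0 available for lodging under the FTR
print(dhs_lodging_cap(150))                     # 450.0 cap under DHS/Secret Service policy
print(review_room(200, 150))                    # waiver required
```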
Secret Service Did Not Follow its Policies for Chartered Aircraft Flights and Did Not Thoroughly Review Invoices Prior to Payment Secret Service Overpaid the Campaign Committees an Estimated $3.9 Million or More for Chartered Aircraft Flights As discussed earlier, as part of their mission to protect presidential candidates, Secret Service special agents frequently accompany candidates on chartered aircraft provided by the presidential campaigns. The Secret Service is to later reimburse the candidate’s campaign committee for the cost of having special agents fly on those planes. The Secret Service’s policy for determining the amount to reimburse has been used since at least 1977. Under this policy, the Secret Service is to pay the lower of two applicable fares when reimbursing the campaign committees for special agents’ travel on chartered aircraft flights. Specifically, according to the policy, the Secret Service is to compare the lowest commercially available first-class airfare for a flight segment (one airport to another airport) to the pro rata fare of the charter (total charter cost divided by the number of passengers). The Secret Service is then to reimburse the campaign committee for the lower of the two fares. For example, a charter flight segment with a total cost of $60,000 and 40 passengers on board has a pro rata fare of $1,500 per seat. In July 2015, an attorney from the law firm representing the Hillary for America Committee sent Secret Service Financial Management Division (FMD) officials an e-mail stating that in their view, the reimbursements for special agents’ seats should be the pro rata fare based on an FEC regulation. In response, in August 2015, the Secret Service’s Office of the Chief Counsel agreed with the law firm’s interpretation. As a result, the Secret Service ceased to adhere to its longstanding reimbursement policy and agency officials were directed to use the pro rata calculation method for reimbursing all campaigns for agent airfares. Consequently, the Secret Service did not conduct the comparison between first-class and pro rata fares during the 2016 presidential campaign. Instead, the Secret Service solely paid the pro rata fare to the campaign committees. In March 2016, in response to a congressional inquiry about presidential campaign charter flight reimbursements, the Office of the Chief Counsel determined that its August 2015 decision was a mistake. Specifically, the Office recognized that the FEC regulation at issue did not apply to the Secret Service’s use of chartered aircraft. According to the Office of the Chief Counsel, it notified an official in the Office of Protective Operations, which collects submissions for reimbursements from the protected individual or the related campaign committee. However, the Office of the Chief Counsel did not notify LRC, which is to obtain the first-class airfares for comparison from the Secret Service’s travel agency. Further, the Office of the Chief Counsel was uncertain but believed FMD, which issues payments for the flights, was notified. FMD officials told us that they were not notified. As a result, the Secret Service continued to reimburse the campaign committees the pro rata fares for the remainder of the 2016 political campaign (i.e., through mid-November 2016). 
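The comparison called for under the longstanding policy can be sketched as follows. This is a minimal illustration, not the Secret Service's actual system; the inputs are hypothetical, and, consistent with the discussion later in this report, the first-class fare used in the comparison should include taxes.

```python
# Sketch of the longstanding reimbursement policy described above:
# reimburse the committee the lower of (a) the pro rata fare (total
# charter cost divided by total passengers) and (b) the lowest
# commercially available first-class airfare for the flight segment.

def pro_rata_fare(total_charter_cost, total_passengers):
    """Per-seat share of the charter cost for one flight segment."""
    return total_charter_cost / total_passengers

def segment_reimbursement(total_charter_cost, total_passengers,
                          lowest_first_class_fare, agents_on_board):
    """Amount owed to the committee for Secret Service seats on one segment.
    The first-class fare passed in should include taxes."""
    per_seat = min(pro_rata_fare(total_charter_cost, total_passengers),
                   lowest_first_class_fare)
    return per_seat * agents_on_board

# Example: a $60,000 segment with 40 passengers gives a $1,500 pro rata
# fare; if the lowest first-class fare is $1,200, the Secret Service
# reimburses $1,200 for each of the 5 agent seats.
print(segment_reimbursement(60_000, 40, 1_200, 5))  # 6000.0
```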
Although the Office of the Chief Counsel was aware for eight months before the end of the 2016 presidential campaign that the pro rata fare should have been compared to the lowest available first-class airfare, it did not ensure that the agency reverted to its longstanding policy. During this 8-month period, the Secret Service accompanied protected individuals on 1,671 (66 percent) of the 2,548 total campaign-related flight segments. As a result of solely reimbursing the pro rata fare instead of reimbursing the lower of the pro rata fare and the lowest commercially available first-class airfare, we estimate based on our sample of 650 flight segments that the Secret Service overpaid the 4 campaign committees at least $3.9 million for special agents’ seats on chartered aircraft. Federal agencies are generally required to try to collect on debts—including overpayments—they determine are owed to them. A federal debt or claim is any amount of funds that has been determined by an appropriate official of the federal government to be owed to the United States. It includes, without limitation, overpayments. Under the federal debt collection authorities as provided in 31 U.S.C. chapter 37, federal agencies are required to try to collect on claims arising out of their activities. However, they have the authority to compromise (i.e., accept less than full value) claims, or suspend or end collection, such as when the cost of collecting the claim is likely to be more than the amount recovered. In response to our finding that the Secret Service had overpaid for travel on chartered aircraft, Secret Service officials told us in February 2018 that they planned to take action to determine the overpayment amounts and seek refunds from the campaign committees. In light of the problems we discuss in appendix II regarding information on aircraft flights provided by the campaign committees and available historical data on airfares, Secret Service officials told us they were attempting to calculate the overpayments and would weigh the feasibility and costs of collecting refunds. However, as of April 2018, the Secret Service lacked specific plans, timeframes, and milestones for calculating the amounts of overpayments to the campaign committees and making key decisions on how, and the extent to which, it will proceed with collections. Making such determinations can help ensure the Secret Service is complying with applicable federal law and recovering funds that could be used to support its protective operations or deposited into the general fund of the United States Treasury as appropriate. Secret Service Did Not Adhere to Its Directive on Policy Revisions According to Secret Service officials, the decision to change the reimbursement calculation method in August 2015 was inconsistent with the Secret Service’s directive on policy revisions. Specifically, the Secret Service’s directive on policy revisions states that the “responsible office”—FMD in this case—is accountable for ensuring policies are current and accurate. In addition, this office is to review, research, and revise the policy, if such a revision is deemed necessary. Further, all significantly affected offices and divisions of the Secret Service, including members of the Secret Service’s Executive Resources Board, are to be provided the opportunity to read and comment on the changes, among other required actions. See figure 4 for a summary of key steps in the Secret Service’s policy creation, revision, and issuance process. 
According to Secret Service officials, however, the process outlined in the directive on policy revisions was not followed in August 2015. As a result, the decision to change the reimbursement calculation method was not fully vetted or reviewed by all members of the Secret Service’s Executive Resources Board as would be required under the directive on policy revisions. According to agency officials and confirmed in communications we reviewed, the Office of the Chief Counsel misinterpreted the regulation and directed that the erroneous interpretation be followed. The official leading FMD at the time, who was in the role on a temporary basis, adhered to the Office of the Chief Counsel’s interpretation of the regulation because the matter was legal in nature. Agency officials added that the increased operational tempo (i.e., heavy workload) at the time may have resulted in a failure to adhere to the Secret Service’s directive on policy revisions. The directives control point plays an important role within the Secret Service’s policy creation and revision process. The directives control point is to help develop and implement policy that is clear, enforceable, and effective. In addition, the directives control point provides guidance for filing, structuring, and organizing policy instruments. Standards for Internal Control in the Federal Government states that management should design control activities to achieve objectives and respond to risks. Control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives to achieve the entity’s objectives and address related risks. Secret Service officials stated that the agency could better ensure that its existing directive for policy revisions is followed by requiring that its directives control point be notified of any legal advice or direction proposed by the Office of the Chief Counsel that could modify or amend agency policy. By requiring—in policy and practice—that the Secret Service’s directives control point be notified when the Office of the Chief Counsel provides advice to offices that is likely to result in policy changes, the Secret Service could better ensure that operational changes inconsistent with existing policy are not made without the full consideration of all affected parties. Moreover, it could reduce errors and the potential for unnecessary costs associated with decisions that do not go through the required review process. Secret Service Did Not Ensure the Accuracy of Charter Flight Invoices Prior to Reimbursing Campaign Committees Secret Service policy requires protected individuals—and by extension their campaign committees—seeking reimbursement for special agents on chartered aircraft flights to submit an invoice with the following information: (1) Name, address, and bank account information for the protected individual. (3) Date(s) of charter. (4) Itinerary by flight segment (the three letter airport code should be provided for the departure and arrival airports for each segment). (5) Total aircraft cost per flight segment. (6) Total number of passengers for each flight segment (to include seats occupied by the Secret Service). (7) Total number of seats occupied by the Secret Service for each flight segment. The policy also requires that if an invoice is incomplete or inaccurate, it be returned to the protected individual within seven days of receipt for completion or correction. 
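The completeness checks implied by this invoice policy could be automated along the following lines. This is a hedged sketch, assuming invoice data have been transcribed into simple records; the field names are illustrative, and a full review would also verify the reported amounts against the charter companies' own invoices, as discussed later in this report.

```python
# Sketch of the completeness checks implied by the invoice policy above.
# Field names are illustrative, not the Secret Service's actual schema.

REQUIRED_SEGMENT_FIELDS = (
    "date", "departure_airport", "arrival_airport",
    "total_aircraft_cost", "total_passengers", "secret_service_seats",
)

def segment_problems(segment, seen_segments):
    """Return a list of problems found for one invoiced flight segment."""
    problems = [f"missing {field}" for field in REQUIRED_SEGMENT_FIELDS
                if not segment.get(field)]
    for code_field in ("departure_airport", "arrival_airport"):
        code = segment.get(code_field, "")
        if code and len(code) != 3:
            problems.append(f"{code_field} is not a 3-letter airport code")
    key = (segment.get("date"), segment.get("departure_airport"),
           segment.get("arrival_airport"))
    if key in seen_segments:
        problems.append("possible double billing: segment already invoiced")
    seen_segments.add(key)
    return problems

def review_invoice(segments):
    """Flag an invoice for return if any segment is incomplete or duplicated."""
    seen = set()
    flagged = {}
    for i, seg in enumerate(segments):
        problems = segment_problems(seg, seen)
        if problems:
            flagged[i] = problems
    return flagged  # an empty dict means the invoice passes these checks
```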
We found that 20 of the 76 invoices submitted to the Secret Service during the 2016 presidential campaign had incomplete or inaccurate information, and therefore should have been returned to the protected individual or the related campaign committee. The 76 invoices included 2,548 flight segments. Information for 558 (22 percent) of the flight segments was incomplete or inaccurate. However, the Secret Service did not return any invoices to the four candidates or their campaign committees during the 2016 presidential campaign, according to Secret Service officials. Specifically, we found the following instances of incomplete and inaccurate information in the charter flight invoices provided by the campaign committees on behalf of protected individuals to the Secret Service: Airport Code: The Hillary for America Committee submitted two invoices containing two flight segments missing an airport code. The Carson America Committee submitted one invoice that did not clearly show the destination airport for seven flight segments and one invoice with three flight segments missing an airport code. The Donald J. Trump for President Committee submitted 12 invoices for then-candidate Trump with 336 flight segments missing an airport code. Only a city name with multiple possible airports was listed, leaving it unclear which airport was used. For example, in several instances “New York, NY” was listed, which could be LaGuardia Airport or JFK International Airport. Total Cost or Passengers: The Donald J. Trump for President Committee submitted 4 invoices for flights taken by Vice Presidential Candidate Mike Pence with 210 flight segments that did not include the total cost or the total number of passengers for each flight segment. The total cost and number of passengers are necessary to verify the pro rata cost of the flight segment. Double Billing: The Donald J. Trump for President Committee double-billed the Secret Service for three flight segments taken on March 1, 2016, resulting in a cumulative overpayment of approximately $21,000 by the Secret Service for these segments. Other Errors: The invoices for the Hillary for America Committee had 1 (less than 1 percent) of 1,317 flight segments with a mathematical error; the Donald J. Trump for President Committee had errors on 16 (2 percent) of 965 flight segments; and the Bernie 2016 Committee had errors on 29 (18 percent) of 159 flight segments. These 46 flight segments with mathematical errors resulted in a net Secret Service underpayment to the campaign committees of approximately $63,000. According to Secret Service officials, although these errors were made by the campaign committees, Secret Service officials failed to detect the errors. Per the Secret Service’s reimbursement policy, it is the responsibility of the special agents overseeing the protected individual’s travel to review the invoices to ensure that they include the required information and that the provided information is accurate. The policy further states that absent complete and accurate information, the invoices are to be rejected for correction prior to reimbursement. Based on our review of the invoices, the special agents verified the dates of the flights and number of special agents on board the flight segments included in the invoices, but did not, for example, reject invoices that did not contain the three letter airport code or total number of passengers. 
According to Secret Service officials, the incomplete invoices should have been rejected, but were not because of the operational tempo associated with the presidential campaign. As discussed earlier, operational tempo was also a rationale provided by Secret Service officials for why they did not adhere to the directive on policy revisions. Standards for Internal Control in the Federal Government states that management should design control activities to achieve objectives, such as compliance with policies. In addition, the standards suggest that agency management should evaluate excessive pressure on personnel and help personnel fulfill their assigned duties. To help ensure that the Secret Service is adhering to its travel policies, the Secret Service may need to assess its existing control activities and determine how they can be enhanced to address the fast- paced operational tempo of presidential campaigns. Further, according to FMD officials, when invoices marked certified reached FMD for payment, it was assumed by FMD that the invoices had been certified as complete and accurate, as indicated by the signature of a special agent or an authorized certifying officer. Secret Service policy does not assign responsibility for verifying the accuracy of the pro rata fare and checking that flight segments have not already been billed. Additionally, for three of the four campaign committees, the Secret Service had no assurance when paying the pro rata fare that it was being charged its share correctly since it did not receive copies of the charter companies’ invoices. Specifically, the Secret Service relied on invoices created by the campaign committees for reimbursement purposes without supporting receipts, invoices, or other documentation to verify the charges against. According to Secret Service officials, only the Hillary for America Committee forwarded copies of invoices from the charter companies it used, allowing the Secret Service to verify the accuracy of the amounts billed. The Secret Service policy on reimbursement of chartered aircraft flights does not require that copies of charter company invoices or receipts be forwarded by the protected individual or their campaign committee. Standards for Internal Control in the Federal Government states that management should design control activities to achieve objectives and respond to risks. Such activities include proper execution of transactions (e.g., assuring that only valid transactions are entered into) and controls over information processing (e.g., comparing charter flight invoices to the amounts billed to the Secret Service by the campaign committees). Secret Service officials agreed that the accuracy of flight segment details and costs should be verified prior to reimbursing for charter flights. In addition, they further agreed that responsibility for verifying the accuracy of the pro rata fare and checking that flight segments have not already been billed should be assigned. They also agreed that the Secret Service should require the charter companies’ invoices to verify that the campaign committees are correctly charging the Secret Service for its share of the total flight cost. Without updating its charter aircraft reimbursement policy, the Secret Service does not have reasonable assurance that correct payments will be made. 
These changes include: (1) assigning responsibility for verifying that all calculations done by the campaign committees on behalf of the protected individual are accurate, (2) requiring a secondary review process to confirm the accuracy of charter flight costs prior to making payment, and (3) requiring that copies of charter companies’ invoices be provided to ensure that the reported pro rata costs are accurate prior to reimbursement. In response to our finding, in February 2018 the Secret Service began drafting an initial version of proposed policy changes, consistent with its directive on revising policy. Specifically, Secret Service officials started initial policy research and began reviewing and drafting the policy, consistent with step two of their policy revision process (see figure 4). However, several additional steps remain to be completed before the planned changes are implemented. Until the Secret Service completes all the necessary steps to update its charter aircraft reimbursement policy, it remains at risk for making incorrect payments. Current Policy Does Not Ensure Correct Reimbursements for Chartered Aircraft Flights Secret Service’s charter aircraft reimbursement policy does not specify whether its travel agency is to include taxes when identifying the lowest available first-class airfare. As discussed earlier, Secret Service is to pay the lower of two applicable fares (lowest available first-class fare, and the pro rata fare) when reimbursing the campaign committees for special agents’ travel on chartered aircraft flights. The Secret Service obtains the lowest available first-class airfare from its travel agency. LRC officials initially told us that the Secret Service’s travel agency had been including taxes in the lowest available first-class airfare. However, after inquiring with the travel agency, an LRC official learned that taxes had not been included. After further discussion with us, Secret Service officials told us that taxes should be included. Including taxes can make the difference between a first-class airfare being less or more expensive than the pro rata fare for a charter flight, therefore dictating which fare the Secret Service should reimburse the protected individual and campaign committee. For example, if a pro rata fare costs $1,000, and the lowest available first-class airfare (without taxes) is $950, then the lower fare is the first-class airfare. However, if the lowest available first-class airfare (with taxes) is $1,050, then the lower fare is the pro rata fare. The Secret Service’s policy on reimbursement of special agents’ seats on chartered aircraft also lacks important details to ensure that its travel agency can accurately identify the lowest available first-class airfares and make accurate reimbursements. The policy requires the protected individual to provide the Secret Service the 3-letter airport code for the departure and arrival airports for each flight segment for which it is seeking reimbursement. However, it does not specify that the 3-letter airport code needs to be the International Air Transport Association (IATA) code and not the Federal Aviation Administration (FAA) code. Airports in different countries can have the same IATA and FAA codes. Providing the FAA code can result in the Secret Service’s travel agency identifying the wrong airport when determining the lowest first-class airfare for a travel segment since the travel agency searches IATA codes. 
For example, when we asked the Secret Service’s travel agency to research the lowest available first-class airfare for campaign travel segments based on the reported destination codes in campaign committee invoices, the travel agency identified “SGJ” as Sagarai, Papua New Guinea, based on the IATA code. However, SGJ is the FAA code for the Northeast Florida Regional Airport. Similarly, another reported destination code in a campaign committee’s invoice, LOM, is the FAA code for Wings Field Airport, Pennsylvania, and is also the IATA code for Lagos de Moreno, Mexico. Since the travel agency searches on the basis of IATA codes, using FAA codes that are designated as foreign destinations in the IATA system can result in confusion for the travel agency when identifying the lowest available first-class airfare for a flight segment. Secret Service officials told us that they had not considered specifying whether the lowest first-class airfares should include taxes since the Secret Service had been using the same representative at its travel agency since 1986 to identify the lowest available first-class fare. They said they assumed that their representative knew the policy through practice. Also, Secret Service officials told us that they were not aware of the difference between IATA and FAA codes. Secret Service officials agreed that the reimbursement policy should be revised to make it clear that taxes are to be included when the Secret Service’s travel agency identifies the lowest available first-class airfare for determining the correct reimbursement amount, and that protected individuals are to provide the IATA code for airports. Standards for Internal Control in the Federal Government states that management should internally and externally communicate the necessary information to achieve the entity’s objectives and that effective information and communication are vital for an entity to achieve its objectives. The Secret Service could better ensure that its travel agency is able to identify the lowest commercially available first-class airfare for comparison to the pro rata fare by updating its charter aircraft reimbursement policy to specify that (1) taxes are to be included in the lowest commercially available first-class airfare, and (2) protected individuals’ invoices include the IATA airport codes for arrival and departure airports. In response to our finding, in February 2018 the Secret Service started to draft an initial version of proposed changes to its charter aircraft reimbursement policy, consistent with its directive on revising policy. Secret Service officials were in the process of conducting initial policy research, reviewing, and drafting the policy, consistent with step two of their policy revision process (see figure 4). However, the Secret Service needs to complete several additional steps before the planned changes go into effect. Until then, the Secret Service remains at risk of not correctly identifying the lowest applicable airfare. Conclusions The Secret Service plays a vital role in protecting our nation’s leaders, including presidential and vice presidential candidates, and their family members. During the 2016 presidential campaign, for the trips we reviewed, the Secret Service generally followed its internal policies and federal regulations governing payment for lodging costs incurred while protecting candidates. 
However, due to an erroneous legal decision in August 2015, the Secret Service did not follow its reimbursement policy for chartered aircraft during the campaign. By not adhering to its policy, the Secret Service overpaid campaign committees at least an estimated $3.9 million for charter flights. Until the Secret Service determines the amounts owed and how it will proceed with seeking repayment from the various campaign committees, these funds will not be recovered by the federal government. Further, in making the erroneous legal decision in August 2015, the Secret Service did not adhere to its directive on policy revisions. The decision to effectively change a policy was not fully vetted, reviewed, or communicated in accordance with the directive. This was largely due to the lack of a requirement to notify the directives control point when legal decisions are made that can result in policy changes. This could result in similar policy changes not being reviewed in the future. Finally, presidential campaigns create a fast-paced operational tempo at the Secret Service, and according to agency officials, this tempo contributed to their failure to comply with travel policies during the 2016 presidential campaign. Until the Secret Service evaluates the pressure caused by this tempo and implements appropriate mechanisms, it cannot ensure that agency officials responsible for travel reimbursements are complying with policy during presidential campaigns. In addition, the Secret Service’s charter aircraft reimbursement policy does not assign primary and secondary reviews of invoices provided by campaign committees. The policy also does not require that campaign committees and the agency’s travel agency provide all the information necessary to verify the accuracy of the invoices. Without these requirements, the Secret Service may continue to reimburse campaign committees incorrect amounts. Recommendations for Executive Action We are making the following five recommendations to the Director of the Secret Service. Consistent with the federal debt collection authorities as provided in 31 U.S.C. chapter 37, the Director should complete the process of calculating the amounts of its overpayments to the campaign committees for special agents’ seats on chartered aircraft during the 2016 presidential campaign, and determine how it should proceed with respect to collecting on identified debts. (Recommendation 1) To help ensure that the agency’s existing directive on policy revisions is followed, the Director should require in policy and practice that the directives control point be notified when the Office of the Chief Counsel provides advice to offices that is likely to result in policy changes. (Recommendation 2) The Director should assess its existing control activities and implement appropriate mechanisms to help ensure compliance with the agency’s travel cost policies during presidential campaigns. (Recommendation 3) The Director should update the charter aircraft reimbursement policy to assign the offices responsible for verifying that all calculations done by the campaign committees are accurate, and require a secondary review process prior to making payment. (Recommendation 4) The Director should update the charter aircraft reimbursement policy to specify that protected individuals are to provide IATA codes and copies of the charter companies’ invoices, and that the Secret Service’s travel agency is to provide lowest available first-class airfares that include taxes. 
(Recommendation 5) Agency Comments We provided a draft of this report for review and comment to DHS, GSA, and FEC. DHS provided written comments, which are reproduced in appendix III. In its comments, DHS concurred with our recommendations. DHS also stated it had taken or planned to take actions to address all five of our recommendations. In addition, after we provided this report to DHS for comment, Secret Service provided us documentation, including a revised travel policy, highlighting actions they have taken to address our recommendations. We will review the documentation and take steps to close the recommendations in the future, as appropriate. DHS and FEC provided technical comments, which we incorporated as appropriate. GSA and FEC did not provide written comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 1 day from the report date. At that time, we will send copies to the Secretary of Homeland Security, Administrator of the General Services Administration, and Staff Director of the Federal Election Commission. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope and Methodology This report addresses the U.S. Secret Service’s (Secret Service) 2016 presidential campaign travel expenses and payment of those expenses. Specifically, our objectives were to examine the following questions: (1) How much did the Secret Service incur in transportation, lodging, and other travel-related expenses when providing protection during the 2016 presidential campaign? (2) To what extent did the Secret Service reasonably assure that payments and reimbursements for travel-related protection expenses were made in accordance with applicable laws, regulations, and policies during the 2016 presidential campaign? To determine how much the Secret Service incurred in travel-related expenses, we obtained expense data from the Secret Service for each of the individuals protected for the 2016 presidential campaign. In total, the Secret Service protected 12 individuals associated with 4 campaign committees (see table 1 below). We analyzed the travel expenses for each of these protected individuals to determine the total travel expenses incurred by the Secret Service for each campaign committee and for the 2016 presidential campaign as a whole. Travel expenses include those captured by the Secret Service under object class 21—travel and transportation of persons. Object class 21 expenses include airfare, vehicle rentals, hotel rooms, meals and incidental expenses, and baggage charges for special agents accompanying protected individuals. Additionally, we determined the amount of the total travel-related expenses that were reimbursements to the campaign committees—all of which were for special agents’ seats on campaign chartered aircraft. 
To assess the reliability of the Secret Service’s expense data, we discussed with Secret Service officials how the data are entered and maintained in the Secret Service’s official financial system of record—Travel Manager, Oracle, PRISM, Sunflower system—which is used to track operating and travel expenses, among other things. We also reviewed the data for any obvious errors and anomalies. We compared the data to the invoices the Secret Service received from the campaign committees seeking reimbursements in order to verify the amounts the campaigns were reimbursed. Further, we compared the Secret Service’s reimbursement data to data the campaign committees reported to the Federal Election Commission (FEC) on payments they received from the Secret Service. As a result, we determined that the expense data were sufficiently reliable for reporting the Secret Service’s total travel expenses, expenses broken out by campaign committee and protected individual, and the portion of expenses that were reimbursements to the committees. To determine the number of travel stops made by the campaign committees for which the Secret Service provided protection, we used data from the Secret Service’s Agent Manpower Protection System. To assess the reliability of these data, we reviewed responses provided by the Secret Service on how the data are entered and maintained in the system. We further matched a sample of the travel stops data to hotel bills for those stops. As a result, we determined that the data on travel stops were sufficiently reliable for reporting the total number of travel stops made during the campaign and number of stops per campaign committee. To determine whether the campaign committees charged the Secret Service appropriate rates for the use of candidate-owned assets, we tried to identify whether any portion of the Secret Service’s reimbursements to the campaign committees was for the use of candidate-owned assets. Candidates flew on various types of charter aircraft, including jets and helicopters. Pursuant to law and FEC regulations, campaign committees must report and maintain certain information regarding the use of these aircraft. However, this information was not sufficient for us to determine whether aircraft for which the Secret Service provided reimbursement were owned by candidates. Further, the Secret Service does not collect information about a campaign’s use of candidate-owned assets, including aircraft. We contacted all four campaign committees using various methods, including email, phone, and in-person visits, to identify reimbursements received for candidate-owned assets, but none of the committees responded to our questions. As a result, we were unable to determine whether any portion of the Secret Service’s reimbursements was for the use of candidate-owned assets. To determine the extent to which the Secret Service’s payments and reimbursements for travel-related protection expenses were made in accordance with applicable laws, regulations, and policies, we analyzed the Secret Service’s lodging payments and charter aircraft reimbursements. 
Of the 962 overnight trips taken during the 2016 presidential campaign, we randomly selected 40—10 for each of the presidential candidates—to assess the Secret Service’s compliance with (1) its internal policy requiring a waiver when a hotel room exceeds the General Services Administration (GSA) per diem rate by any amount, and (2) provisions of the Federal Travel Regulation (FTR) that limit hotel spending to 300 percent of the GSA rate. To determine the GSA per diem lodging rate, we reviewed the GSA rates applicable on the date of the hotel stay and for that location. If the amount of the room exceeded the GSA rate, we identified whether the Secret Service had a waiver for the trip and also checked whether the amount paid exceeded the maximum amount available for lodging under Department of Homeland Security (DHS) and Secret Service policy and under the FTR. The time and effort associated with collecting trip bills from many field offices were primary considerations in determining the number of candidates’ trips to review. The Secret Service’s retention of hotel bills is decentralized; that is, the field office responsible for the geographic area where the protective operation occurs retains hard copies of the bills. Although the results of our analysis are not generalizable to all overnight trips taken during the 2016 presidential campaign, the analysis provided us insight into the Secret Service’s compliance with its lodging policy and the FTR. With regard to whether the Secret Service reimbursed the four campaign committees the correct amounts for special agent travel on campaign chartered aircraft, we compared the Secret Service’s payments to the committees to our estimate of what the Secret Service would have paid had its own charter aircraft reimbursement policy been followed. We determined the Secret Service did not use the correct reimbursement method throughout the 2016 presidential campaign. To determine whether the Secret Service followed its directive on the review and approval of policy changes, we compared the steps required to effect a change in policy to the steps taken by the Secret Service when its reimbursement method was altered. To estimate whether and, if so, by how much the Secret Service overpaid the campaign committees for special agents’ seats on chartered aircraft flights based on the reimbursement policy change mentioned above, we selected a generalizable stratified random sample of 650 flight segments from the 2,318 flight segments taken from November 1, 2015 through the end of the 2016 presidential campaign that had an identifiable airport. Appendix II provides further technical details on the statistical methods we used. To determine whether the Secret Service should try to collect on the overpayments to the campaign committees, we reviewed relevant federal authorities, including 31 U.S.C. chapter 37. To determine whether the Secret Service followed its policy with regard to accepting and reviewing chartered aircraft invoices, we compared all 76 invoices submitted by the four campaign committees to the agency’s policy requirements for invoice completeness and accuracy. Further, we used Standards for Internal Control in the Federal Government to assess whether the Secret Service’s requirements for charter aircraft invoices, and the review of the invoices, are specific enough to help ensure that the Secret Service is making correct reimbursements for charter aircraft flights. 
We conducted this performance audit from April 2017 to May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Analysis of a Stratified Random Sample of Flight Segments for GAO’s Overpayment Analysis To estimate whether and, if so, by how much the U.S. Secret Service (Secret Service) overpaid the campaign committees for special agents’ seats on chartered aircraft flights, we selected a generalizable stratified random sample of flight segments from campaign invoices sent to the Secret Service. Specifically, we selected 650 flight segments from the 2,318 flight segments taken from November 1, 2015 through the end of the 2016 presidential campaign that had an identifiable airport. We stratified the population of 2,318 flight segments into 11 mutually exclusive strata by campaign (Trump, Clinton, Sanders, and Carson) and three size categories based on the number of special agents that indicated being on board a flight. We chose to stratify based on the number of special agents on board to minimize the variance of the total cost within each stratum in an attempt to gain statistical efficiency in the sample design. The sample size of 650 flight segments was based primarily on available resources to have the Secret Service’s travel agency extract cost data from the airfare database. We allocated the sample of 650 flight segments to the 11 strata using proportional allocation within each campaign. We then adjusted the allocation in each stratum in an attempt to match a Neyman allocation method that would minimize the variance of an estimate of total cost. We randomly selected the allocated sample size of flight segments within each of the 11 strata. For each of the 650 flight segments selected in the sample, we obtained two measures of the lowest first-class airfare from the Secret Service’s travel agency, one with fees and taxes and one without (base fare). This was due to some confusion at the Secret Service about whether taxes and fees should be included when determining the lowest first-class airfare. We then compared these first-class airfares to the individual fare (i.e., the pro rata fare) paid by the Secret Service to the campaign committees. We classified a flight segment as overpaid if the lowest first-class airfare was less than the pro rata fare paid by the Secret Service. To determine the total amount of overpayment per flight segment, we multiplied the difference between the pro rata fare paid by the agency and the lowest first-class airfare by the number of Secret Service special agents on board the flight. We assigned flight segments that were classified as not overpaid a total overpaid value of zero. From our sample of 650 flight segments, we identified 295 flights for which the Secret Service overpaid a total of about $1.5 million. To estimate the proportion of overpaid flight segments and the total amount overpaid by the Secret Service for all 2,318 flight segments in the population from which we sampled, we weighted the sample results by the inverse of the probability of selection based on the stratified sample design. 
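The classification and weighting steps described above can be summarized in a short sketch. The stratum counts, fares, and agent counts below are hypothetical placeholders, not the actual sample; the logic simply classifies a sampled segment as overpaid when the lowest first-class fare is below the pro rata fare, computes the per-segment overpayment, and weights it by the inverse of its selection probability (the stratum's population count divided by its sample count) to estimate a population total.

```python
# Minimal sketch (not GAO's actual code) of the appendix II estimation logic.

# population (N) and sample (n) counts per stratum; values are hypothetical
strata = {"clinton_small": {"N": 400, "n": 110}, "trump_large": {"N": 250, "n": 70}}

sample_segments = [
    # stratum, pro rata fare paid, lowest first-class fare, agents on board
    {"stratum": "clinton_small", "pro_rata": 1800.0, "first_class": 1200.0, "agents": 5},
    {"stratum": "trump_large",   "pro_rata": 900.0,  "first_class": 1100.0, "agents": 8},
]

estimated_total_overpaid = 0.0
for seg in sample_segments:
    # a segment is overpaid only if the lowest first-class fare is below the pro rata fare
    per_agent_overpay = max(seg["pro_rata"] - seg["first_class"], 0.0)
    segment_overpaid = per_agent_overpay * seg["agents"]   # zero if not overpaid
    h = strata[seg["stratum"]]
    weight = h["N"] / h["n"]                               # inverse of selection probability
    estimated_total_overpaid += weight * segment_overpaid

print(f"Estimated total overpayment: ${estimated_total_overpaid:,.2f}")
```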
We used estimation methods appropriate for a stratified random sample design and generated 95 percent confidence intervals for each estimate. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. The weighted percentage estimates of the full population from our sample have margins of error at the 95 percent confidence level of plus or minus 4 percentage points or fewer and the estimate of the total amount overpaid by the Secret Service has a relative error of plus or minus 12 percent of the estimate or less. Based on these results, we estimate that total overpayments in the population of 2,318 flight segments from November 1, 2015 through the end of the campaign would be at least $3.9 million. We estimate that the Secret Service overpaid invoices for about 49 percent (+/- 4 percentage points) of the flight segments. The estimated $3.9 million represents the lower bound of the 95 percent confidence interval of the estimated total dollar amount overpaid based on our sample. The lower bound represents relative error of about 12 percent. Appendix III: Comments from the Department of Homeland Security Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Joseph P. Cruz (Assistant Director), Lisa Canini, Jeffrey Fiore, Chad Johnson, Janet Temko-Blinder, and Jonathan Tumin made key contributions to this report. Also contributing to this report were David Alexander, Jim Ashley, Dominick Dale, Eric Hauswirth, John Mingus, and Carol Petersen.
Why GAO Did This Study The Secret Service incurs millions of dollars in travel expenses to provide security during the fast-paced operational tempo of a presidential campaign. In connection with the 2016 presidential campaign, the Secret Service provided protection for four presidential candidates, two vice presidential candidates, and six of the candidates' family members. GAO was asked to review the Secret Service's travel-related expenses for the 2016 presidential campaign. This report examines (1) how much the Secret Service incurred in travel-related expenses, and (2) the extent to which travel-related payments and reimbursements were made in accordance with laws, regulations, and policies. GAO analyzed Secret Service data to determine the travel expenses incurred by the agency for the 2016 presidential campaign. GAO also randomly selected 40 overnight trips to assess the Secret Service's compliance with provisions of its lodging policies and the Federal Travel Regulation. GAO analyzed the Secret Service's payments to campaign committees to determine whether committees were reimbursed the correct amounts for charter flights. What GAO Found The U.S. Secret Service's (Secret Service) travel expenses during the 2016 presidential campaign totaled approximately $58 million. Of the $58 million, $17.1 million was for reimbursements to the four campaign committees for chartered aircraft flights. In the case of campaign travel, Secret Service special agents often fly with protected individuals on aircraft chartered by the campaign committees. The Secret Service reimburses the campaign committees for the number of seats occupied by special agents on board each charter flight. For the 40 overnight trips GAO reviewed, the Secret Service generally followed its policies and regulations for lodging payments. However, GAO found that the agency overpaid the campaign committees at least an estimated $3.9 million when reimbursing them for special agents' seats on charter flights. Since at least 1977, the Secret Service's policy has been to pay the lower of two fares when reimbursing campaign committees for special agents' travel on chartered aircraft flights. Specifically, the Secret Service is to pay the lower of the following two fares: the lowest commercially available first-class airfare, or the pro rata fare—the cost of the agent's seat on the charter flight calculated by taking the total cost of the charter divided by the number of passengers on board. However, during the 2016 presidential campaign, Secret Service officials misinterpreted a Federal Election Commission regulation, and as a result, did not conduct the comparison. Instead, the Secret Service solely paid the pro rata fare to the campaign committees. Eight months before the end of the 2016 presidential campaign, Secret Service officials determined the interpretation was erroneous, but did not ensure the agency reverted to its long standing policy. During these 8 months, 66 percent of all campaign-related flights with special agents on board were taken. Federal agencies are generally required to collect on debts that have been determined by an appropriate official of the federal government to be owed to the United States. Debts include overpayments. Pursuing debt collection, however, will require the Secret Service to calculate the specific amount it overpaid to the campaign committees and determine how to proceed with seeking repayment from the various committees, as appropriate. 
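For illustration, the long-standing lower-of-two-fares comparison described above might be expressed as follows. The charter cost, passenger count, and first-class fare used here are hypothetical, and the function is a sketch of the policy rule rather than the agency's actual reimbursement system.

```python
# Sketch of the reimbursement rule: pay the lower of (a) the lowest commercially
# available first-class airfare and (b) the pro rata fare, i.e., the total charter
# cost divided by the number of passengers on board, for each agent seat.

def reimbursement_per_agent(total_charter_cost: float,
                            passengers_on_board: int,
                            lowest_first_class_fare: float) -> float:
    pro_rata_fare = total_charter_cost / passengers_on_board
    return min(pro_rata_fare, lowest_first_class_fare)

# example: a $60,000 charter with 40 passengers yields a $1,500 pro rata fare;
# if the lowest first-class fare is $1,100, the agency should pay $1,100 per agent seat
fare = reimbursement_per_agent(60_000.0, 40, 1_100.0)
print(f"Reimbursable amount per agent seat: ${fare:,.2f}")
```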
What GAO Recommends GAO is making five recommendations, including that the Secret Service should (1) calculate its overpayments to the campaign committees for special agents' seats on chartered aircraft flights, and (2) determine how it should proceed with respect to collecting on identified debts. The Department of Homeland Security concurred with the recommendations and identified actions underway to address them.
Background Performance management systems can be powerful tools in helping an agency achieve its mission and ensuring employees at every level of the organization are working toward common ends. According to OPM regulations, performance management is a systematic process by which an agency involves its employees, both as individuals and members of a group, in improving organizational effectiveness in the accomplishment of agency mission and goals. An agency's performance management system defines policies and parameters established by an agency for the administration of performance appraisal programs. Under federal law and corresponding regulations, agencies are required to develop at least one employee performance appraisal system. OPM is required to review and approve an agency's performance appraisal system(s) to ensure it is consistent with the requirements of applicable law, regulation, and OPM policy, and defines the general policies and parameters the agency will use to rate employees. Once the appraisal system is approved, the agency establishes a performance appraisal program. The agency's performance appraisal program—which does not require OPM review or approval—defines the specific procedures, methods, and requirements for planning, monitoring, and rating employee performance. The program is tailored to the agency's needs. OPM policy identifies five phases of the performance management cycle: (1) planning work and setting expectations; (2) continually monitoring performance; (3) developing the capacity to perform; (4) rating periodically to summarize performance; and (5) rewarding good performance (see table 1). According to OPM, performance management is a continuous cycle in which an agency involves its employees, both as individuals and members of a group, in improving organizational effectiveness in accomplishing agency mission and goals (see figure 1). Each phase of the performance management cycle plays an important part in helping to provide structure and focus to an employee's roles and responsibilities within the organization. Within each phase of the cycle, employees are given the opportunity to provide input, ask questions, and request feedback from their supervisors on their performance. One of the tools agencies can use to determine the effectiveness of their performance management cycle is data from OPM's annual Federal Employee Viewpoint Survey (FEVS). FEVS captures federal employees' opinions about what matters most to them and how they feel about their jobs, their supervisors, and their agencies; agencies can use FEVS scores to identify challenges and improve guidance. FEVS measures employees' perceptions of whether, and to what extent, conditions characterizing successful organizations are present in their agencies. According to OPM, the federal workforce is the backbone of the government. Employee opinions shared through FEVS provide an essential catalyst to achieving effective government. Employees Responded Most Positively to Statements Related to Planning and Setting Expectations Phase; Least Positively to Those Related to Rewarding Performance From 2010 through 2017, surveyed employees generally demonstrated positive responses to FEVS statements related to four of OPM's five performance management phases: planning and setting expectations, monitoring performance, developing the capacity to perform, and rating performance (as shown in figure 2). 
Employees had the lowest levels of agreement with statements related to rewarding performance (or an estimated 39 percent positive response). Phase 1: Planning Work and Setting Expectations We have previously reported that an explicit alignment of daily activities with broader results is one of the defining features of effective performance management systems in high-performing organizations. These organizations use their performance management systems to improve performance by helping individuals see the connection between their daily activities and organizational goals, a line of sight, and encouraging individuals to focus on their roles and responsibilities to help achieve these goals. Such organizations continuously review and revise their performance management systems to support their strategic and performance goals, as well as their core values and transformational objectives. Based on surveyed employees’ responses, agencies were more successful at planning and setting expectations, which includes how an employee’s work relates to the agency’s goals and priorities, than at all other phases of performance management. The response to these statements highlights the role agencies have in providing information to employees about their responsibilities within the organization. Of the three selected FEVS statements for this phase, “I know how my work relates to the agency’s goals and priorities,” was the statement with the highest percent of employees who agreed or strongly agreed across all of our selected FEVS statements from 2010 to 2017 (see figure 3). Phase 2: Continually Monitoring Performance Performance management and feedback should be used to help employees improve so that they can do the work or—in the event they cannot do the work—so management can take appropriate action for unacceptable performance. The first opportunity a supervisor has to observe and correct poor performance is in day-to-day performance management activities. We have previously reported that, in general, agencies have three means to address employees’ poor performance, with dismissal as a last resort: (1) day-to-day performance management activities (which should be provided to all employees, regardless of their performance levels); (2) dismissal during probationary periods; and (3) use of formal procedures to dismiss employees. We have also reported that supervisors who take performance management seriously and have the necessary training and support can help poorly performing employees either improve or realize they are not a good fit for the position. However, some supervisors may lack experience and training in performance management, as well as the understanding of the procedures for taking corrective actions against poor performers. We previously recommended that OPM, in conjunction with the Chief Human Capital Officers (CHCO) Council, assess the adequacy of leadership training that agencies provide to supervisors to help ensure supervisors obtain the skills needed to effectively conduct performance management responsibilities. In response, OPM conducted a survey to assess the adequacy of leadership training that agencies provide to supervisors. Based on the survey results, OPM issued a memorandum in May 2018 recommending a number of actions agencies should take to improve the accessibility, adequacy, and effectiveness of supervisory training. 
Of the FEVS statements we analyzed, the statement, “In my work unit, steps are taken to deal with a poor performer who cannot or will not improve,” had the lowest percent positive agreement by surveyed employees each year from 2010 to 2017 government-wide. However, the other two statements selected for this phase were viewed much more positively by surveyed employees (see figure 4). When we further analyzed the responses to the statement on poor performance, employee responses differed in agreement based on the respondent’s supervisory level. On average, an estimated 25 percent of surveyed employees who identified themselves as nonsupervisors and team leaders agreed with this statement from 2010 through 2017, compared with an estimated average of 54 percent of surveyed employees who identified themselves as managers (see figure 5). Phase 3: Developing the Capacity to Perform According to OPM guidance, the capacity to perform means having the competencies, the resources, and the opportunities available to complete the job. We have previously reported that the essential aim of training and development programs is to assist an agency in achieving its mission and goals by improving individual and, ultimately, organizational performance. In addition, constrained budgets and the need to address gaps in critical federal skills and competencies make it essential that agencies identify the appropriate level of investment and establish priorities for employee training and development. This allows the most important training needs to be addressed first. However, fewer surveyed employees agreed with the statement, “My training needs are assessed,” than with the other statements in this phase (see figure 6). Phase 4: Rating Periodically to Summarize Performance Supervisors should establish performance standards that clearly express what is expected of the employee. An average estimated 82 percent of surveyed employees agreed or strongly agreed with the statement, “I am held accountable for achieving results,” from 2010 to 2017 (see figure 7). Overall, this statement had the second highest level of agreement of the 15 statements selected for our review. According to OPM’s website for performance management, while accountability means being held answerable for accomplishing a goal or assignment, the guidance cautions against using accountability only for punishing employees as fear and anxiety may permeate the work environment. This may prevent employees from trying new methods or proposing new ideas for fear of failure. According to OPM’s website for performance management, if approached correctly, accountability can produce positive, valuable results. Phase 5: Rewarding Good Performance According to OPM guidance, rewards are used often and well in an effective organization. We have previously reported that high-performing organizations seek to create effective incentive and reward systems that clearly link employee knowledge, skills, and contributions to organizational results. Rewarding means recognizing employees, individually and as members of groups, for their performance and acknowledging their contributions to the agency’s mission. According to OPM’s website for performance management, the types of awards include: cash; honorary recognition; informal recognition; or time off without charge to leave or loss of pay. From 2010 to 2017, an estimated 39 percent of surveyed employees consistently agreed when asked statements related to how their agency rewards performance (see figure 8). 
Of the five phases of performance management, the statements related to this phase consistently had the least positive agreement of surveyed employees. We have previously reported that effective performance management requires the organization’s leadership to make meaningful distinctions between acceptable and outstanding performance of individuals. Approximately one-third of surveyed employees agreed or strongly agreed with the statement, “In my work unit, differences in performance are recognized in a meaningful way.” Meaningful distinctions in performance ratings are the starting point for candid and constructive conversations between supervisors and staff. These distinctions also add transparency to the ratings and rewards process. In addition, such distinctions help employees better understand their relative contributions to organizational success, areas where they are doing well, and areas where improvements are needed. Employees in Supervisory Roles Responded More Positively to Statements Related to Rewarding Performance than Other Employees We also found that, across our selected statements, many of the largest gaps between supervisors and other employees were related to rewarding performance. Specifically, the responses to the statement, “Promotions in my work unit are based on merit,” varied the most based upon the supervisory status of the employee (see figure 9). Senior leaders agreed or strongly agreed with this statement at an average estimated 40 percentage points more than employees in a nonsupervisory role. We have previously reported that agencies must design and administer merit promotion programs to ensure a systematic means of selection for promotion based on merit. We have also previously reported that perceptions of favoritism, particularly when combined with unclear guidance, a lack of transparency, and limited feedback, negatively impact employee morale. Senior leaders and managers agreed or strongly agreed with the statement, “In my work unit, differences in performance are recognized in a meaningful way,” more frequently than surveyed employees who identified themselves as nonsupervisors (see figure 10). Those who identified themselves as team leaders and nonsupervisors agreed with the statement less frequently than all of the other categories of supervisory status. For example, in 2017, an estimated 69 percent of senior leaders agreed or strongly agreed with the statement, compared to an estimated 48 percent of supervisors and an estimated 33 percent of nonsupervisors and team leaders. Finally, senior leaders and managers agreed or strongly agreed with the statement, “Employees are recognized for providing high quality products and services,” more frequently than nonsupervisors (see figure 11). Selected Agencies Implement Some Similar Practices That May Help Improve Employee Performance Management An effective performance management system can be a strategic tool to improve employee engagement and achieve an agency’s desired results. We found that selected agencies demonstrated some similar practices. This may have been a contributing factor in having relatively high scores on FEVS performance management related statements. 
Specifically, employees at the Bureau of Labor Statistics (BLS), the Centers for Disease Control and Prevention (CDC), the Drug Enforcement Administration (DEA), and the Office of the Comptroller of the Currency (OCC) consistently agreed or strongly agreed to selected FEVS statements related to the five phases of OPM’s performance management cycle. While these agencies developed different performance management systems to reflect their specific structures and priorities, we found a number of practices common to all four agencies that are intended to help reinforce effective employee performance management and improve agency performance (see figure 12). All four agencies agreed that these practices helped contribute to their employees’ responses to the selected FEVS statements and improved performance management. Strong Organizational Culture and Dedication to Agency Mission We have previously reported that organizations with more constructive cultures generally perform better and are more effective. Within constructive cultures, employees exhibit a stronger commitment to mission focus, accountability, coordination, and adaptability. According to OPM FEVS guidance, climate assessments like FEVS are, consequently, important to organizational improvement largely because of the key role culture plays in directing organizational performance. Each of the agencies in our review cited a strong organizational culture that was based on and tied to their agency’s mission. Table 2 highlights examples from CDC and DEA. Data Driven Using FEVS and Other Survey Data Each of the four selected agencies in our review demonstrated a focus on analyzing FEVS data to identify areas of improvement and create action plans around the analysis. According to OPM guidance on FEVS, the results from the survey can be used by agency leaders to assist in identifying areas in need of improvement as well as highlight important agency successes. FEVS findings allow agencies to assess trends by comparing earlier results with the 2017 results to (1) compare agency results with the government-wide results, (2) identify current strengths and challenges, and (3) focus on short- and long-term action targets that will help agencies reach their strategic human resource management goals. The recommended approach to assessing and driving change in agencies utilizes FEVS results in conjunction with other resources, such as results from other internal surveys, administrative data, focus groups, exit interviews, and so on. We have previously reported that for agencies to attain the ultimate goal of improving organizational performance, they must take a holistic approach—analyzing data, developing and implementing strategies to improve engagement, and linking their efforts to improved performance. We have also previously reported that OPM stated that agencies are increasingly using FEVS as a management tool to help them understand issues at all levels of an organization, and to take specific action to improve employee engagement and performance. Further, OPM officials noted that if agencies, managers, and supervisors know that their employees will have the opportunity to provide feedback each year, they are more likely to take responsibility for influencing positive change. We found that all four of the selected agencies were building a culture of analyzing their FEVS results to identify areas of improvement, and develop action plans to achieve results, including improving performance management (see table 3). 
In addition, three of the four selected agencies also used other practices. These practices include using other available survey results to corroborate identified action plans and identify additional areas needing support to create a more complete picture of the employee perspective. We have previously reported that an agency's FEVS scores should be used as one of several data sources as leaders attempt to develop a comprehensive picture of engagement within an organization, and better target their engagement efforts, particularly in times of limited resources. The key is identifying what practices to implement and how to implement them. This can and should come from multiple sources. Three of the four case study agencies—BLS, CDC, and DEA—use supplemental survey data to help focus agency efforts to improve performance management. For example, DEA developed its own internal survey—Leadership Engagement Survey—in 2016 because it identified leadership as a key driver for organizational climate and employee engagement. According to agency officials, there was a strong internal push to use the survey results to identify areas of improvement. The fourth agency, OCC, had administered a separate internal engagement survey from 2013 to 2016. According to agency officials, however, they discontinued this effort to focus exclusively on FEVS as the primary survey data source, and to reduce the redundancy of two surveys. However, OCC emphasized the need to consider FEVS data as only one source of data, at a point in time, and to use a diversity of other data (quantitative and qualitative) to inform the survey results. Focus on Training As we have previously reported, agencies invest significant time and resources in recruiting potential employees, training them, and providing them with institutional knowledge that may not be easily or cost-effectively replaceable. Therefore, effective performance management—which consists of activities such as expectation-setting, coaching, and feedback—can help sustain and improve employee performance. We have also reported that good supervisors are key to the success of any performance management system. Supervisors provide the day-to-day performance management activities that can help sustain and improve the performance of more talented staff, and can help marginal performers to become better. However, agencies may not be providing supervisors with the appropriate training that prepares them for success, such as having difficult performance management conversations. Moreover, we have previously reported that mission-critical skills gaps across the federal government pose a high risk because they impede the government from cost-effectively serving the public and achieving results. Strategies to address these gaps include training and development activities focused on improving employees' skills needed for mission success. All four selected agencies had taken steps to identify appropriate training not only for supervisors, but also for all employees. For example, BLS conducted a general training needs assessment (TNA) for all employees in 2016. BLS officials stated that the purpose of the TNA was to give employees an avenue to express their interests in various kinds of training. Employee responses were used to inform elements of the BLS training plan for fiscal year 2017. As a result of the TNA, BLS is conducting a training evaluation of its vendor-provided writing courses. 
During this evaluation, BLS hopes to determine if the techniques and material taught in these courses have actually resulted in expected improvements in the writing of those employees who have taken the course as observed by their supervisors and managers. TNA results showed that managers also expressed a strong interest in additional training on employee leave, labor relations, and employee relations. BLS officials stated that courses on these topics were provided as part of the agency’s fiscal year 2017 training plan. As another example, CDC recently developed two onboarding checklists for new executives in 2017 for training purposes. The intent was to provide a comprehensive, consistent onboarding experience so that new executives are more engaged and knowledgeable. In addition, within the last year, the agency developed a mentoring circle for new supervisors that meets monthly. The purpose of the circle is to provide new supervisors with insider help from their peers, such as how to handle difficult situations. Supervisors are also provided assistance through the agency’s performance management appraisal working group. This group meets quarterly to discuss how to better assist supervisors and employees with performance management related questions. Improved Internal Communication from Agency Management We have previously reported that successful organizations empower and involve their employees to gain insights about operations from a frontline perspective, increase their understanding and acceptance of organizational goals and objectives, and improve motivation and morale. We have also previously reported that what matters most in improving engagement levels is valuing employees—that is, an authentic focus on their performance, career development, and inclusion and involvement in decisions affecting their work. Each of the selected agencies in our review stated that they had made efforts over the last few years to improve internal communication between management and employees, as well as increase the transparency of actions taken and decisions made by management. For instance, BLS hosts quarterly breakfast sessions with the BLS Commissioner in which employees have access to agency leadership where they can offer suggestions or feedback. BLS also provides agency information through its intranet website, which is updated almost daily. Examples include features such as the BLS Daily Report, What’s Up at BLS, and BLS tweets. Specifically, the What’s Up at BLS feature of the BLS intranet is an internal communications hub that includes four sections, including “Employee and Team Spotlight”—highlighting the work of employees and teams across the agency—and “Changing Lanes,” which features stories about employees who decided to switch their career paths by changing occupations or programs within BLS. According to OCC officials, the agency has increased the frequency of agency-wide communications and those from middle management that cascade priorities, decisions, and organizational changes to employees. OCC has also executed enterprise change management to manage the people side of change, including building awareness, knowledge, and ability through stakeholder analysis and communications planning. It also maintains an engagement portal for teams to document action plans related to employee engagement—of which there are more than 200 action items related to improved communications using a top-down and two-way approach. 
OPM Provides Performance Management Resources to Agencies, but Some Information is Not Easily Accessible or Routinely Shared OPM Provides Performance Management Resources for Agencies on Its Website but Some Information is Not Easily Accessible or Regularly Updated As the government's chief human resources agency and personnel policy leader, OPM's role in the federal government is to, among other things, design and promulgate regulations, policy, and guidance covering all aspects of the employee life cycle from hire to retire, including performance management. OPM provides such performance management guidance and resources to agencies on its website, as shown in figure 13, as well as in a new Performance Management Portal (portal) accessible through the Office of Management and Budget's (OMB) MAX Information System (MAX). Examples of guidance and resources include information for the five phases of the performance management cycle, descriptions of how to write performance standards, critical components of effective and timely feedback, answers to performance management frequently asked questions, and a list of the various award programs open to employees from all federal agencies. In addition, the Chief Human Capital Officers (CHCO) Council's website includes information provided by OPM on performance management as well as various OPM memorandums to CHCOs, human resource directors, and agency leaders. According to OPM officials, information on the performance management website is reserved for policy guidance based on current and applicable law and regulation. As such, only minor updates have been made to the website because the law and regulatory requirements for performance management have not recently changed. However, there is no date included on the website that indicates when it was last updated. OPM officials stated that the last update made to the website was in June 2016 when an external entity requested that a public service award be added to OPM's awards list page. However, OPM has issued training, guidance, and other performance management-related resources since the last website update in June 2016. Specifically, we examined more than 100 performance management-related online links on both OPM's and the CHCO Council's websites, and found that in some instances, the CHCO Council's website included more up-to-date information issued by OPM that was not found on OPM's performance management website. Some examples include: the release of OPM's web-based training course, "Basic Employee Relations: Your Accountability as a Supervisor or Manager," dated October 12, 2016; Management Tools for Maximizing Employee Performance, dated January 11, 2017; Performance Management Guidance and Successful Practices in Support of Agency Plans for Maximizing Employee Performance, dated July 17, 2017; the release of OPM's web-based training course, "Performance Management Plus—Engaging for Success," dated October 6, 2017; Federal Supervisory Training Program Survey Results, dated May 21, 2018; and Guidance for Implementation of Executive Order 13839 - Promoting Accountability and Streamlining Removal Procedures Consistent with Merit System Principles, dated July 5, 2018. According to OPM officials, the agency does not coordinate with the CHCO Council on its website postings. However, OPM officials stated that performance management guidance approved by OPM is provided to the CHCO Council. 
We did not find any reference to the CHCO Council's website using OPM's internal search engine with the term "performance management" (see figure 14). As a result, agency officials and federal employees who are looking for comprehensive information on performance management using OPM's website may be unable to easily find or access related performance management guidance or resources. A 2016 Office of Management and Budget memorandum on federal agency public websites and digital services states that federal agency public websites and digital services are the primary means by which the public receives information from and interacts with the federal government, provide government information or services to specific user groups across a variety of delivery platforms and devices, and support the proper performance of agency functions. The memorandum states that, "Federal websites and digital services should provide quality information that is readily accessible to all." In addition, federal internal control standards state that management should use quality information to achieve the entity's objective. Quality information should be appropriate, current, complete, accurate, accessible, and timely. However, OPM does not have a process for regularly updating its performance management website with new guidance and resources to ensure that the information is readily available. Agency employees, such as human capital specialists, who visit OPM's performance management website may be unable to find or access the most recent guidance and training available. OPM officials stated that, in addition to its website, the agency recently launched the Performance Management Portal (portal) in September 2017 on OMB MAX to communicate with agencies and provide information and resources related to non-SES performance management, as highlighted earlier. OPM officials said that the portal will be updated with information regarding announcements or updated guidance as needed, or when it is released and becomes available. Although not as comprehensive as the information included on OPM's performance management website, the portal included slides from OPM's semiannual facilitated performance management forums and updated information on awards guidance for non-SES employees for fiscal year 2017—neither of which was on OPM's website. Because OPM is the government's chief human resources agency, agencies may see it as their primary source of performance management guidance. If OPM established a process to ensure that information on the performance management website is regularly updated to include the most recent guidance, agencies would have access to the most current information. 
In 2017, OPM began holding annual steering committee meetings which allow interagency representatives to discuss the needs of the federal performance management community, to identify and/or request potential content for future forums, and to share promising practices and lessons learned regarding performance management, according to OPM officials. However, there is no formal process in place or mechanism for agencies to routinely and independently share their own experiences and lessons learned in implementing performance management efforts. For instance, the portal does not currently allow for agencies to post and share their own promising practices with each other in a centralized location. Instead, agencies must rely on OPM to post such information on the portal. OPM officials stated that, although permission to view the portal is granted to all users in the executive branch with a MAX account, OPM is the only agency that has permission to make edits to the portal. OPM officials said they are exploring options to allow for an interactive experience with other agencies. Federal internal control standards state that management should externally communicate the necessary quality information to achieve the entity’s objective. Additionally, our prior work on collaboration practices has shown that agencies can enhance and sustain collaborative efforts, and identify and address needs by leveraging resources, such as through sharing information. Establishing a mechanism to allow agencies to routinely share promising practices and lessons learned from their experiences could assist agencies that are undertaking or considering similar efforts and help inform agencies’ decision-making related to performance management. In addition to driving modernization, OPM identified innovation as one of its five values in its most recent strategic plan for fiscal years 2018 through 2022. Specifically, OPM stated that the agency “constantly seeks new ways to accomplish its work and generate extraordinary results. OPM is dedicated to delivering creative and forward-looking solutions and advancing the modernization of human resources management.” OPM officials stated that innovation was included as one of OPM’s values because the agency seeks to embrace forward-leaning policies and practices within all aspects of human capital management. While OPM officials told us that they maintain a constant scan of the environment to identify and follow promising practices—which could include innovative concepts—in the private sector and other sources to include performance management and performance management systems, they did not specifically identify which promising practices they incorporated into guidance or training. In addition, when we asked OPM to identify innovative performance management practices based on its own research, officials provided us with articles from leading experts that focused on eliminating performance ratings, using a growth mindset concept, and the SCARF model—status, certainty, autonomy, relatedness, and fairness—for collaborating with and influencing others. They also provided references and their notes on new performance management system programs at three corporations. OPM officials said they have not placed these articles, references, or notes on their performance management website or shared them with agencies, and have no plans to do so at this time. 
Instead, OPM officials stated they were monitoring the progress of these new practices to assess if the methods were effective in maximizing employee and organizational outcomes, in addition to stimulating collaboration and innovation. However, OPM provided no criteria in use to determine when the results would be considered effective or when they could be shared with agencies. Without OPM sharing their research results, agencies may be unaware of current practices in the performance management field because they may not be conducting their own research. Including innovation as an agency value is not sufficient to change an organization’s culture for it to become innovative; it is necessary to also introduce, for example, a strategy to identify and address emerging research and promising practices in performance management. Such a strategic approach could include criteria that identify what research results to share with agencies, when to share them, and by which process (for example, by website). It would also enable OPM to increase transparency and consistency in identifying emerging innovations. One of our case study agencies told us that in the absence of OPM providing research results, the agency used its own resources to research and identify leading practices in the private sector that could potentially apply to their own performance management system, such as focusing on ongoing performance conversations and recognition to increase engagement and performance, while reducing burdensome administrative requirements that do not add value. Officials at this agency stated that OPM’s guidance was not modernized to the extent that the human capital and performance management industry was changing. Without OPM taking the lead to share emerging and innovative research, agencies, and therefore their employees, may not benefit from the best information available. Although OPM identified innovation as one of its five values, we were unable to find any recent information on innovation for performance management in the government on OPM’s website. Specifically, we used “innovation performance management” as a search term on the website and found the “Promoting Innovation in Government” web page, which included archived material and was no longer being updated (see figure 15). As a result, agencies that use OPM’s website as a source of performance management guidance would be unable to find any current resources on performance management innovation. OPM officials explained that older material is archived based on the current leadership’s vision. The officials also confirmed that OPM did not have other active websites that contained innovative performance management practices gathered from external sources, which could be shared with other federal agencies. Implementing a strategic approach to sharing innovation in performance management would then allow OPM to provide relevant and updated information that agencies could use to modernize their performance management systems. Conclusions Managing employee performance has been a long-standing government- wide issue. As the current administration moves to reform the federal government to become leaner, accountable, and efficient, an effective performance management system is necessary to increase productivity, sustain transformation, and foster a culture of engagement that enables high performance. 
Federal agencies have a primary responsibility for managing their employees’ performance, but OPM maintains a key role in developing and overseeing human resources programs and policies that support the needs of federal agencies. As the government’s chief human resources agency and personnel policy leader, OPM is responsible for designing and promulgating regulations, policy, and guidance covering all aspects of the employee life cycle, including performance management. While OPM provides performance management resources on its website, some information is not regularly updated and can be challenging to find. Establishing a process to provide agencies with current, accurate, and easy access to guidance and resources would provide them with the most recent guidance and resources available. To be at the forefront of innovation, OPM must consistently challenge traditional performance management practices, and identify opportunities to present and promote new and creative solutions to agencies. Although OPM has identified potential innovative and promising practices for performance management through its own research, OPM has not actively shared these practices with agencies. In addition, agencies do not have access to a common forum by which they could routinely and independently share their own promising practices and lessons learned to avoid common pitfalls. In times of limited resources, developing a strategic approach to identify and share emerging research and innovations in performance management would help agencies inform and, as needed, reform their performance management approaches. As a result, federal employees may have more opportunities to maximize their performance. Recommendations for Executive Action We are making the following three recommendations to OPM. Specifically: 1. The Director of OPM, in consultation with the CHCO Council, should establish and implement a process for regularly updating the performance management website to include all available guidance and resources, making this information easily accessible, and providing links to other related websites. (Recommendation 1) 2. The Director of OPM, in consultation with the CHCO Council, should develop and implement a mechanism for agencies to routinely and independently share promising practices and lessons learned, such as through allowing agencies to post such information on OPM’s Performance Management portal. (Recommendation 2) 3. The Director of OPM, in consultation with the CHCO Council, should develop a strategic approach for identifying and sharing emerging research and innovations in performance management. (Recommendation 3) Agency Comments and Our Evaluation We provided a draft of this report to the Secretaries of the Departments of Health and Human Services (Centers for Disease Control and Prevention), Labor (Bureau of Labor Statistics), and Treasury (Office of the Comptroller of the Currency), the Acting Attorney General (Drug Enforcement Administration) and the Acting Director of OPM. In its written comments, reproduced in appendix II, OPM agreed with our findings and concurred with our recommendations. It added that it would establish and implement a process for regularly updating its performance management website, among other things. OPM and the Departments of Health and Human Services, Labor, and Treasury also provided technical comments that we incorporated, as appropriate. 
We are sending copies of this report to the appropriate congressional committees, the Secretaries of the Departments of Health and Human Services, Labor, and the Treasury, the Acting Attorney General, the Acting Director of OPM, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2757 or goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology This report (1) describes federal employee perceptions of performance management as measured by the results of selected statements from the Office of Personnel Management's (OPM) annual survey of federal employees, the Federal Employee Viewpoint Survey (FEVS); (2) identifies practices that selected agencies use to develop and implement strategies to improve performance management; and (3) evaluates OPM's guidance and resources to support agency efforts to improve performance management government-wide. FEVS provides a snapshot of employees' perceptions about how effectively agencies manage their workforce. Topic areas are employees' (1) work experience, (2) work unit, (3) agency, (4) supervisor, (5) leadership, (6) satisfaction, (7) work-life, and (8) demographics. OPM has administered FEVS annually since 2010. From 2002 to 2010, OPM administered the survey biennially. FEVS includes a core set of statements. Agencies have the option of adding questions to the surveys sent to their employees. FEVS is based on a sample of full- and part-time, permanent, non-seasonal employees of departments and large, small, and independent agencies. According to OPM, the sample is designed to ensure representative survey results would be reported by agency, subagency, and senior leader status as well as for the overall federal workforce. Once the necessary sample size is determined for an agency, if more than 75 percent of the workforce would be sampled, OPM conducts a full census of all permanent, nonseasonal employees. To describe government-wide trends in employee perceptions of performance management, we selected 15 FEVS statements that generally align with OPM's five phases of the performance management cycle: (1) planning and setting expectations; (2) continually monitoring performance; (3) developing the capacity to perform; (4) rating periodically to summarize performance; and (5) rewarding good performance (see table 4). We used indexes such as the Employee Engagement Index, the Human Capital Assessment and Accountability Framework Results-Oriented Performance Culture Index, and the Partnership for Public Service's Best Places to Work categories to help guide our selection of three FEVS statements per OPM performance management phase. We did not look at how surveyed employees responded to the statements when considering which ones to select. Upon selection of our statements, we consulted with our internal human capital (HC) experts as well as external HC experts at OPM and the Merit Systems Protection Board to determine the appropriateness of our FEVS statement selection and categorization. They generally agreed that these statements aligned with the phases. 
However, FEVS was not designed to measure performance management and, although these statements all provide useful insights, they do not necessarily represent all key aspects of performance management. In addition, we analyzed the 15 FEVS performance management-related questions by supervisory status for the 24 Chief Financial Officers Act (CFO Act) departments and agencies for the years 2010 through 2017. We conducted this analysis because our prior work had shown that supervisory status was the employee population variable that displayed the greatest degree of difference in responses across its categories of respondents. For this report, we did not analyze the extent of differences in responses in the performance management questions by other employee population groups, such as age or gender, because that was outside the scope of our engagement. We examined the results for the 15 FEVS questions by supervisory groups, and report the 4 that had the greatest degree of differences by supervisory levels. All of these 4 had differences of at least 28 percentage points between the most and least favorable categories of respondents, while the remaining 11 had differences in the range of 2 to 25 percentage points between the views of senior leaders and nonsupervisory employees. To identify trends, we calculated the average percent of employees who agreed or strongly agreed with the three statements comprising each phase, among those who answered all three statements. Survey respondents who did not answer one or more of the phase statements were not included. Because OPM followed a probability procedure based on random selections for most agencies, the FEVS sample is only one of a large number of samples that could have been drawn. Since each sample could have provided different estimates, we express our confidence in the precision of the FEVS statement estimates using the margin of error at the 95 percent level of confidence. This margin of error is the half-width of the 95 percent confidence interval for a FEVS estimate. A 95 percent confidence interval is the interval that would contain the actual population value for 95 percent of the samples that OPM could have drawn. To assess the reliability of the FEVS data, in addition to assessing the sampling error associated with the estimates, we examined descriptive summary statistics and the distribution of both the survey data and the human capital framework indexes, and assessed the extent of item-missing data. We also reviewed FEVS technical documentation. On the basis of these procedures, we believe the data were sufficiently reliable for use in the analysis presented in this report. To identify practices used by selected agencies to develop and implement strategies to improve performance management, we complemented our government-wide analysis with an additional analysis of agencies (those agencies and units within 1 of the 24 CFO Act departments). Specifically, we analyzed agency results for the same 15 statements in 2015 (the most recent data available at the time) to select a nongeneralizable sample of four agencies to obtain illustrative examples of how they approached performance management and their strategies to improve performance within their agencies. We calculated averages for the agencies based on their scores for our selected statements, and rank ordered them based on these averages. 
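The statement-level computations described in this appendix can be illustrated with a short sketch. The response records, statement identifiers, and agency scores below are made up, and FEVS survey weights are omitted for simplicity; the sketch shows how percent-positive responses could be averaged across a phase's three statements for respondents who answered all three, and how agency averages could be rank ordered.

```python
# Simplified, unweighted sketch of the phase-average and agency-ranking steps.
POSITIVE = {"agree", "strongly agree"}

def phase_percent_positive(respondents, statement_ids):
    """Average percent positive across a phase's statements, restricted to
    respondents who answered every statement in the phase."""
    complete = [r for r in respondents if all(s in r["answers"] for s in statement_ids)]
    if not complete:
        return None
    per_statement = []
    for s in statement_ids:
        positive = sum(1 for r in complete if r["answers"][s] in POSITIVE)
        per_statement.append(100.0 * positive / len(complete))
    return sum(per_statement) / len(per_statement)

# hypothetical response records; a real analysis would apply FEVS survey weights
respondents = [
    {"agency": "BLS", "supervisory": "nonsupervisor",
     "answers": {"q1": "agree", "q2": "strongly agree", "q3": "neither"}},
    {"agency": "BLS", "supervisory": "manager",
     "answers": {"q1": "agree", "q2": "agree", "q3": "agree"}},
]
print(phase_percent_positive(respondents, ["q1", "q2", "q3"]))

# rank-ordering agencies by their average score across selected statements (placeholder values)
agency_scores = {"BLS": 71.2, "CDC": 69.8, "DEA": 68.4, "OCC": 72.0}
ranked = sorted(agency_scores.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```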
We selected agencies that had the highest average scores for the performance management phases; among other attributes, these agencies had the highest levels of employee agreement with FEVS statements dealing with their performance management processes. In addition to the FEVS data, we used secondary factors such as the number of respondents, agency size, mission, and types of employees to identify the following agencies: (1) the Bureau of Labor Statistics, Department of Labor; (2) the Centers for Disease Control and Prevention, Department of Health and Human Services; (3) the Drug Enforcement Administration, Department of Justice; and (4) the Office of the Comptroller of the Currency, Department of the Treasury. We developed a set of standard questions about agency strategies to improve performance management and relevant successes, which we administered to human resources/human capital officials and other officials responsible for performance management at the agencies. We reviewed and analyzed the responses the agencies provided and identified and reported examples of practices, intended to improve performance management, that all four agencies described. We also asked agencies about the types of guidance and resources they obtained from OPM. The four common practices we identified do not represent the only practices these agencies employ to improve performance management at their agencies. In addition, the practices are not intended to be representative of all those employed by all other federal agencies. To evaluate the guidance and resources OPM provides to agencies to improve performance management government-wide, we reviewed both OPM's performance management website and the Chief Human Capital Officers (CHCO) Council's website to identify available guidance, resources, and tools. We compared these documents to OMB's memorandum on federal agency public websites, OPM's strategic plan for fiscal years 2018 through 2022, and federal internal control standards. Because we did not have access to the Performance Management Portal, which is hosted on OMB's MAX website, we observed it with an OPM official in July 2018. We also reviewed agency documentation and other OPM-referenced websites that contained performance management-related information. We used OPM's internal site search engines and search terms, such as "performance management" and "performance management innovation," to identify relevant guidance. During our review, we compared performance management guidance posted on the OPM and CHCO websites as well as the portal, and identified discrepancies between what we found on the respective websites. We discussed the discrepancies with OPM officials and included their responses within the report. To supplement the documentary evidence obtained, we also interviewed officials from OPM, the CHCO Council, and selected case study agencies to understand the extent to which OPM assists agencies with performance management. We conducted this performance audit from December 2016 to November 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the Office of Personnel Management

Appendix III: GAO Contacts and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Thomas Gilbert, Assistant Director; Dewi Djunaidy, Analyst-in-Charge; Jehan Chase; Martin DeAlteriis; Krista Loose; and Susan Sato made major contributions to this report. Also contributing to this report were Carl Barden; Won Lee; Robert Robinson; and Stewart Small.

Related GAO Products

Federal Employee Misconduct: Actions Needed to Ensure Agencies Have Tools to Effectively Address Misconduct. GAO-18-48. Washington, D.C.: July 16, 2018.

Federal Workforce: Distribution of Performance Ratings Across the Federal Government, 2013. GAO-16-520R. Washington, D.C.: May 9, 2016.

Federal Workforce: Additional Analysis and Sharing of Promising Practices Could Improve Employee Engagement and Performance. GAO-15-585. Washington, D.C.: July 14, 2015.

Federal Workforce: Improved Supervision and Better Use of Probationary Periods Are Needed to Address Substandard Employee Performance. GAO-15-191. Washington, D.C.: February 6, 2015.

Results-Oriented Management: OPM Needs to Do More to Ensure Meaningful Distinctions Are Made in SES Ratings and Performance Awards. GAO-15-189. Washington, D.C.: January 22, 2015.

Federal Workforce: OPM and Agencies Need to Strengthen Efforts to Identify and Close Mission-Critical Skills Gaps. GAO-15-223. Washington, D.C.: January 30, 2015.

Federal Workforce: Human Capital Management Challenges and the Path to Reform. GAO-14-723T. Washington, D.C.: July 15, 2014.

Office of Personnel Management: Agency Needs to Improve Outcome Measures to Demonstrate the Value of Its Innovation Lab. GAO-14-306. Washington, D.C.: March 31, 2014.

Federal Employees: Opportunities Exist to Strengthen Performance Management Pilot. GAO-13-755. Washington, D.C.: September 12, 2013.

Results-Oriented Cultures: Creating a Clear Linkage between Individual Performance and Organizational Success. GAO-03-488. Washington, D.C.: March 14, 2003.
Why GAO Did This Study

Managing employee performance has been a long-standing government-wide issue and the subject of numerous reforms since the beginning of the modern civil service. Without effective performance management, agencies risk not only losing the skills of top talent but also missing the opportunity to effectively address increasingly complex and evolving mission challenges. GAO was asked to examine federal non-Senior Executive Service performance management systems. This report examines (1) government-wide trends in employee perceptions of performance management as measured by the results of selected FEVS statements, (2) practices that selected agencies use to improve performance management, and (3) OPM's guidance and resources to support agency efforts to improve performance management government-wide. GAO analyzed responses to selected FEVS statements related to the five performance management phases from 2010 through 2017; selected four agencies based on the highest average scores for the five phases, among other criteria, to identify practices that may contribute to improved performance management; reviewed OPM documents; and interviewed OPM and other agency officials.

What GAO Found

GAO found that from 2010 through 2017, surveyed employees generally responded positively to selected Federal Employee Viewpoint Survey (FEVS) statements related to four of the Office of Personnel Management's (OPM) five performance management phases: planning and setting expectations, monitoring performance, developing the capacity to perform, and rating performance. Employees responded least positively to statements related to rewarding performance, with only 39 percent of employees, on average, agreeing with statements regarding this phase. At the four selected agencies, which had among the highest average scores for the performance management phases (the Bureau of Labor Statistics, the Centers for Disease Control and Prevention, the Drug Enforcement Administration, and the Office of the Comptroller of the Currency), GAO identified practices that may contribute to improved performance management, including a strong organizational culture and dedication to mission, use of FEVS and other survey data, and a focus on training. OPM provides guidance and opportunities for agencies to share promising practices on performance management; however, some of this information is not easily accessible on its performance management website. In addition, OPM does not leverage its leadership position to formally identify and share emerging performance management research and innovation with agencies. As a result, agencies, and therefore their employees, may not benefit from the best information available.

What GAO Recommends

GAO is making three recommendations, including that OPM improve its website and share innovations in performance management with agencies. OPM agreed with GAO's recommendations.
Background GAO’s Standards for Internal Control in the Federal Government state that federal agencies—such as DOD—must demonstrate a commitment to training, mentoring, retaining, and selecting competent individuals, which would include program managers. These standards explain that federal agencies like DOD should provide training that enables individuals to develop competencies appropriate for key roles, reinforces standards of conduct, and can be tailored based on the needs of the role; mentor individuals by providing guidance on their performance based on standards of conduct and expectations of competence; retain individuals by providing incentives to motivate and reinforce expected levels of performance and desired conduct; and select individuals for key roles by conducting procedures to determine whether a particular candidate fits the organization’s needs and has the competence for the proposed role. The Project Management Institute, as well as four companies that we included in this review, have also identified these activities as critical for developing program managers. Program managers for DOD’s 78 major defense acquisition programs, along with program executive officers, their respective deputies, and program managers for certain non-major programs, occupy what DOD refers to as program management key leadership positions. There were 446 program management key leadership positions at the end of fiscal year 2016. They are in turn part of a broader program management career field, which numbers approximately 17,000 civilian and military personnel. The Air Force typically brings its future program managers for major defense acquisition programs into the career field early in their careers, and then provides training and experiences to prepare them for the role. In contrast, the Army and Navy typically bring their future program managers into the career field later in their careers and from other fields, such as engineering. As shown in table 1, at the end of fiscal year 2016, most program manager positions for major defense acquisition programs were held by military personnel. According to military service officials, when a military officer fills a program manager position, a civilian usually fills the deputy program manager position for that program and vice versa. Overarching guidance, training, and oversight for the defense acquisition workforce is provided centrally by DOD in the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, which includes Human Capital Initiatives and the Defense Acquisition University. Other officials and organizations that play key roles include the Defense Acquisition Functional leader for program management, who is responsible for establishing a competency model that reflects the knowledge and skills required to be successful in the career field, as well as position descriptions, requirements for key leadership positions, certification standards, and continuous learning activities; the Directors for Acquisition Career Management in each of the military services, who serve as key advisors for policy, coordination, implementation, and oversight of acquisition workforce programs within their services; and acquisition commands and program executive offices within each military service, which work together to manage acquisition programs and initiatives to improve the workforce. 
Over the last decade, Congress has passed several laws aimed at bolstering the acquisition workforce and specifically the program management career field. Provisions have included requiring DOD to develop a comprehensive strategy for enhancing the role of program managers, provide advancement opportunities for military personnel, and establish training programs for the acquisition workforce. Congress also established the Defense Acquisition Workforce Development Fund (DAWDF) in 2008 to provide funds for the recruitment, training, and retention of DOD acquisition personnel. Since the establishment of DAWDF, DOD has obligated more than $3.5 billion in DAWDF funds for these purposes. Of the more than $440 million in DAWDF funds obligated in fiscal year 2016, almost $12 million was obligated for the program management career field: $0.4 million for recruitment, $10.5 million for training, and $0.9 million for retention and recognition. Additional funds supported the salaries of 33 people hired into the career field during fiscal year 2016. To bolster the number of civilian personnel that could be selected for a program manager position, the National Defense Authorization Act for Fiscal Year 2018 requires DOD to implement a civilian program manager development program. The act states that the plan for such a program shall include consideration of qualifications, training, assignments and rotations, and retention benefits, among other things.

Leading Organizations Use a Combination of Practices to Develop Program Manager Talent

We identified 10 practices, across four distinct areas, used by leading organizations to develop program manager talent based on our extensive review of Project Management Institute documents and discussions with AstraZeneca, Boeing, DXC Technology, and Rio Tinto. These four areas correspond to the internal control standards discussed previously. Program managers at these companies share similar basic responsibilities with DOD program managers, including overseeing the development and production of goods and services in a timely and cost-effective manner. As shown in figure 1 below, leading organizations provide a mix of formal and informal training opportunities focused on sharing knowledge and providing experiences that prepare people for program management; offer mentoring opportunities to guide people along career paths; use a mix of financial and nonfinancial incentives to retain high performers; and select program managers based on identification of high-potential talent and then assign program managers based on program needs. Boeing representatives noted that by using a combination of these practices, over the past 15 years, their program managers have primarily left positions due to promotion or retirement. Rio Tinto representatives noted that in a challenging environment for finding suitable external talent, they have been able to use these practices to successfully develop most of the talent they need internally. DXC Technology representatives noted that these practices enabled their program managers to receive better feedback and address skill gaps. An AstraZeneca representative noted that these practices have made it easier for people to get the range of experiences they need to move into leadership positions.

Leading Organizations Focus Training on Sharing Knowledge and Gaining Experience

The Project Management Institute identifies training as the most common component of development.
Leading organizations we spoke with use venues like training classes to share knowledge and experiences. These organizations also expand people’s knowledge and experience by encouraging rotation of talent across organizational boundaries. Leading organizations also provide access to on-the-job learning opportunities and repositories of best practices and lessons learned. Examples of practices used by commercial companies we spoke with are described below. Practice #1—Training classes that allow program managers to share experiences: Boeing representatives told us that the company sends employees aspiring to be program managers to a 5-day, in-residence program manager workshop. Attendees simulate challenging program management scenarios and get exposure to senior executives who discuss best practices and share experiences. They are expected to make decisions quickly, and play different roles throughout the simulation so they can gain a better understanding of the consequences of their decisions. Similarly, DXC Technology holds multiday workshops for program managers where they participate in role-playing scenarios in which they have to react to a given situation that a program manager could face. One of the key benefits of the workshop noted by DXC Technology representatives is that they receive individual feedback on areas for improvement. Practice #2—Rotational assignments: According to Boeing representatives, the company selects high-performing midcareer employees interested in program management for a 2-year rotation program in which they take leadership roles and solve difficult challenges facing a part of the business. These could be internal assignments within an individual’s current business unit, or external assignments that cross organizational boundaries, for example, between Boeing’s commercial, defense, and services businesses. Boeing representatives noted this as a valuable leadership opportunity for the people involved, which helps drive change in the organizations to which they are assigned. In order to expand people’s capabilities and give them a broader perspective on the business, AstraZeneca regularly notifies its workforce—via a monthly newsletter and an online portal—of rotational opportunities lasting 6 months to a year. These rotations could be within an individual’s business unit, or in a different location or part of the business. Practice #3—On-the-job learning and information repositories: Rio Tinto representatives told us that the company has managers from one project participate in reviews and events for other projects in order to transfer knowledge. For example, a manager from a mining operation based in one country might visit a mining operation in another country to share ideas. Rio Tinto also retains the formal reviews that take place at the end of each project, as well as the lessons learned by the team itself, in an accessible document management system. Similarly, AstraZeneca uses online collaboration software to house project information that might help others. It has also established a community of practice and networking groups to share knowledge, and provides people moving into management positions a checklist of tasks and meetings to complete within their first 6 months. Boeing representatives told us that one way the company provides on-the-job training and support to program managers is by temporarily bringing in experts with prior experience to participate in a wide variety of activities across all types of programs. 
These activities include verifying designs and proactively identifying and resolving challenges such as manufacturing problems. Leading Organizations Facilitate Mentoring Relationships and Establish Program Management Career Paths The Project Management Institute identifies mentoring as a way of encouraging and supporting people. Leading organizations we spoke with have programs in place to facilitate mentor and mentee relationships. They expect senior people to serve as mentors. The organizations we spoke with also mentor employees by laying out the career paths they might need to follow to achieve the highest levels of program management within the organization. Examples of practices used by commercial companies we spoke with are described below. Practice #4—Mentoring programs with senior leader involvement: According to Boeing representatives, the company offers voluntary mentoring programs—both formal and informal—at different points throughout an employee’s career cycle, including the early stages. Depending on the career goals of an individual, Boeing offers both mentors and sponsors, who are senior leaders that nominate people— especially high performers—for specific opportunities. At Boeing, there is an expectation that senior leaders will be involved in mentoring. For example, midcareer program managers can be matched with executives based on the preferences of the two parties. Relationships are reevaluated annually. Through these relationships, mentees get exposure to critical decisions, as well as other parts of the business. Rio Tinto representatives told us that the company has a formal mentoring program targeted at high-potential talent that partners people with senior leaders, including those from different departments. Senior leaders at Rio Tinto are expected to participate in long-term career development discussions for people two levels below them. The company also provides senior executives and other lower-level managers access to external coaches who focus more on leadership than technical company matters. Practice #5—Career paths that describe skills needed to advance: According to DXC Technology representatives, the company has documented a program management career path that details the skills needed to be a program manager. The company annually identifies the developmental needs of employees, who can then take steps such as moving to another program to gain the required experience to address any gaps. This helps management make decisions that benefit both the individual and the company. Boeing representatives told us that the company has developed a general career path for many of its career fields, including program management, and encourages people to develop the skills they need by gaining experience in different career fields and business units. Boeing program managers we met with described the range of experiences they had within the company that equipped them for their roles, such as working on different kinds of aircraft and in technical and business functions. Leading Organizations Use a Mix of Financial and Nonfinancial Incentives to Retain People Leading practices identified by us and the Project Management Institute suggest that a combination of financial and nonfinancial incentives can be used to retain high performers. For example, leading organizations we spoke with offer student loan repayments and financing of higher education in compensation packages as financial incentives. 
They also provide monetary awards to recognize excellence in job performance and contributions to organizational goals. Nonfinancial incentives could include senior leadership recognizing strong performance in program management and emphasizing the idea that program management is prestigious, challenging, and key to business success. Examples of practices used by commercial companies we spoke with are described below. Practice #6—Financial rewards for good performance: Rio Tinto representatives told us that the company offers incentives that are based on performance. The company includes pay raises linked to annual performance ratings, which are determined by the extent to which a program manager meets objectives including cost and schedule goals. According to Boeing representatives, the company annually assesses program managers based on technical and financial performance measures and employee feedback. These assessments help determine annual salary increases and bonuses. Practice #7—Education subsidies: Boeing offers tuition assistance to all people after they have been at the company for at least 1 year. This can support degree programs, professional certificates, and individual courses in fields of study at over 270 colleges and universities. Boeing representatives noted that this has helped foster a high degree of loyalty from people. Practice #8—Recognition: Boeing representatives told us that program managers for major programs hold a high level of responsibility and accountability. When program managers are successful at running effective programs, they are often moved to larger and more complex programs with much greater responsibility. AstraZeneca announces recognition for program achievements such as meeting delivery targets via e-mail and at town hall meetings, and significant achievements can also be recognized through nomination for annual company-wide awards. Leading Organizations Select Program Managers Based on Identification of High-Potential Talent and Alignment with Program Needs The Project Management Institute emphasizes the importance of identifying top talent and future high performers for key roles. Leading practices for selecting program managers are rooted in the identification of high-potential talent and the alignment of that talent with program needs. Leading organizations we spoke with engage senior management in identifying high performing people and monitoring their job assignments, performance, and career progression. They also select program managers with the blend of skills, experience, knowledge, and expertise required to be effective within a particular program environment. Examples of practices used by commercial companies we spoke with are described below. Practice #9—Identification of high-potential talent by senior leaders: Rio Tinto representatives told us that senior leaders at the company annually assess the potential and performance of its people and then classify them in one of nine categories that include those who need additional experiences and developmental opportunities, those in the right role and at the right level that need to be kept engaged, and those considered high potential who need challenging opportunities. AstraZeneca identifies and keeps track of high-potential people through annual talent assessments addressing each person’s strengths and gaps, as well as potential roles, development actions, and associated time frames. The assessments also include an individual’s professional aspirations. 
According to Boeing representatives, the company uses its succession planning process to identify a pool of qualified people able to step into executive and program manager positions, including those who are ready to step into a role immediately, and those who need some additional development. Practice #10—Assignment based on skills, experiences, and program needs: According to DXC Technology representatives, the company assigns program managers to roles based on a review of their demonstrated management and subject matter competencies. For example, an individual is evaluated on experience such as managing programs of a certain size or level of complexity, as well as the outcomes they achieved on those programs in terms of cost, schedule, and client feedback. An individual is also evaluated on whether he or she has the specific skills needed to manage a particular program, such as those related to data migration or software application design. Boeing representatives told us that the company takes into account a wide variety of factors when assigning a program manager to a program. Factors could include the size, dollar value, and complexity of a program, as well as the developmental needs of a program manager. Military Service Practices Show a Mixed Level of Alignment with Leading Practices Our analysis of the practices used by the military services to train, mentor, retain, and select program managers for major defense acquisition programs shows a mix in the level of alignment with the leading practices. We based our analysis on a review of DOD, military service, and relevant sub-component documentation on training, mentoring, retaining, and selecting program managers, including policies, guidance, strategic plans, curricula, online portals, and acquisition workforce data. Table 2 provides our assessment of the alignment of military service practices with the 10 leading practices. Practices used by each of the military services align extensively with 4 of the 10 leading practices. For 5 of the 10, practices used by at least one of the military services do not align extensively with leading practices, and for the remaining practice related to financial rewards for good performance, none of the services’ practices align extensively. We discussed these assessments with each military service Director for Acquisition Career Management, and they generally agreed with our assessments. Practices for All of the Military Services Align Extensively with 4 of the 10 Leading Practices Military service practices align extensively with four of the leading practices, as shown in table 3 below. For the first practice, alignment is largely the result of steps taken by DOD to comply with the Defense Acquisition Workforce Improvement Act, enacted as part of the National Defense Authorization Act for Fiscal Year 1991. This legislation set forth education, training, and experience requirements that program managers must meet prior to being assigned to a major defense acquisition program or significant non-major defense acquisition program. All four practices that have extensive alignment reflect a combination of DOD-wide initiatives and approaches unique to the military services. The following summarizes our assessment of these practices. Practice #1—Training classes that allow program managers to share experiences: DOD provides centralized training that brings together current and prospective program managers to strengthen their skill sets and share their experiences. 
The Defense Acquisition University has developed a training curriculum of courses that people must complete—in conjunction with experience and education standards—to be certified as ready to take on increasingly challenging assignments. The highest level courses required for program managers incorporate simulations, case studies, senior agency and industry speakers, and team projects to strengthen participants’ analytical, critical thinking, and decision-making skills. According to a Defense Acquisition University official, each year approximately 350 people attend these courses. According to the military services’ Directors for Acquisition Career Management, all current major defense acquisition program managers met their certification requirements. The military services have also developed their own training for program managers that brings peers together and addresses service-specific issues. For example, the Navy has established program management colleges at its largest systems commands. These colleges teach curricula specific to Navy processes. The Navy also provides approximately 200 program managers each year with training courses focused on understanding commercial industry and managing relationships with contractors. These classes, offered through business schools, are taught by academic faculty, senior naval officials, and private sector executives and focus on factors program managers need to be aware of to understand industry behavior and decision-making. According to DOD’s acquisition workforce strategic plan for fiscal years 2016 through 2021, the department intends to improve the type of training it provides program managers, the timing of when courses are provided, and the delivery method. The plan also noted DOD’s intent to strengthen qualification requirements for program management positions by further developing the list of proficiencies associated with certifications, including leadership skills for all levels and technical skills needed by those in the “beginner” and “intermediate” level program management positions. In September 2016, the defense acquisition functional leader for program management finalized and issued this list. Practice #3—On-the-job learning and information repositories: Each of the services provides its own unique on-the-job training or repositories to share lessons learned from acquisition programs. The Air Force provides people in the program management career field with detailed task lists that support on-the-job learning along their career paths. For example, people are encouraged to demonstrate competence in areas such as schedule management. The Army has developed an online portal that houses lessons learned from acquisition programs that were documented around program milestones or upon termination. Users can view and search lessons submitted by others, participate in discussion forums, and reference acquisition case histories. The portal contains over 800 lessons learned, with over 400 relating specifically to program management. The Navy has created a series of physical “war rooms” that display materials on the evolution and organization of the Navy, the service’s acquisition history, how to manage a major program, the unique challenges of ship building, and case studies. The Navy hosts a 5-day training program for program managers in these rooms in order to transfer lessons learned from previous acquisition programs. 
The Defense Acquisition University has also established an online program management community of practice that houses a range of tools and documents that communicate lessons learned. Practice #8—Recognition: DOD leadership acknowledges the challenges and importance of program management by designating the most senior positions in the career field—including program managers— as key leadership positions. These positions require a significant level of authority commensurate with the responsibility and accountability for acquisition program success. Based on our analysis of DOD acquisition workforce data, while the program management career field represents just over 10 percent of the overall acquisition workforce, it accounts for almost 40 percent of key leadership positions. Senior leadership in each of the services also provides their own types of recognition for good performance in program management. For example, each service has an annual award recognizing high-performing program managers. In addition, program management is an award category for the DOD-wide Defense Acquisition Workforce Individual Achievement Award, which includes recognition for winners at an awards ceremony held at the Pentagon. Practice #10—Assignment based on skills, experiences, and program needs: All of the services evaluate the skills and experiences of candidates for program manager roles, and ensure they have the required qualifications. As part of their processes for filling these roles, the services take note of specific needs associated with a program. In the Army, civilian and military personnel apply each year and are competitively selected by a board of senior Army acquisition leaders who use instructions from the Secretary of the Army to select the best qualified individuals. Once selected by the board, the Army uses another process to match the skills and experience of the individual to those required by the program manager position based on factors such as functional, technical, and educational experience. In the Navy, civilian and military personnel apply and compete for specific programs. As part of the documentation of candidate selection, the Navy requires a description of how the candidate’s skills align with the current status of the program. The Air Force designates whether a program will have a military or civilian program manager in advance. The senior official who approves program manager selections considers program needs along with individual qualifications and functional requirements. In addition, the military services consult with the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics on the selection of program managers for those programs where that office is the decision authority. For Half the Leading Practices, There Is at Least One Military Service with Practices That Do Not Align Extensively For five of the leading practices, at least one of the military services’ practices do not align extensively, as shown in table 4 below. The following summarizes our assessment of instances in which one or two military services may be using a leading practice, but not all three services. We also identify examples of military service actions that could serve as a model for meeting those leading practices. Practice #2—Rotational assignments: Each of the services provides civilian and military program management personnel with opportunities to rotate internally among other units or functions. 
However, while the military services have identified external rotations with industry as a way to gain valuable experience and improve people's business acumen, practices in this area vary. For example, the Air Force has an external industry rotation program that is open to both civilian and military personnel. In total, about seven military and civilian program management personnel participate in this program each year, according to the Air Force Director for Acquisition Career Management. The Army's external industry rotation program is open only to military personnel, and approximately 11 program management personnel participate each year, according to the Army Director for Acquisition Career Management. The Director also noted that some local Army organizations send civilian personnel on industry rotations, but was not aware of participation by civilian personnel in the program management career field. The Navy uses the Secretary of Defense Executive Fellows program to provide experience with commercial industry. This program is open to participants from all the military services. Until 2017, participation in the program was restricted to military personnel. Over the past 5 years, between two and five Navy military acquisition personnel per year participated in the program, according to the Navy Director for Acquisition Career Management. The Directors for Acquisition Career Management noted that two of the inherent difficulties with sending civilians on potentially year-long industry rotations are that their organizational unit would need to fund the participant's travel costs and would also need to find people to perform the participant's duties in their absence. The Air Force's industry rotation program avoids the travel cost problem by finding opportunities for civilians with local companies. In addition, the program is targeted at more junior personnel than the programs used by either the Army or Navy, reducing the difficulty of filling their positions while they are on a rotation. As a result of the focus on military personnel participating in industry rotations, civilian personnel in the Army and Navy miss an opportunity to improve their business acumen and gain valuable experience that would better prepare them for program manager roles. They could benefit from consideration of the approaches taken by the Air Force. Practice #4—Mentoring programs with senior leader involvement: Each of the services offers some kind of voluntary mentoring program. However, only the Air Force and Army have a documented expectation that senior civilian and military personnel serve as mentors. The Navy provides a range of mentoring resources, but only has a documented expectation that senior military personnel serve as mentors. The Navy Director for Acquisition Career Management agrees that this expectation is not documented for civilians, but believes that senior civilian leaders in program management are aware that mentoring is a responsibility. However, because it is not documented, some senior civilian leaders might not be aware of this expectation. Practice #5—Career paths that describe skills needed to advance: Each of the services has outlined the steps people need to take to become program managers and provided opportunities for both civilian and military personnel to advance to these and even higher-level positions. However, the descriptions of the skills people should obtain to advance along the various career paths are inconsistent among the services.
The Air Force includes the skills and competencies people need to achieve specific career goals in the competency-based task lists previously discussed as a tool to support on-the-job learning. The task lists are the same for civilian and military personnel. The Army describes the skills and competencies civilians need to advance via a one-page roadmap. While there is a one-page roadmap for military personnel, it does not discuss or link to skills and competencies. The online version of the civilian roadmap includes direct links to an existing DOD tool that people can use to identify and address gaps in their experience and capture demonstrated experience in a wide range of program management competencies, such as stakeholder management. People and their supervisors are encouraged to use this tool to develop individual career development plans. The tool also provides a common set of standards that organizations can use to mitigate skill gaps through hiring or using developmental opportunities. The Navy’s systems command responsible for delivering and supporting aircraft provides a career roadmap for the program management career field, as well as detailed descriptions of the different levels of skills and competencies needed to advance. However, the systems command responsible for delivering and supporting ships does not have a formal career roadmap. Both Army and Navy Directors for Acquisition Career Management are aware of these inconsistencies, and are working to put approaches in place in fiscal year 2018 to address them and ensure that key groups in the program management career field are not missing important information about skills they should develop. Practice #7—Education subsidies: All the services offer tuition assistance to military and civilian personnel to further their education, which has helped increase the percentage of program management personnel with a graduate degree from 46 percent in fiscal year 2008 to 57 percent in fiscal year 2016. The services also offer student loan repayments, but use them for different purposes. The Army and Navy use DAWDF-funded student loan repayments—and the requirement that recipients sign an agreement to serve for 3 years—as a retention tool for program management personnel. However, the Air Force only uses these repayments as a recruiting tool, despite the fact that they can be used for both recruitment and retention. This decision stems from the results of a 2016 study the Air Force commissioned from the RAND Corporation that found limited utility in offering retention bonuses as a tool to retain talent. The Director for Acquisition Career Management told us that the Air Force is scaling back its use of all financial retention incentives and prefers to use student loan repayments as a recruiting tool. The service agreement therefore only covers the early part of someone’s career with the Air Force, instead of being a way to drive retention of more senior personnel. Prior GAO work has found that financial retention incentives are among the most effective flexibilities that agencies have for managing their workforce, and that insufficient use of existing flexibilities can significantly hinder the ability of agencies to retain and manage personnel. Practice #9—Identification of high-potential talent by senior leaders: The Army regularly and systematically involves senior management in identifying high-potential program management talent among civilian and military personnel. 
It requires senior managers to annually evaluate the leadership potential of all civilian acquisition personnel at midcareer or above, and the Army’s annual evaluation for all military officers assesses their potential for positions of greater responsibility. The Air Force has a similar process for military personnel, but not civilians. The onus is on civilian personnel to nominate themselves for development programs and resources, rather than being identified and guided toward those opportunities by senior leaders. The Navy only identifies high-potential military and civilian talent on an informal basis, which varies across the service. The Air Force and Navy risk overlooking high-potential talent as a result of their approaches. The Directors for Acquisition Career Management for both services acknowledge the ad hoc nature of their practices, and are looking into steps they could take in fiscal year 2018 to more systematically identify high-potential talent. None of the Military Services’ Practices Align Extensively with the Practice of Providing Financial Rewards for Good Performance None of the military services’ practices align extensively with leading practices for providing financial rewards for good performance, as shown in table 5 below. Commercial companies have more flexibility than DOD to financially reward good performance. They are not subject to the legal restrictions on compensation that federal agencies must consider, and can offer types of compensation, such as stock options, that federal agencies cannot. Despite this, DOD has mechanisms to financially reward high- performing people. However, these incentives are either unavailable to all program management personnel because of the various pay systems used by DOD, or are underutilized by the military services. For example, military and civilian personnel are compensated under different systems. Military pay and allowances are delineated in Title 37 of the U.S. Code, and while there are provisions for retention bonuses that would cover acquisition officers, there are none that reward high performance. Most DOD civilian personnel, on the other hand, are covered by the General Schedule classification, a pay system that is used in many agencies across the federal government. For the most part, people in this pay system receive set pay increases as long as their performance is at an acceptable level. The military services also have the option to convert civilian personnel to the Civilian Acquisition Workforce Personnel Demonstration Project, known as AcqDemo, where people including those in the program management career field have the opportunity to earn varying levels of pay increases or bonuses based on their performance. The military services’ use of AcqDemo varies. According to AcqDemo data collected by DOD’s Human Capital Initiatives office, as of the end of fiscal year 2016, approximately 64 percent of the Army’s civilian program management workforce is covered by the system. Army officials told us that the level of coverage has increased since then, and that organizations containing the remaining eligible workforce are considering participation in fiscal year 2018. Furthermore, officials told us that all Army program managers are covered by AcqDemo. However, only 38 percent of the Navy’s civilian program management workforce is covered by the system, and 29 percent of the Air Force’s. 
According to the AcqDemo program manager and the Air Force and Navy Directors for Acquisition Career Management, organizations are hesitant to extend coverage because they are apprehensive about whether what is currently a demonstration program will become permanent and about the time it takes management to reach formal agreement with local bargaining units. The greater coverage of AcqDemo across the Army's civilian program management workforce compared to the Air Force and Navy suggests that these two services may have opportunities to learn lessons from the Army's experience. Congress recently took actions that could address some of the concerns about AcqDemo. The National Defense Authorization Act for Fiscal Year 2018, for example, extends the authorized timeline for AcqDemo use from December 31, 2020, to December 31, 2023, and increases the total number of people who may participate in the program at any one time from 120,000 to 130,000. As of February 2017, a total of approximately 36,000 people across DOD were participating in AcqDemo. The military services can also use DAWDF funding to recognize high-performing civilian personnel, but have only made limited use of this funding for program management personnel. The Directors for Acquisition Career Management reported the following awards between fiscal years 2008 and 2017: The Air Force awarded $5,000 to one recipient in fiscal year 2017. The Army awarded a total of $70,000 to 351 recipients on one team in fiscal year 2015. The Navy awarded a total of $10,000 to seven recipients between fiscal years 2008 and 2017. Requests for DAWDF funds are left to the discretion of acquisition commands. According to the military services' Directors for Acquisition Career Management, local commanders are not frequently requesting DAWDF funds for program management recognition awards. One director stated that this was because local commanders want to avoid the perception of treating civilian personnel differently from military personnel. As a result, the military services are missing an opportunity to financially reward good performance and are potentially losing talented civilians by not using all available retention tools. The Army Director stated that Army organizations have also used other financial performance incentives, such as spot awards for civilian program management personnel that are not funded by DAWDF. This director also noted that government-wide budgetary limitations for individual monetary awards have reduced the flexibility to offer rewards for performance. The National Defense Authorization Act for Fiscal Year 2018 requires DOD to commission a review of military and civilian program manager incentives, including a financial incentive structure to reward program managers for delivering capabilities on budget and on time. This represents an opportunity for DOD to identify and begin to address concerns about the equitable treatment of civilian and military program management personnel.

Conclusions

The military services recognize that they need skilled program managers to develop acquisition programs and have taken steps to develop that top-notch talent. Of note, DOD has developed a solid training regimen and established minimum training, experience, and education requirements for people to manage acquisitions of various dollar thresholds. The services have also established repositories that share lessons learned and provide on-the-job learning opportunities to supplement the formal training.
Yet, when compared to leading practices, we found that several practices used by the military services for training, mentoring, retaining, and selecting people for program manager positions could be improved. For instance, the Air Force has practices that extensively align with all leading practices for training and mentoring, but we identified some practices for retaining and selecting program managers that do not. We assessed the Army as having practices that extensively align with all leading practices for selecting program managers, but identified some practices for training, mentoring, and retaining program managers that do not. We assessed the Navy as having practices that do not extensively align with leading practices in each of the areas of training, mentoring, retaining, and selecting program managers. In nearly all cases, the military services could improve their practices by learning from ideas and initiatives being used by another military service or by commercial companies and ensuring that civilian and military personnel have similar opportunities to develop. While commercial companies have more flexibility in providing financial incentives to their program managers, the military services could make greater use of financial mechanisms provided by Congress—such as DAWDF and AcqDemo—to reward high performing civilian personnel. DOD also has an opportunity to identify for Congress any concerns about the equitable treatment of civilian and military program management personnel when it comes to rewarding good performance. Taking these actions could encourage high-potential talent to remain in the program management career field and strengthen the next generation of program managers. Recommendations for Executive Action We are making a total of eight recommendations, including three to the Air Force, two to the Army, and three to the Navy. Specifically: The Secretary of the Air Force should take steps to address areas of civilian and military program manager retention and selection that do not align extensively with leading practices. This could include using approaches already used by the other military services or commercial companies. (Recommendation 1) The Secretary of the Air Force should make greater use of existing financial mechanisms such as DAWDF to recognize high performers. (Recommendation 2) The Secretary of the Air Force should identify lessons learned by the Army related to the Army’s experience to extend coverage of AcqDemo across the civilian program management workforce. (Recommendation 3) The Secretary of the Army should take steps to address areas of civilian and military program manager training, mentoring, and retention that do not align extensively with leading practices. This could include using approaches already used by the other military services or commercial companies. (Recommendation 4) The Secretary of the Army should make greater use of existing financial mechanisms such as DAWDF to recognize high performers. (Recommendation 5) The Secretary of the Navy should take steps to address areas of civilian and military program manager training, mentoring, retention, and selection that do not align extensively with leading practices. This could include using approaches already used by the other military services or commercial companies. (Recommendation 6) The Secretary of the Navy should make greater use of existing financial mechanisms such as DAWDF to recognize high performers. 
(Recommendation 7) The Secretary of the Navy should identify lessons learned by the Army related to the Army’s experience to extend coverage of AcqDemo across the civilian program management workforce. (Recommendation 8) Agency Comments and Our Evaluation We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix II, DOD concurred with our eight recommendations and in some cases identified ongoing efforts among the military services to address the recommendations and increase alignment with leading practices. In addition, DOD noted the importance of addressing restrictions on how it can reward and retain military personnel, and requested that this issue be included in an ongoing study of DOD workforce incentives. DOD also stated that some of its recent accomplishments and improvements were not mentioned in the report. For example, DOD noted that representatives from the program management community meet regularly to discuss and share lessons learned and best practices. Recent accomplishments include updated competencies, career tracking and development tools, and improvements to classroom and online training. Our report recognizes the progress made by DOD in these areas and highlights some specific examples. We also agree that there is a broader range of efforts underway to enhance the development of program managers. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; and the Secretaries of the Air Force, Army, and Navy. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology This report addresses (1) how leading organizations train, mentor, retain, and select program managers and (2) the extent to which military service practices for training, mentoring, retaining, and selecting program managers align with those of leading organizations. To identify how leading organizations train, mentor, retain, and select program managers, we first reviewed GAO’s Standards for Internal Control in the Federal Government to identify criteria regarding the controls that federal agencies such as the Department of Defense (DOD) should have in place to manage talent. To identify leading practices for implementing these internal control standards, we first reviewed key documentation, including relevant legislation and prior GAO reports related to program management. We also reviewed prior GAO reports on managing the federal workforce, and in particular those reports that addressed retention mechanisms. We obtained and reviewed documentation from the Project Management Institute, a not-for-profit association that provides global standards for project and program management, related to program management and managing talent. 
We also worked with the Project Management Institute to identify suitable companies for us to approach to learn about leading practices, based on their membership in the Project Management Institute's Global Executive Council and insights from Project Management Institute representatives regarding these companies' practices for training, mentoring, retaining, or selecting program managers. We spoke with or visited these companies, and where possible, companies provided relevant documentation to support their examples. The selected companies were the following: AstraZeneca is a biopharmaceutical company that focuses on the discovery, development, and commercialization of prescription medicines. AstraZeneca reported total revenues of $23 billion in 2016. Boeing Company is a global aerospace company and manufacturer of commercial airplanes and defense, space, and security platforms and systems. Boeing reported total revenues of $94.6 billion in 2016. DXC Technology is an end-to-end information technology services company. Created by the merger of CSC and the Enterprise Services business of Hewlett Packard Enterprise, DXC Technology serves nearly 6,000 private and public sector clients across 70 countries, delivering next-generation information technology services and solutions. Rio Tinto is a metal and minerals mining company that finds, mines, processes, and markets mineral resources including iron ore, aluminum, copper, diamonds, and energy. Rio Tinto reported total revenues of $33.8 billion in 2016. Based on our review of Project Management Institute documentation and prior GAO reports, as well as our discussions with commercial companies, we identified a set of leading practices for training, mentoring, retaining, and selecting program managers. We shared this set of leading practices with Project Management Institute representatives and made adjustments based on their feedback. To identify the extent to which military service practices align with those of leading organizations, we analyzed DOD, military service, and relevant sub-component documentation on training, mentoring, retaining, and selecting program managers for DOD's current portfolio of 78 major defense acquisition programs as defined in our most recent assessment of the portfolio. During our review, we also interviewed officials from the following DOD and military service organizations: the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, Office of Human Capital Initiatives; the Office of the Under Secretary of Defense for Personnel and Readiness, Office of the Defense Civilian Personnel Advisory Service; the Office of the Assistant Secretary of Defense for Acquisition; the Defense Acquisition University; the Department of the Air Force Director for Acquisition Career Management; the Department of the Army Director for Acquisition Career Management; the Department of the Navy Director for Acquisition Career Management; the 4th Estate Director for Acquisition Career Management; Naval Air Systems Command; and Naval Sea Systems Command. We also interviewed a former Assistant Secretary of the Army and Deputy Assistant Secretary of the Air Force with expertise in defense acquisition. We used pertinent documentation and information from interviews with officials to assess the extent to which each of the services' practices aligned with leading practices. Specifically, we assigned ratings for three levels of alignment.
Extensive alignment means that the service’s practice contains all of the elements of the leading practice and is not limited to a subset of the population. Partial alignment means that the service’s practice contains some, but not all, elements of the leading practice, or is limited to a subset of the population, such as military or civilian personnel only, or a particular organization within the service. Little to no alignment means that the service’s practice contains minimal or no elements of the leading practice. The following is a list of elements for each practice: 1. Training classes that allow program managers to share experiences: Training classes that involve current or prospective program managers and that allow for knowledge and experience sharing. 2. Rotational assignments: Internal and external—that is, industry— rotational assignments available to military and civilian personnel. 3. On-the-job learning and information repositories: Resources that provide access to guidance on how to perform program management activities and learn from past program management experiences. 4. Mentoring programs with senior leader involvement: Existence of programs that facilitate mentor-mentee relationships and expectation that senior personnel serve as mentors. 5. Career paths that describe skills needed to advance: Documentation for military and civilian personnel of skills needed at different stages of career path(s) to becoming a program manager. 6. Financial rewards for good performance: Consistent use of DAWDF to fund recognition awards for 1 percent or more of civilian program management personnel and AcqDemo coverage of a majority of the civilian program management workforce. 7. Education subsidies: Tuition assistance for further education and use of DAWDF-funded student loan repayments as a retention—versus recruitment—tool. 8. Recognition: Senior-level recognition of prestige and challenging nature of program manager role and of good performance in the role. 9. Identification of high-potential talent by senior leaders: Processes for senior leaders to assess military and civilian program management personnel and identify those considered high potential. 10. Assignment based on skills, experiences, and program needs: Program manager selection processes that assess candidate skills and experiences and specific needs of a program. One analyst performed the initial assessment for each service, and the supporting evidence was then reviewed by the Assistant Director, with any disagreement discussed and resolved as a team. These discussions also informed requests for more information and documentation from each of the services. Assessments were updated based on what was provided by the services. We also reviewed the military services’ practices for approaches that one or more services had adopted that aligned with leading practices, and that could potentially be adopted by the other services to improve their alignment. We shared our assessments with the military service Directors for Acquisition Career Management to give them the opportunity to note additional approaches or initiatives that might inform our assessments, and incorporated their input as appropriate. We reviewed data from DataMart, DOD’s acquisition workforce database, on the composition of the acquisition workforce and the program management career field as of the end of fiscal year 2016, including the extent of coverage of the Civilian Acquisition Workforce Personnel Development (AcqDemo) project. 
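As a rough illustration only, the short Python sketch below shows one way the three alignment levels described above could be tallied across leading practices for each service; the service names, practices, and ratings in it are hypothetical placeholders rather than our assessment data.

```python
# A minimal sketch of tallying the three alignment levels across leading
# practices for each service; the services, practices, and ratings below are
# hypothetical placeholders, not actual assessment results.
from collections import Counter

LEVELS = ("extensive", "partial", "little to no")

# Hypothetical ratings for three of the ten leading practices.
ratings = {
    "Service X": {"training classes": "extensive",
                  "rotational assignments": "partial",
                  "mentoring programs": "extensive"},
    "Service Y": {"training classes": "partial",
                  "rotational assignments": "little to no",
                  "mentoring programs": "partial"},
}

def summarize(service):
    """Count how many practices fall at each alignment level for one service."""
    counts = Counter(ratings[service].values())
    return {level: counts.get(level, 0) for level in LEVELS}

for service in ratings:
    print(service, summarize(service))
```

In practice, our assessments were qualitative judgments supported by documentation and interviews rather than an automated tally.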
To assess the reliability of DOD’s DataMart data, we (1) reviewed existing information about the data and the system that produced them, (2) interviewed knowledgeable agency officials, and (3) reviewed written answers to questions about the system’s data reliability, including data collection and entry, underlying data sources, and use of internal controls. We determined that the data were sufficiently reliable for the purposes of our reporting objectives. We conducted this performance audit from August 2016 to February 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Michael J. Sullivan, (202) 512-4841 or sullivanm@gao.gov. Staff Acknowledgments In addition to the contact named above, Cheryl Andrew (Assistant Director), Emily Bond, Robert Bullock, Lorraine Ettaro, Kurt Gurka, Ruben Gzirian, Ashley Rawson, Lucas Scarasso, and Robin Wilson made key contributions to this report.
Why GAO Did This Study The Department of Defense's (DOD) major acquisition programs continue to experience cost and schedule overruns. GAO previously found that selecting skilled program managers is a key factor in achieving successful program outcomes. DOD relies on military and civilian program managers to deliver its most expensive new weapon systems, meaning its approach to training, mentoring, retaining, and selecting program managers is critical. House Report 114-537 included a provision for GAO to review the career paths, development, and incentives for program managers. This report addresses how leading organizations train, mentor, retain, and ultimately select program managers; and the extent to which military service practices align with those leading practices. To conduct this work, GAO identified leading practices documented in prior work and by the Project Management Institute, and interviewed commercial companies identified by the Institute as leaders in this field. GAO also analyzed military service practices for developing program managers and compared those to leading practices. What GAO Found Leading organizations use 10 key practices to train, mentor, retain, and ultimately select skilled program managers. GAO found that military service practices for developing program managers align extensively with four of the leading practices; that at least one military service's practices do not align extensively with five of the leading practices; and that, for the remaining leading practice, none of the military services' practices align extensively, as shown in the table below. Military service officials generally agreed with the assessments. More consistent alignment with leading practices—adapted for military and civilian personnel as appropriate and including greater use of existing financial rewards—would enhance the services' ability to manage acquisition programs. What GAO Recommends GAO is making eight recommendations, including that the military services improve practices that do not align extensively with leading practices and make greater use of existing financial rewards for good performance. DOD concurred with the recommendations.
More Than 350 Patents Have Been Challenged under the CBM Program, and About One-Third of These Patents Were Ruled Unpatentable We found in our March 2018 report that, from September 2012 through September 2017, parties accused of patent infringement filed 524 petitions with the Patent Trial and Appeal Board challenging the validity of 359 distinct patents under the CBM program, resulting in rulings against about one-third of these patents. The average monthly number of CBM petitions fluctuated during this period and tapered off over time (see fig. 1). Specifically, during this 5-year period, an average of more than 9 petitions per month were filed under the CBM program, but this average rate declined to fewer than 5 per month in the last fiscal year, with no petitions filed in August or September 2017. Stakeholders we interviewed suggested several possible reasons for the decline in CBM petitions, including recent decisions from the U.S. Court of Appeals for the Federal Circuit and U.S. Supreme Court that clarified which patents are eligible for CBM review; that CBM petitioners successfully targeted the lowest-quality business method patents— patents that should not have been issued because they did not meet the patentability requirements—in the early years of the program, and now those patents have been eliminated; and that owners of business method patents are more wary of asserting their intellectual property through infringement lawsuits and risking its invalidation. Some stakeholders expressed concern about multiple petitions being filed against the same patent. Specifically, stakeholders have suggested that petitioners are, in some cases, using the CBM program and the inter partes review program as tools to increase costs borne by patent owners, and in the case of the CBM program, as a tool to delay district court proceedings. In addition, some stakeholders asserted that this manner of use of the administrative proceedings authorized by the AIA amounts to harassment. However, our analysis of petition data showed that the vast majority of patents challenged under the CBM program were challenged once or twice. Stakeholders we interviewed outlined several reasons why petitioners may file more than one petition against a single patent. For example, the board limits the number of pages that a petitioner may use to submit prior art and arguments for invalidity and therefore some petitioners might file more than one petition so they can present all of their art and arguments at once. Overall, through September 2017, the Patent Trial and Appeal Board had completed reviews of 329 of the 359 patents challenged under the program, and for about one-third of these patents the board ruled at least some challenged patent claims unpatentable. Data on petition outcomes are open to different interpretations depending on how they are presented. For example, under the CBM program, board judges ruled some or all of the patent claims considered at trial unpatentable in 96.7 percent of the petitions for which they issued a final written decision from September 2012 through September 2017. On the basis of this statistic, the board could seem to invalidate the majority of the patents it reviews, as noted by some stakeholders. 
However, this outcome is predictable given the criteria for accepting, or instituting, a CBM trial—a judge panel will institute a petition to the trial phase if it is “more likely than not” that at least one of the claims challenged in a petition is unpatentable—which tips outcomes for instituted petitions toward rulings of unpatentability. In addition, board judges do not issue final written decisions for all petitions that enter the trial phase because the parties often reach a settlement before the final written decision. When taking into account all of the CBM petitions that had an outcome as of September 30, 2017, board judges ruled some or all of the claims considered at trial unpatentable in 35.6 percent of the cases. The Board Met Timeliness Requirements and Took Steps to Analyze Decisions and Improve Proceedings but Does Not Have Guidance to Ensure Decision Consistency We found in our March 2018 report that the Patent Trial and Appeal Board has completed all trials under AIA-authorized proceedings within statutorily directed time frames, according to board data, and the board has taken steps to review issues that could affect the consistency of its trial proceedings and decisions and to engage with stakeholders to improve its proceedings. Board officials we interviewed told us the timeliness of decisions to institute a trial and of final written decisions has not been a concern in the 5 years that the board has operated. According to board officials, as of November 2017, two AIA trials—one under the inter partes review program and one under the CBM program—have been extended, for good cause, past the typical 1-year time limit between the institution decision and the final written decision, as allowed by statute. The Patent Trial and Appeal Board has decision review processes that help ensure trial decisions are reviewed as appropriate, but the board cannot ensure the consistency of its trial decisions because it does not have guidance for reviewing the decisions or the processes that lead to them. For trials still in progress, board officials told us there are several ways management gets involved in reviews—including reviews of ongoing trials if and when a paneled judge raises any issue deserving of management attention. Such issues are brought to the attention of the chief judge or other members of the board’s management team and are acted upon at their discretion. Board officials also told us that a separate internal review process has evolved over time, whereby a small group of board judges, in consultation with board management, seeks to ensure decision quality and consistency by reading a large number of draft AIA trial decisions and giving feedback or suggestions to authoring judges prior to issuance. In addition, the board reviews any AIA trial decisions that are appealed to the U.S. Court of Appeals for the Federal Circuit and the appeals court subsequently reverses or remands. Finally, board officials told us that the board has begun to increase the number of trial decisions considered for precedential and informative designations as part of its efforts to ensure the consistency of trial decisions. Taken together, the board’s review processes help ensure that board trial decisions are reviewed in some manner. However, because the board does not have documented procedures for how to review decisions for consistency, the board cannot fully ensure the consistency of the decisions or the processes that lead to them. 
Under federal standards for internal control, management should design control activities to achieve objectives and respond to risks. Such control activities include clearly documenting internal control in a manner that allows the documentation to be readily available for examination. The documentation may appear in management directives, administrative policies, or operating manuals. We recommended that the Director of USPTO develop guidance, such as documented procedures, for judges reviewing the Patent Trial and Appeal Board’s decisions and the processes that lead to the decisions. USPTO agreed with our recommendation and stated that it has begun taking actions to address it. In addition, to improve various aspects of its trial proceedings, the board has taken several steps to engage with stakeholders. USPTO’s strategic plan states that the board should expand outreach to stakeholders by providing opportunities for interaction and updates on board operations and other important issues. The board has done so through several types of public outreach efforts, including participating in roundtables, webinars, and judicial conferences, among other activities. The board has made several changes to policies and procedures based on stakeholder feedback gathered through these mechanisms. Stakeholders Agree the CBM Program Has Reduced Litigation, and Many See Value in Maintaining Aspects of the Program Stakeholders we interviewed for our March 2018 report generally agreed the CBM program has reduced litigation involving business method patents because the CBM program allows these patents to be more easily challenged than in district courts, and many stakeholders said there is value in maintaining some aspects of the program. Stakeholders told us that fewer business method patent lawsuits are filed and that existing lawsuits are often dropped after patents have been through the CBM program. However, stakeholders also noted that the Supreme Court’s 2014 decision in Alice Corp. Pty. Ltd. v. CLS Bank Int’l has contributed to the reduced number of business method patent lawsuits. Stakeholders told us that the CBM program has made it riskier to assert business method patents because, compared with district court, the program offers a cheaper and more efficient way for alleged infringers to challenge a patent’s validity. In addition, according to stakeholders, patent owners are more focused on asserting business method patents that are higher quality and less vulnerable to challenge either under the CBM program or based on the Supreme Court’s decision in Alice; these are patents that describe a technological invention that is not abstract and implemented on a generic computer. Stakeholders we interviewed generally agreed the effects of the CBM program on innovation and investment have been minimal or mostly positive. More specifically, stakeholders told us that the CBM program is good for overall innovation and investment in financial technologies in that the program eliminates overly broad (non-specific), low-quality patents. Stakeholders told us they believe the existence and assertion of overly broad patents is bad for innovation, in part because defending against alleged infringement is expensive and time-consuming, even under the CBM program. Assertion of overly broad, unclear, or otherwise low-quality patents acts much like a tax on investment, according to stakeholders. 
Stakeholders also told us that removing such patents from the marketplace promotes innovation because it prevents these patents from blocking new innovation. According to stakeholders, innovation is represented by the quality of the patents issued rather than the quantity. A large number of patents in a technology space, according to stakeholders, can make it difficult to innovate within that crowded space. Most stakeholders told us there was value in maintaining aspects of the CBM program, including the ability to challenge patents at the Patent Trial and Appeal Board on all four patentability requirements—subject matter; novelty; non-obviousness; and clarity and specificity. Stakeholders we interviewed pointed to inconsistencies in how federal courts interpret subject matter eligibility and clarity requirements, in particular. Stakeholders said that the federal courts and jurors do not necessarily have the expertise to interpret requirements for subject matter eligibility and clarity, and that the technically trained Patent Trial and Appeal Board judges were better suited to make patentability determinations on these grounds. Stakeholders generally agreed that the ability to challenge a patent’s validity on subject matter eligibility grounds remains important, although there was not broad agreement among stakeholders regarding how far that ability should extend beyond business method patents. Some stakeholders said subject matter eligibility challenges were important for a wider scope of patents than just business methods because concerns about subject matter eligibility that apply to business method patents extend to software-related patents in general. Similarly, stakeholders told us that patent clarity problems exist beyond business method patents. Chairman Issa, Ranking Member Johnson, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contact and Staff Acknowledgments If you or your staff have any questions about this statement, please contact John Neumann, Director, Natural Resources and Environment at (202) 512-3841 or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Rob Marek (Assistant Director), Michael Krafve, and Cynthia Norris. Additional staff who made key contributions to the report cited in this testimony are identified in the source product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study This testimony summarizes the information contained in GAO's March 2018 report, entitled U.S. Patent and Trademark Office: Assessment of the Covered Business Method Patent Review Program ( GAO-18-320 ). What GAO Found From September 2012 through September 2017, entities facing patent infringement lawsuits filed 524 petitions challenging the validity of 359 patents under the U.S. Patent and Trademark Office's (USPTO) covered business method (CBM) program, resulting in decisions against about one-third of these patents. The CBM program provides entities facing infringement lawsuits an opportunity to challenge the validity of a business method patent by demonstrating that it did not meet requirements for patentability. Business method patents focus on ways of doing business in areas such as banking or e-commerce. The rate of filing petitions over this period has fluctuated but has generally declined since 2015, and none were filed in August or September 2017. USPTO has taken several steps to ensure the timeliness of trial decisions, review past decisions, and engage with stakeholders to improve proceedings under the program: Timeliness: USPTO regularly informs relevant parties about paperwork requirements and due dates throughout trials. According to program data, as of September 2017, all 181 completed trials were completed within statutorily required time frames. Decision review: USPTO has taken several steps to review its decisions and has monitored the rate at which the Court of Appeals for the Federal Circuit affirms or reverses them. However, USPTO does not have guidance, such as documented procedures, for reviewing trial decisions, or the processes leading to decisions, for consistency. Without guidance, such as documented procedures, USPTO cannot fully ensure that it is meeting its objective of ensuring consistency of decisions. Stakeholder engagement: USPTO judges have engaged with stakeholders by participating in public roundtables and webinars, and attending judicial conferences, among other things. Stakeholders GAO interviewed generally agreed that the CBM program has reduced lawsuits involving business method patents in the federal courts. While many stakeholders favored maintaining aspects of the program, there was not strong consensus among stakeholders for how future trials should be designed.
Background While its core mission of protecting federal facilities has remained constant as FPS moved from one agency to another, its responsibilities have changed. While in GSA’s PBS, FPS was responsible for protecting GSA held-or–leased facilities, providing both physical security and law enforcement services. To protect buildings, FPS officers developed physical security risk assessments, installed security equipment, and oversaw contract guard services. As a part of its law enforcement services, among other duties, FPS officers enforced laws and regulations aimed at protecting federal facilities and the persons in such facilities and conducted criminal investigations. Following the September 11, 2001 attacks, the Homeland Security Act of 2002 was enacted; it created DHS and moved FPS from GSA to the new department, effective in March of 2003. Within DHS at ICE, FPS’s responsibilities grew beyond solely protecting GSA buildings to include homeland security activities such as implementing homeland security directives and providing law enforcement, security, and emergency-response services during natural disasters and special events. In 2009, DHS proposed transferring FPS from ICE to NPPD. In explaining the proposed transfer in DHS’s fiscal year 2010 budget justification to Congress, DHS noted that this move would allow ICE to focus on its law enforcement mission of protecting the American people by targeting the people, money, and materials that support terrorist and criminal activities relating to our nation’s borders. DHS noted that FPS should reside within NPPD given that both agencies had responsibilities for implementing the National Infrastructure Protection Plan. DHS further noted that FPS would be able to gain synergy by working alongside NPPD’s Office of Infrastructure Protection and that having FPS and the Office of Infrastructure Protection in the same organization would further solidify NPPD as DHS’s lead for critical infrastructure protection. The fiscal year 2010 DHS appropriations act, which was signed into law on October 28, 2009, funded FPS under NPPD via revenue and collections of security fees. While in NPPD, FPS continued to lead physical security and law enforcement services at GSA-held or GSA-leased facilities and continued its efforts in homeland security activities. Throughout FPS’s different organizational placements in DHS, we have reported that FPS faces persistent challenges meeting its mission to protect facilities. In 2003, we designated federal real-property management as a high-risk area, in part, because of physical security challenges at federal facilities, such as the need for a risk-based approach to determining the level of security required. In 2011, we reported on FPS’s challenges in transferring mission support functions when transitioning from ICE into NPPD. While FPS has been in NPPD, we also reported on challenges FPS faced, such as in performing risk assessments, managing and overseeing contract guards, collaborating with GSA and the Marshals on facility security, and funding its operations. We made recommendations to help address these challenges, and FPS has made progress in addressing some of these recommendations. 
For example, FPS (1) developed a Modified Infrastructure Survey Tool to help it more effectively perform risk assessments, (2) coordinated with GSA and other agencies to reduce unnecessary duplication in risk assessments, (3) implemented new procedures to better manage and oversee contract guards, and (4) as of September 2018, established a formal agreement with GSA on roles and responsibilities related to facility protection. However, as we discuss later in this report, challenges related to other aspects of overseeing contract guards, collaborating with GSA and Marshals, and funding persist. In November 2018, legislation was enacted that could result in FPS moving for a third time, although the location has not been determined. This legislation—which reorganizes NPPD into an organization that has a greater statutory focus on managing cyber risks—requires the Secretary of Homeland Security to, within 90 days after the completion of our review, determine the appropriate placement for FPS within DHS and begin transfer of FPS to that entity. If the Secretary determines that DHS is not an appropriate placement for FPS, the Secretary would be required to submit to the Director of OMB and Congress an explanation of the reasons for such a determination—including, among other things, how DHS considered the results of our current review—and a recommendation on the appropriate placement of FPS within the executive branch of the federal government. When DHS was established, we identified organizational and accountability criteria for the department. From this prior work, we identified key criteria that are relevant to assessing potential placement options for FPS, as shown in table 2. In addition, other practices provide valuable insights for agency officials to consider when evaluating or implementing a reorganization or transformation. For example, we have previously reported (1) on key practices and questions for organizational transformations, mergers and consolidations, and agency reform efforts and (2) on best practices for the analyses of alternatives. We reported that organizational transformations, such as a change in organizational placement, can take many years to fully implement, can result in reduced productivity and morale in the short-term, and may require up-front investments. Therefore, we found that these practices and questions offer valuable insights for agency officials to consider when evaluating or implementing a reorganization or transformation. For example, in May 2012, we reported that a key practice in organizational change is for agency officials to identify and agree on the specific goals of the change—that is, what the agency expects to achieve by making the change—or the problems a change will solve. In July 2003, we reported that implementing a large-scale organizational transformation requires the concerted efforts of both leadership and employees to accomplish new organizational goals. In October 2015, we identified best practices for analyzing alternatives, such as defining criteria to assess alternatives, identifying a range of alternatives to assess, and analyzing the benefits and trade-offs of each alternative. Moving FPS to Any of the Selected Agencies Evaluated Would Result in Both Benefits and Trade-offs We found that none of the selected agencies met all the organizational placement criteria; thus, any of the organizational placement options could result in both benefits and trade-offs.
Officials from FPS and some of the selected agencies as well as representatives from other stakeholders we interviewed (e.g., an association of federal law enforcement officers, a union representing FPS employees, and others) provided us with examples of how those benefits and trade-offs might affect FPS. In instances where selected agencies met organizational placement criteria (that is, in instances where selected agencies were similar to FPS), FPS could experience benefits. See table 3 for a summary of how selected agencies met and did not meet key organizational placement criteria, and appendixes II and III for additional details. For example, for the mission, goals, and objectives criterion, DHS, NPPD, and Secret Service could provide benefits to FPS because, like FPS, their mission or goal statements as noted in their strategic plans include an explicit focus on the protection of infrastructure or specific facilities. Also, GSA has a statutory facility protection mission. Our prior work found that placing an agency into an organization that has a similar mission may help ensure that the agency's mission receives adequate funding, attention, visibility, and support. For the responsibilities criterion, DHS, CBP, Secret Service, Justice, and the Marshals could provide benefits to FPS, because all of these agencies, like FPS, perform both physical security and law enforcement activities. In the past, FPS faced challenges ensuring that both these activities were prioritized, according to FPS officials. Officials explained that a parent agency that is able to focus on both activities could help ensure equal and adequate attention in both areas. While there are similarities in responsibilities between FPS and these agencies, there are differences in the extent to which and for what purpose these agencies perform the responsibilities, some of which we discuss following table 3. Because none of the agencies met all criteria, placing FPS in any of the selected agencies would require trade-offs. For example: While placing FPS in DHS, NPPD, or the Secret Service may provide FPS benefits in areas related to mission, responsibilities, and information sharing, there could be some adverse effect on FPS's law enforcement operations or other activities. Specifically, as discussed above, placement in DHS, NPPD, or the Secret Service could provide FPS benefits because these agencies have similar missions and facility protection responsibilities, and have access to and share information related to national homeland security that FPS needs to carry out its mission. However, NPPD, for example, does not perform law enforcement activities. Therefore, according to FPS officials, FPS's law enforcement activities may not continue to receive full attention. Further, keeping FPS in NPPD may not address some of the challenges related to culture, such as morale issues that, according to an official from the association of law enforcement officers, stem in part from FPS not being placed in a law enforcement organization. If FPS were placed in the Secret Service, that agency may not have the administrative capacity to handle the additional FPS human capital workload. Secret Service officials told us that they have a staffing shortage, which is exacerbated by the time it takes to vet applicants and process new staff through background checks and security clearances.
As another example, FPS's placement in GSA or Marshals could enhance coordination among these agencies, but there could be some adverse effect on FPS's ability to carry out its mission or responsibilities. Specifically, GSA and Marshals could be appropriate choices as these agencies currently coordinate with FPS on facility protection. For GSA-held or -leased facilities, FPS is primarily responsible for protecting federal employees and visitors in those facilities while GSA, as the federal government's landlord, performs some physical security activities, such as funding and repairing security fixtures. At federal courthouses, FPS is the primary federal agency responsible for patrolling and protecting the perimeter while Marshals is responsible for the security of the federal judiciary and as such provides for security inside the building. However, we have found that FPS has faced challenges in coordinating with these agencies. In December 2015, for example, we found that FPS and GSA had not agreed on a common outcome related to facility protection or the roles and responsibilities to accomplish their missions. Further, in September 2011, we reported that FPS and Marshals faced challenges related to coordination, such as in the implementation of roles and responsibilities and the use or participation in existing collaboration mechanisms. In September 2018, NPPD and GSA signed a memorandum of agreement that, among other things, describes FPS's and GSA's roles and responsibilities, and FPS, Marshals, and other agencies involved in protecting courthouses (i.e., GSA and the Administrative Office of the U.S. Courts) are working to finalize a separate agreement for courthouse security. As these agreements are implemented, coordination between these agencies should improve, as we have previously reported that establishing clear roles and responsibilities, in agreements or through other mechanisms, contributes to effective coordination. In addition, Marshals may be a good placement option for FPS since both agencies perform physical security and law enforcement activities, and because both agencies use a large number of contract guards. However, because FPS does not share mission and goals with Marshals, Marshals may be less equipped to prioritize FPS's activities in the law enforcement and physical security areas. Justice and Marshals officials said that, in their view, Marshals is different from FPS because Justice and Marshals perform limited physical security activities and have an extensive law enforcement mission, while the opposite is the case for FPS. Further, Marshals officials said that FPS's and Marshals' law enforcement activities support different purposes—with Marshals supporting a violent-crime reduction mission and FPS supporting a facility protection mission. As a result, Marshals officials said that FPS's facility protection mission may not receive full attention. Regarding contract guards, Marshals' guard force is smaller, performs different activities, and has different requirements compared to FPS's guard force. Regarding GSA, while GSA performs some physical security activities, it does not perform law enforcement, which is a critical part of FPS's responsibilities and, according to some stakeholders we interviewed, a key aspect of FPS's culture. GSA also does not have the same access to information related to national homeland security as FPS currently has, and therefore, FPS's access to this information could be affected, according to officials.
Finally, various placement options could help FPS address some of its long-standing challenges such as in overseeing contract guards, collaborating with GSA and the Marshals, and funding. However, these placements could also affect whether FPS’s needs are prioritized. For example, placing FPS in GSA or the Marshals may further help address coordination challenges. Additionally, placing FPS in GSA could address challenges FPS faces with funding. If placed in GSA, GSA and FPS could consider whether to use the Federal Buildings Fund for security projects related to facility management, such as installing cameras. OMB staff said that there are limitations with the Federal Buildings Fund, such as the amount of funding available for security projects. Further, the adverse effect of placing FPS in either GSA or the Marshals is that Marshals does not share mission and goals with FPS and that GSA does not have law enforcement responsibilities; therefore, these agencies may not prioritize FPS’s needs. For additional information on how the various agencies met each criterion, see appendixes II and III. DHS Has Not Taken Key Steps to Fully Assess Potential Placement Options When managing an agency or considering an organizational change, such as that of FPS’s placement within or outside of DHS, our prior work has stated that an agency can benefit from periodically evaluating its organizational structure, identifying what a change is expected to achieve, and analyzing alternatives. Specifically, Standards for Internal Control in the Federal Government states that agency management should establish an organizational structure to achieve the agency’s objectives. According to the Standards, an effective management practice for attaining this outcome includes periodically evaluating the organizational structure to ensure that it meets its objectives and has adapted to changes. We have also reported that a key practice in organizational change is to identify and agree on what a change is expected to achieve or the problems the change will solve. The process of defining such expected outcomes can help decision makers reach a shared understanding of what challenges need to be addressed. Furthermore, we have reported on best practices for analyzing alternatives to help ensure that agencies select the option that best meets their needs. These practices can be applied to a wide range of activities or programs in which an alternative must be selected from a set of possible options. The practices include assessing the current environment to provide a basis for comparison with other alternatives and identifying and assessing benefits and trade-offs of each alternative. However, DHS has not taken key steps to fully assess potential placement options. Specifically, DHS has not assessed the organizational structure of FPS, such as its placement in NPPD, even though both have evolved since FPS was placed in NPPD in 2010. For example, NPPD has increased its focus on protecting the nation’s cyber infrastructure as threats in this area have grown, and its funding for this purpose has increased. In light of these changes, in 2015 and 2016, DHS proposed that NPPD restructure itself to increase its focus on cybersecurity. However, the proposals did not include an assessment of FPS’s organizational placement. The November 2018 legislation gave NPPD a greater statutory focus on cyber risk and may result in additional changes to the organization’s activities. 
Additionally, while in NPPD, FPS also has been increasingly engaged in providing law enforcement for homeland security, with the establishment of a rapid protection force that can respond to heightened threat situations. Given these changes, without an assessment, DHS cannot be certain that FPS is currently placed in an agency that enables FPS to meet its mission. Additionally, because DHS did not analyze FPS's current placement in NPPD, it does not have a benchmark for comparison to other agencies. Without such an analysis, it is unclear whether FPS needed to be moved from NPPD. On one hand, FPS made progress while placed in NPPD in addressing many of our recommendations, and some stakeholders we spoke with (officials from DHS and NPPD) said that FPS was in the right place in NPPD. For example, a DHS official stated that from a resource perspective there was no good reason to move FPS out of NPPD as the official had not seen a business case to do so. Additionally, an NPPD official stated that mission alignment and an opportunity to influence the national facility-security policy were compelling reasons for FPS to stay in NPPD. Further, NPPD officials said that FPS was meeting its mission and objectives. On the other hand, FPS continued to experience challenges in carrying out its mission in NPPD—such as in overseeing contract guards, collaborating with GSA and the Marshals, and having adequate funding—such that questions have been raised as to whether placing FPS in NPPD was successful. DHS has recently initiated an effort to evaluate FPS's placement, but it lacks several of the elements for a successful evaluation. Specifically, in August 2018, DHS, NPPD, and FPS established a working group with a draft charter with the objective of making a recommendation to the Secretary of Homeland Security on the organizational placement of FPS within DHS. The working group's evaluation criteria for FPS placement consist of mission, command and control, resources, implementation schedule, and workforce and culture. While establishing this group and identifying criteria are positive steps in assessing FPS's placement, the group's planned activities are limited in several ways. For example, while the charter is a draft, it does not indicate that the working group will describe what changing FPS's placement is expected to achieve. This factor is particularly important given that each placement option has its benefits and trade-offs and that stakeholders' opinions of the options varied. Changing FPS's placement could include: addressing one or more of the key criteria previously discussed in this report; addressing some or all of the challenges that persist, such as in collaboration or contract guard oversight; or a combination of both. Further, the draft charter does not indicate that the working group will evaluate agencies outside of DHS or incorporate best practices for analyzing alternatives, such as evaluating FPS's current placement in NPPD and the benefits and trade-offs of placement options. Without conforming to the best practices, DHS will not have assurance that the working group recommends the alternative that best meets mission needs. DHS's current approach to evaluating FPS's placement limits DHS's ability to reliably assess the merits of placement options supported by GSA and FPS.
GSA officials said GSA would take FPS and moving FPS back to GSA could benefit tenants in federal facilities, strengthen security support, and reduce redundancies because both agencies have federal facility protection responsibilities. Further, according to GSA, if consolidated under GSA, FPS could become more efficient, better manage costs, and leverage acquisition processes by making use of GSA’s existing services. FPS officials stated that they prefer FPS to be a standalone entity that reports directly to DHS leadership. According to FPS, being a standalone agency in DHS would establish the protection of federal facilities as a critical mission of DHS and provide FPS with the direct support of DHS leadership. Further, according to FPS officials, having this support would better enable them to carry out their mission. However, neither GSA nor FPS has conducted analyses to support their preferences, and DHS is not planning to look at options outside of DHS at this time. As a result, DHS cannot fully assess FPS’s or GSA’s positions. Once DHS identifies what it expects to achieve by moving FPS, in line with key practices for organizational change, and establishes an evaluation approach that reflects best practices for an analysis of alternatives, it will be in a position to best assess benefits and trade-offs previously discussed. In absence of these steps, DHS may not be positioning itself to make an informed decision as to what organization best supports FPS. Conclusions Over the past 15 years, FPS has been located in three different agencies (GSA, ICE, and NPPD), and there continues to be disagreement about whether it is currently in the best place to achieve its objectives. Further, agency and stakeholder opinions vary about where and whether FPS should move. DHS has established a working group to evaluate placement options for FPS. However, the working group’s planned activities do not include key steps to fully assess potential placement options. Specifically, while the group’s charter is a draft, it does not state whether it plans to assess FPS’s current placement in NPPD, what DHS expects to achieve by changing FPS’s placement, or effective placement options for relocating FPS. These steps would help DHS address legislation enacted in November 2018 requiring the review of placement options for FPS—including how DHS considered the results of our review. Regardless of the legislation, DHS cannot have a complete discussion that leads to an informed decision on FPS’s placement without taking these steps. Identifying the expected outcomes of changing FPS’s placement and performing analyses are critical because organizational change can take many years to fully implement, can result in reduced productivity and morale in the short-term, and may require up-front investments. Without determining what it expects to achieve by moving FPS and conducting an evaluation using appropriate criteria, DHS may not be well-positioned to identify an organization that best supports FPS. Recommendations for Executive Action We are making the following two recommendations to the Secretary of Homeland Security: The Secretary of Homeland Security—in consultation with NPPD and FPS—should identify the specific goals of a change in FPS’s placement— that is, what DHS expects to achieve by moving FPS to another agency. 
(Recommendation 1) The Secretary of Homeland Security—in consultation with NPPD, FPS, and other agencies as relevant—should fully evaluate placement options for FPS based on what DHS expects to achieve by changing FPS’s placement, an assessment of FPS’s current placement, and other best practices such as an analysis of alternatives assessing the benefits and trade-offs discussed in this report. (Recommendation 2) Agency Comments We provided a draft of this product to DHS, GSA, Justice, and OMB for comment. In its comments, reproduced in appendix IV, DHS concurred with our recommendations and outlined steps it plans to take to address them. DHS also provided technical comments, which we incorporated as appropriate. GSA, Justice, and OMB only provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the Administrator of General Services, the Attorney General, the Director of OMB, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff has any questions about this report, please contact me at (202) 512-2834 or RectanusL@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology To address our objectives, we reviewed our prior work related to organizational transformation, where we identified organizational and accountability criteria that Congress should consider when determining which agencies to include or exclude from the newly created DHS. The criteria are relevant to our review of FPS’s organizational placement as Congress considers whether to include or exclude FPS in various agencies within and outside DHS. We selected a subset of the criteria that are the most relevant to FPS’s organizational placement to include in our review. For each criterion, we also identified elements (i.e., characteristics) that are specific to FPS based upon our review of FPS documents and our prior work on topics related to the criteria, and our discussions with federal officials with experience in facility security, the Federal Law Enforcement Officers Association, and a former high-ranking official in NPPD with knowledge of FPS. To identify challenges facing FPS, we reviewed our past work and the status of our prior recommendations, and interviewed stakeholders and agency officials. We reviewed pertinent proposed and enacted legislation related to DHS’s reauthorization and FPS. We reviewed Standards for Internal Control in the Federal Government for relevant management responsibilities. And, we reviewed our prior reports on key practices and questions for organizational change and best practices for an analysis of alternatives process. We used practices identified in these reports as well as internal controls to assess the steps DHS has taken to assess placement options for FPS. We applied the key criteria to eight selected agencies in DHS, GSA, and the Department of Justice (Justice) that we determined could be potential organizational placement options for FPS, as shown in table 4. We selected three of our eight placement options (CBP, ICE, and Secret Service) based upon our review of the most recently available data from the Department of Justice on the number of federal law enforcement officers. 
We selected these three agencies because they employed the largest number of law enforcement officers within DHS. Our selection of agencies with federal law enforcement officers is relevant because FPS employs such officers. We selected three options (GSA, NPPD, and a standalone entity in DHS) because FPS was previously organizationally placed within GSA, is currently placed in NPPD, and because of FPS’s preference to be a standalone entity reporting directly to the Deputy Secretary of DHS. We selected our remaining two options (a standalone entity within Justice and the Marshals) because the duties of the Marshals include law enforcement and protection of federal courthouses and because legislation proposed during our review would have, if enacted, instructed the Secretary of Homeland Security to recommend the appropriate placement of FPS within the executive branch of the federal government. We also identified DHS’s Office of the Chief Security Officer as an office within DHS that has the facility security responsibility for managing contract guards at DHS’s former headquarters at the Nebraska Avenue Complex in Washington, D. C. We determined that this security office is a policy office within DHS’s Management Directorate with its primary mission being the security of DHS employees and a focus on expanding internal security policy. For the purposes of our review, we did not include OCSO as a potential placement option for FPS because the security office does not have a large number of law enforcement officers, plans to divest operational security responsibilities, and was not a previous, current or FPS desired placement. Our exclusion of OCSO does not preclude DHS from assessing OCSO as a placement option for FPS. We reviewed documentation and interviewed officials from FPS and the selected agencies to identify similarities, differences, and other considerations with regard to each of the key criteria. For the first four key criteria—(1) mission, goals, and objectives; (2) responsibilities; (3) organizational culture; and (4) information sharing and coordination—we determined that a selected agency met the criteria if the agency or its subcomponents have any similarities to FPS. For the last criterion— mission support—we determined that a selected agency met the criterion if the agency or its subcomponents have mission support similar to FPS or could provide mission support that FPS needs. Although we used the key criteria to assess eight agencies we selected, the criteria can be used to assess any potential placement option for FPS. 
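As a rough illustration of how such a criteria-based comparison could be recorded and tallied for any candidate agency, the following Python sketch uses the five key criteria with hypothetical agency names and yes/no entries; the entries are placeholders for illustration, not our assessment results.

```python
# A minimal sketch of recording whether candidate agencies meet the five key
# organizational-placement criteria; agency names and yes/no entries below are
# hypothetical placeholders, not actual assessment results.
CRITERIA = [
    "mission, goals, and objectives",
    "responsibilities",
    "organizational culture",
    "information sharing and coordination",
    "mission support",
]

# Hypothetical determinations for two candidate agencies (True = meets criterion).
assessments = {
    "Candidate Agency A": dict(zip(CRITERIA, [True, True, False, True, False])),
    "Candidate Agency B": dict(zip(CRITERIA, [False, True, True, False, True])),
}

def criteria_met(agency):
    """List the key criteria a candidate agency meets."""
    return [c for c in CRITERIA if assessments[agency].get(c, False)]

for agency in assessments:
    met = criteria_met(agency)
    print(f"{agency}: meets {len(met)} of {len(CRITERIA)} criteria")
    for criterion in met:
        print(f"  - {criterion}")
```

In our review, the underlying determinations were qualitative and rested on documentation and interviews; a simple tally such as this does not capture the magnitude of each agency's activities or the benefits and trade-offs discussed in this report.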
We also reviewed documentation and conducted interviews with stakeholders including: representatives from the Federal Law Enforcement Officers Association; representatives from the American Federation of Government Employees Local 918 (the union that represents NPPD employees—including FPS); representatives from two unions that represent a large number of Protective Security Officers (i.e., contract guards), the United Government Security Officers of America and the Security, Police and Fire Professionals Association of America; representatives from the National Association of Security Companies (an association of contract guard companies); officials from agencies that coordinate with or use FPS for facility protection, including the Department of Justice for law enforcement coordination, and the Internal Revenue Service and the Social Security Administration as large users of FPS facility protection; staff from the Office of Management and Budget; and officials from DHS's Interagency Security Committee, which develops the security standards for non-military federal facilities. We also obtained views from a former high-ranking official in NPPD with knowledge of FPS. Additionally, we obtained views from officials, staff, and representatives from FPS, the selected agencies, and stakeholders on the alignment between FPS and the agencies as well as on the potential placement options. The results of these interviews are non-generalizable to all of FPS's stakeholders but provide useful examples of considerations related to various placement options. We conducted this performance audit from June 2017 to January 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comparison of Selected Agencies and the Federal Protective Service (FPS) in Elements Associated with Key Organizational-Placement Criteria Based on our prior work related to organizational transformation, we identified five key criteria to consider when assessing placement options for FPS: (1) mission, goals, and objectives; (2) responsibilities; (3) organizational culture; (4) information sharing and coordination; and (5) mission support. For each criterion, we identified elements that are specific to FPS. We identified these elements from documentation and interviews with federal officials with experience in facility security, the Federal Law Enforcement Officers Association, a former high-ranking official in NPPD with knowledge of FPS, and our review of prior work on topics related to the criteria. We compared selected agencies that could be placement options to FPS in each of the elements—see tables below. The selected agencies are the Department of Homeland Security (DHS), U.S. Customs and Border Protection (CBP), U.S. Immigration and Customs Enforcement (ICE), National Protection and Programs Directorate (NPPD), United States Secret Service (Secret Service), General Services Administration (GSA), Department of Justice (Justice), and the U.S. Marshals Service (Marshals). We assumed that FPS would be a standalone entity in DHS, GSA, and Justice.
For elements in the first four criteria—(1) mission, goals, and objectives; (2) responsibilities; (3) organizational culture; and (4) information sharing and coordination—a "yes" in the table means that any function of the selected agency or its subcomponents has similarities to FPS. For elements in the last criterion—mission support—a "yes" means that any function of the selected agency or its subcomponents has mission support similar to FPS's or could provide mission support that FPS needs. For all criteria, the "yes" designation does not account for the magnitude of the effort or activities performed by each of the selected agencies. Appendix III: Summary of Selected Agencies' Similarities and Differences Related to Key Organizational-Placement Criteria We identified five key organizational placement criteria that are relevant to consider when assessing FPS's placement: (1) mission, goals, and objectives; (2) responsibilities; (3) organizational culture; (4) information sharing and coordination; and (5) mission support. We evaluated whether selected agencies that could be placement options for FPS met the key organizational placement criteria. The selected agencies are the Department of Homeland Security (DHS); U.S. Customs and Border Protection (CBP); U.S. Immigration and Customs Enforcement (ICE); National Protection and Programs Directorate (NPPD); United States Secret Service (Secret Service); General Services Administration (GSA); Department of Justice (Justice); and the U.S. Marshals Service (Marshals). We assumed that FPS would be a standalone entity in DHS, GSA, and Justice. For the first four criteria—(1) mission, goals, and objectives; (2) responsibilities; (3) organizational culture; and (4) information sharing and coordination—we determined that a selected agency met the criteria if the agency or its subcomponents have similarities to FPS in relevant elements identified in appendix II. We determined that a selected agency met the mission support criterion if the agency or its subcomponents have similarities to FPS or could provide mission support that FPS needs in relevant elements. Mission, Goals, and Objectives FPS's mission focuses on the protection of federal facilities and the people working in and visiting those facilities. In table 10 and subsequent paragraphs, we describe how selected agencies met the mission, goals, and objectives criterion—that is, the selected agencies that were similar to FPS for this criterion—areas of consideration if FPS is placed in those agencies, and how the selected agencies did not meet the criterion. DHS, NPPD, and Secret Service are similar to FPS in that their mission statements or goals as stated in their strategic plans include an explicit focus on the protection of infrastructure or specific facilities. GSA has a statutory facility protection mission. Our prior work found that placing an agency into an organization that has a similar mission may help ensure that the agency's mission receives adequate funding, attention, visibility, and support. One of DHS's goals—as noted in its strategic plan covering fiscal years 2014 to 2018—is to reduce risk to the nation's critical infrastructure. In addition, NPPD's mission is to lead the national effort to protect and enhance the resilience of the nation's physical and cyber infrastructure. To carry out this mission, NPPD coordinates efforts to protect infrastructure in 16 critical infrastructure sectors, including a government facilities sector.
Further, the Secret Service’s mission is to ensure, among other things, the security of the United States President, Vice President, and other individuals. The Secret Service’s Uniformed Division protects locations necessary for accomplishing its mission of protecting these individuals. Per statute, GSA is responsible for the operation, maintenance, and protection of buildings and grounds occupied by the federal government and under the jurisdiction, custody, and control of GSA. While DHS, NPPD, Secret Service, and GSA may be good placement options for FPS given their similarities in mission or goals (i.e., focus on infrastructure or facility protection), stakeholders we interviewed identified some key areas of consideration that may have a bearing on how well FPS would fit in NPPD, Secret Service, and GSA. NPPD: FPS and NPPD officials expressed concerns about the fit between the two agencies given differences in how they perform their infrastructure protection missions. Specifically, FPS has employees who directly protect federal facilities, while NPPD’s physical infrastructure protection efforts provide guidance and resources to federal, state, and local governments, and private sector companies so that they can protect their facilities. Furthermore, officials from FPS, NPPD, the union representing FPS officials, an association representing federal law enforcement officers, and a former high-ranking official in NPPD said that a difference between the two agencies is that FPS performs law enforcement activities to carry out its protection mission while NPPD does not. Secret Service: Officials from FPS and Secret Service said that placing FPS in the Secret Service could present challenges because the two agencies’ missions have some fundamental differences—FPS focuses on protecting federal facilities and Secret Service focuses on protecting individuals such as the United States President and Vice President. Furthermore, another difference is that the scope of facilities that the Secret Service protects is smaller and narrower than FPS’s, according to FPS and Secret Service officials. FPS protects about 9,000 facilities throughout the United States, while Secret Service’s Uniformed Division—which is responsible for protecting facilities—protects a limited number of facilities in the National Capital Region (e.g., the White House, the Vice President’s residence). FPS officials said that another consideration is that FPS’s mission of protecting federal facilities would get lost in Secret Service’s mission of protecting the President of the United States and other key individuals. GSA: Stakeholders provided differing views on how well FPS would fit in GSA. An official from CBP and officials from Justice said that FPS should be placed in GSA because FPS focuses on GSA-held or -leased facilities. Furthermore, GSA officials stated that FPS and GSA could merge as both have the authority to protect federal facilities, and there is an intuitive relationship between GSA’s focus on the management and operations of federal facilities and FPS’s mission of securing federal facilities. Conversely, officials from FPS, staff from OMB, and officials of an association that represents security companies said that FPS should not move back to GSA. 
These officials and staff said that FPS should not move to GSA because, among other reasons, the two agencies have different missions: GSA focuses on federal real estate and some physical security activities, not homeland security or law enforcement. CBP, ICE, Justice, and Marshals do not have mission statements or goals that focus explicitly on infrastructure or facility protection. Nonetheless, as we discuss in the next section of this report, CBP, Justice, and Marshals have some facility protection responsibilities. In addition, FPS and the selected agencies share few or no operational objectives. DHS, ICE, and NPPD share one or two operational objectives with FPS—DHS shares objectives that focus on mitigating risks and responding to incidents, ICE shares one that focuses on intelligence gathering, and NPPD shares one that focuses on facility assessments. FPS, Justice, and Marshals have a few similar operational objectives. The three agencies have objectives that focus on the integration and use of intelligence information. FPS and Marshals also have similar objectives that focus on facility assessments, mitigating risks, and rapidly responding to emergencies and incidents. Responsibilities To carry out its facility protection mission at about 9,000 federal facilities, FPS performs physical security as well as law enforcement activities. As a part of its physical security activities, FPS conducts facility security assessments, identifies countermeasures (e.g., equipment and contract guards) best suited to secure a facility, and oversees contract guards. As a part of its law enforcement activities, FPS proactively patrols facilities, responds to incidents, and conducts criminal investigations, among other things. FPS also provides additional operational law enforcement support, at the direction of the Secretary of Homeland Security, to address emerging threats and homeland security incidents. According to FPS officials, previous placements have focused on physical security or law enforcement, but not both. For example, FPS officials told us that because of ICE’s focus on law enforcement, FPS’s physical security activities took a backseat to ICE’s law enforcement mission. Similarly, according to FPS officials, NPPD has not prioritized FPS’s law enforcement activities because NPPD does not have a focus on law enforcement. One of FPS’s most critical activities is overseeing about 13,500 contract guards who are posted at federal facilities and are responsible for controlling access to facilities, responding to emergency situations involving facility safety and security, and performing other duties. FPS is responsible for overseeing these guards to ensure, among other things, that they are performing their assigned duties and have the necessary training and certifications. We have reported on challenges FPS faces in overseeing contract guards. For example, in August 2012, we reported that FPS faced challenges ensuring that contract guards have the necessary training and certifications. We found that although FPS verifies contractor-reported guard certification and training information by conducting monthly audits, FPS does not independently verify the contractor’s information. In table 11 and subsequent paragraphs, we describe how selected agencies met the “responsibilities” criterion—that is, the selected agencies that were similar to FPS for this criterion—areas of consideration if FPS is placed in those agencies, and how the selected agencies did not meet the criterion. 
Like FPS, DHS, the selected agencies in DHS (except ICE), GSA, Justice, and Marshals have responsibilities for federal facility protection. As discussed above, DHS, NPPD, and the Secret Service have mission or goal statements that explicitly address infrastructure or facility protection. CBP’s, GSA’s, Justice’s, and Marshals’ mission or goal statements do not explicitly state a focus on infrastructure or facility protection, but these agencies have some facility protection responsibilities to help achieve their missions. For example, GSA has some protection responsibilities for about 8,700 GSA-held or GSA-leased facilities in support of its mission of managing the federal real estate portfolio. GSA conducts repairs that affect the operation of building security equipment and develops policy and requirements for the building security used in the design and construction of GSA buildings. Marshals has security responsibilities at federal courthouses in support of its mission to protect, defend, and enforce the nation’s justice system. Stakeholders we interviewed identified some areas of consideration that may have a bearing on how well FPS would fit in agencies that have facility protection responsibilities: Officials from FPS and Marshals questioned how FPS would meld with agencies that protect facilities on a smaller scale. CBP, Justice, and Marshals perform facility protection at a smaller number of facilities as compared to FPS and GSA: CBP has facility protection responsibilities at about 1,200 border patrol stations, ports of entry, and other facilities; Justice (excluding Marshals) at 36 facilities; and Marshals at about 430 facilities with a judicial presence, while FPS and GSA have protection responsibilities at about 9,000 and 8,700 facilities, respectively. Justice and Marshals officials said that there are some differences between their agencies and FPS’s facility protection responsibilities. Specifically, these officials said that unlike FPS, Justice and Marshals have limited responsibilities for facility protection, and in the case of Marshals, this responsibility is related to the protection of the federal judiciary. Physical Security and Law Enforcement Activities FPS most closely aligns with DHS, CBP, Secret Service, Justice, and Marshals because these agencies perform both physical security and law enforcement activities. However, as discussed in the paragraph below, there are differences in the extent to which and for what purpose these agencies perform these activities. The remaining agencies perform either physical security (NPPD, GSA) or law enforcement activities (ICE), but not both. While DHS, CBP, Secret Service, Justice, and Marshals align with FPS with regard to the two types of activities it performs, there are differences in how these agencies perform these activities because these agencies’ activities and missions differ from FPS’s. For example, Justice and Marshals officials explained that in their view, Justice and Marshals are different from FPS because Justice and Marshals perform limited physical security activities and have extensive law enforcement missions, whereas FPS has a limited law enforcement mission and an extensive facility protection mission. Further, Marshals officials said that FPS’s and Marshals’ law enforcement activities support different purposes—with Marshals supporting a violent-crime reduction mission and FPS supporting a facility protection mission. 
As a result, Marshals officials said that FPS’s facility protection mission may not receive full attention. Further, while FPS performs law enforcement activities relevant to federal facility protection, the Secret Service performs law enforcement relevant to protecting key individuals, such as the President. Furthermore, although GSA does not perform law enforcement activities, GSA officials said that if FPS moved to GSA, its leadership would provide FPS organizational support that would enable both FPS’s law enforcement and physical security activities. FPS officials stated that if FPS moved outside of DHS, the Secretary of Homeland Security—who is responsible for protecting the nation—may lose protection responsibilities for federal facilities as well as the ability to use FPS for law enforcement support when needed for homeland security. Contract Guard Responsibilities Like FPS, Marshals also employs a large number of contract guards for facility protection. The remaining agencies (DHS, CBP, ICE, NPPD, Secret Service, GSA, and Justice) use FPS’s contract guards, procure a limited number of guards, or use their own federal officers for facility protection, according to officials from these agencies. Similar to FPS, Marshals also performs compliance reviews of training and certification information maintained by its contractors, and Marshals officials explained that these reviews are performed periodically. Staff from OMB and an association of security companies said that Marshals may be a good fit for FPS because Marshals, like FPS, uses a contract guard force. We have previously reported that a consideration of moving one agency into another is whether the move can help improve the efficiency and effectiveness of agency missions by, among other things, addressing gaps. In this regard, one consideration is whether FPS could leverage the Marshals’ oversight of its own contract guards to address its ongoing challenges in this area. However, differences between FPS’s and Marshals’ contract guard programs exist. For example, Marshals’ guard force is smaller than FPS’s, with about 4,400 guards, and the day-to-day duties of FPS’s contract guards are different from those of Marshals’ contract guards. Both FPS’s and Marshals’ contract guards control access to facilities. However, Marshals’ contract guards also provide security for the judicial process, such as providing armed escort services to judges, jurors, and other court personnel and providing security in a courtroom during hearings. Furthermore, some requirements between the two guard forces vary. For example, Marshals has more stringent requirements for contract guards in the areas of education and law enforcement experience. Organizational Culture While there are many areas relevant to organizational culture, law enforcement is a key aspect of FPS’s organizational culture, according to officials from an association of security companies and a former high-ranking official in NPPD. One area that has affected FPS’s culture, particularly morale, according to an official from the association of law enforcement officers, is that FPS’s criminal investigators receive federal law enforcement officer retirement benefits, while its inspectors—who also perform some law enforcement and who form the majority of FPS’s workforce—do not. 
In table 12 and subsequent paragraphs, we describe how selected agencies met the organizational culture criterion—that is, the selected agencies that were similar to FPS for this criterion—areas of consideration if FPS is placed in those agencies, and how the selected agencies did not meet the criterion. DHS, nearly all the selected agencies in DHS, and Justice have cultures similar to FPS because they are all law enforcement agencies, but NPPD and GSA do not. An official from an association of federal law enforcement officers said that moving FPS to a law enforcement agency may improve FPS’s employee satisfaction. Specifically, this official explained that one advantage of moving FPS to a law enforcement agency is that FPS inspectors could be reclassified into positions that would receive federal law enforcement officer retirement benefits, leading to improved employee satisfaction and retention. FPS officials said that they see Justice’s long-standing culture focused on law enforcement as one of Justice’s advantages. Although FPS and some of the selected agencies are similar in that their cultures focus on law enforcement, there are differences among their cultures. For example, FPS officials questioned how their agency would meld with the Secret Service since the Secret Service has a long history, and Marshals officials said that FPS and the Marshals do not have comparable legacies. The Secret Service and Marshals have been around for about 150 and 230 years, respectively, while FPS has a 47-year history. In addition, FPS and the law enforcement agencies may have different hiring practices, which can influence the culture of the workforce. Secret Service, for example, requires that all its employees hold a top-secret security clearance. This level of clearance is not required for all of FPS’s employees, according to an FPS official. If FPS moved to Secret Service, Secret Service officials stated that there may be a need to create different workforce categories due to differences in the hiring requirements, a situation that may affect FPS’s and the Secret Service’s employee morale. Information Sharing and Coordination Regarding information sharing, in 2016, DHS designated a division within FPS as a Component Intelligence Program (CIP). CIPs are organizations in DHS that collect, gather, process, analyze, produce, or disseminate information related to national homeland security. According to FPS officials, FPS’s participation in meetings held by the CIPs is important because it provides FPS more visibility on the threats that other DHS agencies have identified and actions they plan to take. Further, FPS shares information obtained in CIP meetings with federal agencies across the United States to support emergency preparedness, security, and employee safety. Additionally, as a CIP, FPS has an opportunity to provide input on the national homeland-security information that the Secretary of Homeland Security receives. Finally, FPS has greater access to information than it might otherwise receive without the CIP designation. FPS officials said that FPS’s designation as a CIP was a “game changer” for FPS’s ability to identify and share information on emerging threats. FPS officials explained that FPS’s placement could influence whether FPS continues to have direct access to information related to national homeland security that it needs to carry out its mission. 
Regarding coordination, FPS currently coordinates with both GSA and Marshals to fulfill its facility protection mission; however, we have reported on challenges FPS has faced in coordinating with these agencies. FPS’s coordination with GSA: FPS and GSA share responsibility for protecting federal facilities. FPS is primarily responsible for protecting federal employees and visitors in federal facilities held or leased by GSA. GSA serves as the federal government’s landlord and, in this role, performs some physical security activities, such as funding and repairing security fixtures. In December 2015, we found that FPS and GSA had not agreed on a common outcome related to facility protection or the roles and responsibilities to accomplish their missions. FPS’s coordination with Marshals: FPS coordinates with Marshals to protect about 430 federal courthouses. At courthouses held or leased by GSA, FPS is the primary federal agency responsible for patrolling and protecting the perimeter of the facilities and for enforcing federal laws and regulations in those facilities. Marshals has primary responsibility for the security of the federal judiciary, including the safe conduct of court proceedings and the security of federal judges, court personnel, jurors, and the visiting public. In September 2011, we reported that FPS, Marshals, and other agencies involved in protecting courthouses (i.e., GSA and the Administrative Office of the U.S. Courts) faced challenges related to coordination, such as in the implementation of roles and responsibilities and the use of or participation in existing collaboration mechanisms. In table 13 and subsequent paragraphs, we describe how selected agencies met the information sharing and coordination criterion—that is, the selected agencies that were similar to FPS for this criterion—areas of consideration if FPS is placed in those agencies, and how the selected agencies did not meet the criterion. Like FPS, all of the selected agencies except GSA have access to and can share information related to national homeland security, and these agencies could share that same information with FPS. Specifically, like FPS, the selected agencies in DHS are CIPs or participate in other groups that have access to and can share information related to national homeland security. Justice and Marshals have access to homeland security information through the Federal Bureau of Investigation and participate in separate groups where national homeland security information is shared, including the Joint Terrorism Task Force and the National Counterterrorism Center. While selected agencies in DHS and Justice are similar to FPS in the area of information sharing, there are some differences and challenges that decision makers would need to consider before placing FPS in these agencies. For example, FPS and the selected agencies in DHS and Justice require different types of information to meet their respective mission needs. In previous organizational placements, FPS has faced challenges with information sharing. For example, FPS officials told us that when FPS was part of ICE, they relied on ICE to provide them with information, which slowed down FPS’s ability to react to information specific to facility protection. This may not be an issue if FPS continues to have direct access to information as a CIP. 
While GSA does not have access to national homeland security information, GSA has access to and shares information pertinent to the security of government facilities through, among other sources, participation in the government facilities sector of the Government Coordinating Council and Interagency Security Committee. Officials from FPS, an association of security companies, and a former high-ranking official in NPPD said that if FPS moved to GSA, FPS could lose direct access to critical information that is necessary for it to accomplish its mission. Furthermore, staff from OMB said that FPS’s participation in DHS’s homeland security groups has given the agency some level of credibility. Thus, if FPS moved to an agency that does not have access to national homeland security information, such as GSA, there may be resistance from DHS agencies and others in sharing information with FPS, according to the OMB staff. If FPS moved to Justice or Marshals, FPS officials said that they would be able to continue to access and share homeland security information through Justice’s information sharing community. Thus, a move to either of these two agencies would not have as great an impact on FPS’s access to homeland security information as a move to GSA would, according to FPS officials. Coordination Based on the coordination challenges we found in our prior work, FPS and GSA or Marshals may continue to disagree on roles and responsibilities if FPS is placed in these agencies. However, in September 2018, NPPD and GSA signed a memorandum of agreement that, among other things, describes FPS’s and GSA’s roles and responsibilities, and FPS, Marshals, GSA, and the Administrative Office of the U.S. Courts are working to finalize a separate agreement for courthouse security. Accordingly, coordination between these agencies should improve with the implementation of these agreements, as we have previously reported that establishing clear roles and responsibilities, in agreements or through other mechanisms, contributes to effective coordination. Moving one agency into another does not necessarily mean that the two agencies will coordinate better. As discussed earlier in this report, FPS moved from ICE to NPPD so that FPS could gain synergy with NPPD’s Office of Infrastructure Protection, which is responsible for coordinating infrastructure protection across government and the private sector. According to OMB staff we interviewed, this synergy has not happened in part because NPPD’s and FPS’s missions are self-contained—with FPS focused on federal facility infrastructure and the Office of Infrastructure Protection focused on other types of infrastructure, including privately owned infrastructure. DHS, CBP, ICE, NPPD, and Secret Service do not have joint responsibilities for coordinating facility protection because these agencies rely on FPS to provide security services or provide their own security services. Mission Support FPS officials told us that over the course of its previous organizational placements, FPS’s mission support capabilities have matured and that it is now able to provide its own mission support in most areas. For example, FPS owns and uses many of the key operational and business-related information technology (IT) systems and applications it needs to carry out its mission. Despite the maturation of FPS’s in-house mission support activities, FPS still receives some mission support services from other agencies in DHS, such as human capital and some aspects of information technology. 
FPS would need mission support in these areas if it changed its organizational placement. Separately, FPS has faced challenges in the area of financial management, and changing FPS’s placement could help address those challenges. Finally, FPS offers its own training courses and has access to DHS’s Federal Law Enforcement Training Centers (FLETC), and therefore it does not need mission support from a parent agency in this area. In table 14 and subsequent paragraphs, we describe how selected agencies met the mission support criterion—that is, the selected agencies that had mission support that FPS needs—areas of consideration if FPS is placed in those agencies, and how the selected agencies did not meet the criterion. Among the agencies we reviewed, GSA has the infrastructure to support FPS in its funding approach. FPS officials told us that one of the key challenges they experienced in ICE was that ICE did not have institutional knowledge of FPS’s funding approach, particularly FPS’s fee structure, and FPS experienced changes in fees that were not aligned with what was needed to cover its efforts. FPS funds its operations by collecting security fees from federal agencies that use FPS for facility protection. GSA is well positioned to support FPS’s funding approach because it is the only agency we reviewed that also collects monies from multiple federal agencies to support some of its operations. According to documentation we reviewed and interviews with officials from selected agencies, we found that among the remaining agencies, some do not collect fees (NPPD, Secret Service) and others collect fees to support operations, but not from other federal agencies (DHS, CBP, ICE, Justice, Marshals). Further, based on our review of FPS’s fiscal year 2019 budget request to Congress and our past work, we found that FPS faces challenges in generating enough revenue to cover its operational costs. If FPS were placed in GSA, GSA and FPS could consider whether to use the Federal Buildings Fund for security projects related to facility management, such as installing cameras. OMB staff said that there are limitations with the Federal Buildings Fund, such as the amount of funding available for security projects. Further, OMB staff said that finding cost-effective ways for FPS to carry out its operations will help the agency address its funding challenges. Human Capital Any of the selected agencies could provide the human capital support FPS needs. FPS performs some human capital activities, such as estimating the number of staff it needs to perform its mission, but it does not have delegated examining authority that allows it to fill competitive civil service jobs. NPPD—FPS’s current parent agency—has this authority and is responsible for recruiting, hiring, and performing other human capital services on behalf of FPS. All the selected agencies we reviewed have delegated examining authority. Thus, any one of these agencies could provide human capital services on behalf of FPS. Officials from three of the selected agencies—ICE, the Secret Service, and Marshals—said that they already face challenges with hiring enough staff to fulfill their own missions or may not have the administrative capacity to handle an additional human capital workload for FPS. For example, officials from the Secret Service and Marshals said they have staffing shortages, which negatively affects their ability to fulfill their missions. 
The shortage is exacerbated by the time it takes to vet applicants and process new staff through background checks and security clearances, according to the officials. Marshals officials said that absorbing FPS would not help the agency address the staffing shortage because FPS employees perform a different mission, including a different law enforcement mission, which requires different skill sets, training, and so on. Further, Marshals officials said that given the time it takes to vet its own applicants and process its own staff, it lacks the administrative capacity to take on a new agency. Finally, Justice officials said that if FPS moved into Marshals, FPS staff would require ongoing human resources support for such things as performance management, payroll, personnel action processing, and benefits counseling. They said that Marshals is not staffed to assume the full human capital services required of another agency. Separately, an official from ICE said that the agency’s human capital office is currently undergoing a major realignment of service functions and that given FPS’s large workforce, ICE would not have the administrative capacity to take on the additional human capital workload for FPS. NPPD may experience some gaps in providing some human capital functions if FPS moved out of NPPD. According to NPPD, FPS provides NPPD 23 staff positions to help NPPD carry out its human capital activities. If FPS moved out of NPPD, NPPD staff said that 15 of the positions could be realigned back to FPS. The remaining 8 positions, which perform major functions, including processing pay and managing information technology systems for human capital needs, would need to remain in NPPD if they are not replaced by NPPD. According to NPPD officials, the human capital teams that perform these functions are already understaffed and the skill sets for these functions are not plentiful in the workforce. Thus, if NPPD were unable to retain these positions, NPPD officials said that there may be significant gaps, such as in processing pay. Information Technology (IT) FPS’s operational and business-related IT systems and applications would not be greatly affected by a change in FPS’s organizational placement because FPS owns many of the systems and applications it needs to carry out its mission. For example, FPS owns a system to help agency officials conduct and track facility security assessments and another system to track law enforcement activities (e.g., tracking investigative cases and incidents). If FPS’s placement changed, the agency could take its systems with it, though there may be some transition or integration costs, according to FPS officials. FPS uses some IT systems or applications that it does not own and that would need some consideration if FPS changed its organizational placement, particularly if FPS moved outside DHS. For example, FPS uses ICE’s system for managing financial transactions and ICE’s IT network. If FPS moved outside of DHS, resources would be needed to remove FPS from this ICE system and network, according to FPS officials. GSA and Justice have financial management systems that FPS could use. Marshals does not have its own financial management system but uses Justice’s system. According to Justice and Marshals officials, Justice’s financial management system is currently not configured to support the collection of fees that support operations. 
Any changes to the configuration of Justice’s financial management system, such as the inclusion of FPS’s fee-based collections, would require the approval of Justice and possibly other Justice components that use the system. If FPS stayed within DHS, including as a standalone entity within DHS, it could potentially continue to use ICE’s system or use CBP or the Secret Service’s systems. Training DHS, CBP, ICE, Secret Service, Justice, and Marshals provide law enforcement training, but FPS would not need access to such training if placed in these agencies because FPS provides its own training on topics related to facility protection. For example, FPS provides training to its inspectors on physical security activities, such as identifying countermeasures needed at facilities. FPS officials said that there would be no efficiency gained in merging FPS and these agencies’ training programs because FPS performs activities that most other law enforcement agencies do not perform. NPPD and GSA do not perform law enforcement activities and therefore do not have law enforcement training programs. If moved to either of these two agencies, FPS could continue to use its own training courses. Furthermore, CBP, ICE, Secret Service, and Marshals are Federal Law Enforcement Training Centers (FLETC) Partner Organizations, meaning that they have access to training provided at FLETC training facilities. FPS is also currently designated as a FLETC Partner Organization and therefore would not need to rely on these agencies to obtain this designation. All Partner Organizations, regardless of whether they are DHS agencies or not, share the same equal privileges at FLETC, including priority scheduling for basic and advanced law enforcement training. Nonetheless, Justice and Marshals officials explained that their FLETC training curriculum, planning, and structure are vastly different than other Partner Organizations due to the differing mission sets. NPPD and GSA are not FLETC Partner Organizations. According to FLETC officials, however, because FPS is currently a FLETC Partner Organization, it would continue to have access to FLETC while in NPPD or GSA. Appendix IV: Comments from the U.S. Department of Homeland Security Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Amelia Bates Shachoy (Assistant Director); Roshni Davé (Analyst-in-Charge); Ben Atwater; Jazzmin Cooper; George Depaoli; Adam Gomez; Geoffrey Hamilton; Malika Rice; Amy Rosewarne; Kelly Rubin; Sarah Veale; and Amelia Michelle Weathers made key contributions to this report.
Why GAO Did This Study FPS, within DHS's NPPD, conducts physical security and law enforcement activities for about 9,000 federal facilities and the millions of employees or visitors who work in or visit these facilities. FPS moved from GSA to DHS's ICE in 2003 and to NPPD in 2009. GAO has reported that FPS faced challenges in each location. Legislation enacted in November 2018 requires DHS to review placement options for FPS and could result in FPS moving again within DHS or to another executive branch agency. GAO was asked to review issues related to organizational placement options for FPS. This report examines (1) the potential effects of FPS's placement in selected agencies and (2) steps DHS has taken to assess placement options for FPS. GAO identified five key organizational placement criteria based on prior work and identified eight agencies as potential placement options. The agencies were selected because they have the largest number of law enforcement officers or perform physical security, among other reasons. GAO reviewed documentation and interviewed officials from FPS, selected agencies, and key stakeholders. GAO compared agencies to FPS to determine if they meet the organizational placement criteria. An agency meets the criteria if it has similarities to FPS. What GAO Found In considering organizational placement options for the Department of Homeland Security's (DHS) Federal Protective Service (FPS), GAO found that none of the eight agencies GAO selected met all the key organizational placement criteria; thus, any of the organizational placement options could result in both benefits and trade-offs. For example, keeping FPS in DHS's National Protection and Programs Directorate (NPPD) could provide FPS some benefits because FPS and NPPD have missions that include the protection of infrastructure or specific facilities, facility protection responsibilities, and access to and sharing of information related to national homeland security. However, unlike FPS, NPPD does not perform both physical security and law enforcement activities, which is a potential trade-off. In another example, the General Services Administration (GSA) and the United States Marshals Service (Marshals) could provide benefits because they currently coordinate with FPS on facility protection. However, Marshals does not have a mission or goals that explicitly focus on the protection of infrastructure or facilities and GSA does not perform law enforcement, which are potential trade-offs. DHS has not taken key steps to fully assess potential placement options. Specifically, DHS has not assessed the organizational structure of FPS, such as its placement in NPPD, even though FPS and NPPD have evolved since FPS was placed in NPPD in 2010. Standards for Internal Control state that agency management should establish an organizational structure to achieve the agency's objectives and that an effective management practice for attaining this outcome includes periodically evaluating the structure to ensure that it has adapted to changes. Additionally, because DHS did not analyze FPS's current placement in NPPD, DHS does not have a benchmark for comparison to other agencies. DHS recently established a working group to assess the placement of FPS. However, the group's planned activities are limited in several ways. For example, the group's draft charter does not indicate that the working group will describe what DHS expects to achieve by changing FPS's placement. 
Further, the draft charter does not indicate that the working group will evaluate the benefits and trade-offs of placement options. GAO has previously identified these and other steps as key to successful organizational change or analysis of alternatives. These steps would help DHS address the 2018 legislation to review placement options for FPS—including how DHS considered the results of GAO's review. Regardless of the legislation, without taking these steps DHS may not be positioning itself to make an informed decision as to which organization best supports FPS. What GAO Recommends DHS should identify the expectations for changing FPS's placement and take steps to fully evaluate placement options. DHS concurred with the recommendations and outlined steps it plans to take to address them.
Background Hard-to-Count Groups Although the Bureau goes to great lengths to conduct an accurate count of the nation’s population, some degree of inaccuracy is inevitable. When the census misses a person who should have been included, it results in an undercount. An overcount occurs when an individual is counted more than once or in the wrong place. These errors are problematic because certain groups such as minorities, young children, and renters are more likely to be missed in the census, while other groups such as those who may own a second, seasonal home are more likely to be counted more than once. As census data are used to apportion seats in Congress, redraw congressional districts, and allocate billions of dollars in federal assistance each year, improving coverage and reducing undercounts are important. As an example, the Bureau reported that the 2010 Census did not have a significant net undercount or overcount nationally. However, as shown in figure 1, errors in census coverage were unevenly distributed through the population. For example, the Bureau estimated that it missed nearly 5 percent of American Indians living on reservations—the sociodemographic group with the highest percent net undercount in 2010—whereas the Bureau estimated it overcounted almost 1 percent of non-Hispanic whites. In addition to those groups with characteristics the Bureau can measure—based on their responses to certain questions asked on the census questionnaire—there are many other hard-to-count groups, some of which cut across sociodemographic groups, as shown in table 1. For example, lesbian, gay, bisexual, transgender, or queer/questioning persons or persons who distrust government can cut across all sociodemographic groups. There are complex reasons why certain groups are considered hard-to-count. According to Bureau officials, for example, one way to think about the hard-to-count problem is to consider what groups are hard to locate, contact, persuade, and interview for the census (see figure 2). Hard-to-locate. Some groups are hard-to-locate because where they live is unknown, or they move frequently. For example, the Bureau faces difficulty counting persons experiencing homelessness. Adding to this difficulty are reported increases in the prevalence and complexity of outdoor encampments across the country. Inhabitants design many of these encampments to remain hidden; some people may remain in an encampment for years while other people may move frequently. Hard-to-persuade. Other groups are hard-to-persuade to participate in the census. For example, while the Bureau had identified those who distrust government as a hard-to-count group based on research prior to the 2010 Census, in November 2017, the Bureau reported to its National Advisory Committee an increase in unprompted confidentiality concerns raised by individuals in focus groups and pretests for the 2020 Census and other surveys. Multiple factors. Some groups are hard to count for multiple complex reasons. For example, a Bureau task force found that households with young children up to 4 years old may be missed altogether due to frequent moves between rental units (hard-to-contact). Moreover, some households studied—such as complex households with multiple generations—also appeared to be confused about whether or not to include their young children when completing the questionnaire or when being interviewed by census enumerators. 
The Bureau also found that language barriers sometimes resulted in households leaving young children off their census or other survey questionnaire (hard-to-interview). Congress Has Prioritized Funding for Decennial Partnership and Communications Efforts in Both the 2010 Census and the 2020 Census An appropriation in the American Recovery and Reinvestment Act of 2009 (Recovery Act) allowed the Bureau to increase funding for its 2010 Census partnership and communications efforts. The Bureau has partnered with governments, businesses, and local community organizations to help promote the census. The Bureau has also relied on a communications campaign, including paid advertisements in national and targeted markets, to help build awareness of the census. After adjusting for inflation, the Bureau spent about $123 million to expand its advertising and about $125 million to expand its partnership efforts (in 2017 dollars), primarily by hiring additional partnership-related staff beyond original plans. Partnership staff hired to support the 2010 Census were responsible for mobilizing local support for the census by working with local organizations to promote participation. Partnership staff for the 2010 Census included a mix of partnership specialists—responsible for building relationships with and obtaining commitments from governments, local businesses, and other organizations to help promote the census—managers, graphic designers, and clerical support positions. After receiving Recovery Act funding, the Bureau created a new partnership assistant position. After the partnership specialists had established agreements with local organizations, these partnership assistants were responsible for supporting the implementation of promotion efforts, such as by staffing fairs and other events. Bureau officials told us that they believed that creating a new partnership assistant position would help promote census awareness. The Consolidated Appropriations Act, 2018 directed the Bureau to conduct its fiscal year 2018 partnership and communications efforts in preparation for the 2020 Census at a level and staffing no less than the Bureau conducted during fiscal year 2008 in preparation for the 2010 Census. The act appropriated more than $2.5 billion for the Periodic Censuses and Programs account, which, according to Bureau officials, includes over $1 billion from the Bureau’s fiscal year 2019 budget request intended to smooth the transition of funding between fiscal years, such as in the event of a continuing resolution. The Bureau Plans to Enhance Outreach to and the Enumeration of Hard-to-Count Groups in 2020, but Estimated Spending Is Similar to 2010 The Bureau Plans Enhancements to Key 2020 Census Operations to Address the Complexity of Enumerating Hard-to-Count Groups The Bureau will continue to rely on its Integrated Partnership and Communications operation—designed to communicate the importance of census participation and motivate self-response—as a key component of its efforts to improve enumeration of hard-to-count persons in the 2020 Census. Evaluations conducted by the Bureau found that its partnership and communications efforts had positive effects on increasing awareness and participation among the hard-to-count in prior censuses. Because of the positive effects, the Bureau has begun outreach to the more than 257,000 tribal, state, and local governments as well as other businesses and organizations it partnered with in 2010. 
For example, the Bureau plans to continue using “trusted voices”—individuals or groups with relevance, importance, and relatability to a given population, such as local leaders and gatekeepers within isolated communities—to promote the census. As part of this effort, the Bureau plans to continue outreach initiatives to specific constituencies, such as to faith-based communities, and, through its Foreign Born/Immigrant initiative, to reach out to and communicate with recent immigrants, undocumented residents, refugees, and migrant and seasonal farm workers. In addition, the Bureau still plans to advertise in national and targeted markets. For example, to support its 2020 outreach efforts, including to hard-to-count groups, the Bureau awarded a communications contract in August 2016 to Young and Rubicam, an advertising firm. As has been done in prior censuses, this contractor has enlisted 14 partners and subcontractors to help it reach specific sociodemographic groups, such as American Indian and Alaska Native populations and Hispanic communities. Given the increasingly complex task of counting those historically missed in the census, the Bureau has taken steps or plans to enhance some aspects of the initiatives under its Integrated Partnership and Communications operation and to other key operations compared to the 2010 Census, as shown in table 2. For example, the Bureau overhauled a metric it has used to help manage and target field work for its partnerships to areas with hard-to-count populations, basing it now on predictions of each household’s likelihood to self-respond to the census (see the illustrative sketch below). Using this new low response score metric, the Bureau created a publicly available online mapping tool for its partnership staff and other users to better understand the sociodemographic make-up of their assigned areas and to plan their outreach efforts accordingly. Moreover, as we previously recommended in 2010, the Bureau also plans to develop predictive models to help allocate its advertising using: (1) these predictive response data, (2) results describing the complexity of difficult enumeration from its recent “behaviors, attitudes, and motivators survey” study and focus groups, and (3) other third-party data. The Bureau is still evaluating certain initiatives before deciding whether or not to include them in its 2020 plans. For example, as part of the 2018 End-to-End Test currently underway in Providence, Rhode Island, the Bureau is piloting the use of Internet kiosks in selected post offices to allow persons to self-respond to the census. Bureau officials said they will decide whether to move forward with the use of kiosks in post offices in 2020 after evaluating the pilot and the test. In addition, according to the Bureau’s current planning documents, the Bureau has plans to change other key operations to help improve the enumeration of certain hard-to-count groups. For example, to help address the complex undercount of young children, the Bureau revised the census questionnaire and instructions to enumerators to more explicitly mention the inclusion of grandchildren and any non-relatives in household population counts. In addition, the Bureau’s planning documents describe plans to offer administrators at certain group quarters locations, such as college dormitories, the option to electronically transfer their rosters to the Bureau. 
Bureau officials said that this planned change will help reduce the need for enumerators to visit those locations, and that such an efficiency gain will allow them to devote resources on the ground to other harder-to-enumerate group quarters. Recognizing the importance of reaching an increasingly linguistically diverse population, the Bureau has also made significant changes to its Language Services operation for 2020, including increasing the number of non-English languages formally supported by the Bureau. Table 3 below summarizes changes in the number of languages the Bureau plans to support. According to the Bureau, this larger choice of languages should increase the percentage of limited-English-speaking households directly supported by that operation from 78 percent in 2010 to 87 percent in 2020. The Bureau is still assessing the level of non-English support it will directly provide through advertising, partnership, and promotional materials. Bureau officials stated that they will decide the number of—and which—non-English languages to support after it has completed research on how best to segment advertising markets in fall 2018. Until then, it has committed to at least 12 non-English languages—which is less than half of the 27 non-English languages similarly supported in the 2010 Census. Bureau officials said that one action they will take to mitigate any effects if the Bureau decides on fewer languages for 2020 is to provide language-independent media templates—including scripts to videos ready for non-English voiceovers—to any partner groups that may need them. The Bureau has also formalized its language translation capabilities for the non-English languages it chooses to support based on 2010 Census evaluations that found, among other things, that the Bureau’s lack of sufficient oversight of its translation process hampered the consistency of its translation of promotion and outreach materials. For the 2020 Census, Bureau officials said they intend to rely on in-house translation experts adhering to translation industry standards. Bureau officials stated that the Bureau will not attempt to oversee the translations that partners may make into less commonly spoken languages using the Bureau’s language-neutral materials when trying to reach more isolated language areas, though officials stated that its partners, including contractors for advertising, will rely on Bureau-developed language glossaries for census terminology when translating into other languages. The Bureau Plans Total Spending for Its 2020 Census Outreach Efforts Similar to That for the 2010 Census, and to Hire More Partnership Specialists Instead of Assistants The Bureau estimates total spending for its 2020 partnership and communications outreach efforts to be similar to what it reported spending on those efforts for the 2010 Census after adjusting for inflation. Specifically, according to documents supporting the Bureau’s most recent life cycle cost estimate for the 2020 Census, the Bureau may spend about $850 million in its outreach to promote the 2020 Census, compared to nearly $830 million in total spending in comparable categories for the 2010 Census. (See table 4.) Partnership staff. According to the Bureau’s current planning documents, it will hire nearly twice as many partnership specialists—responsible for building relationships and obtaining commitments from organizations—to support the 2020 Census as it hired to support the 2010 Census. 
Despite this planned increase in partnership specialists, the Bureau’s total estimated spending on partnership staff—$248 million—is less than the $334 million the Bureau reported spending in the same cost category for 2010 after adjusting for inflation. This change is in part because the Bureau does not plan to hire any partnership assistants to support the 2020 Census. According to Bureau planning data from the 2010 Census, the Bureau planned to hire over 1,700 partnership assistants—those that assisted specialists for the 2010 Census—with Recovery Act funding. As noted previously, Bureau officials said that the additional funding it received from the Recovery Act in 2009 (about $125 million in 2017 dollars) largely funded the hiring of these partnership assistants. The effect of the Recovery Act funding on partnership hiring is shown in figure 3 below. According to Bureau officials, without the Recovery Act funding and its direction for the Bureau to increase hiring in order to stimulate the economy, the Bureau would not have hired the large number of partnership assistants that it did. According to Bureau officials, this shift in hiring toward more partnership specialists will enable a greater focus on creating more partnerships and require greater reliance on partner organizations to help with staffing for outreach and promotion events in local communities that partnership assistants were used for in the 2010 Census. While the ability of future partners to help with these events remains to be seen, Bureau officials involved in early outreach with partners stated that they believe this planned approach shows early promise based on the over 1,500 partners they have engaged for the 2020 Census so far. Headquarters support. The $111 million amount the Bureau plans to spend in headquarters support for outreach efforts is similar to the $106 million it spent in the 2010 Census after adjusting for inflation. According to Bureau documents, this support will be used for advertising, media, and partnership efforts. Communications campaign. The Bureau plans to spend more in its communications campaign category in the 2020 Census than what it reported spending in this area during the 2010 Census—$492 million compared to $388 million after adjusting for inflation, according to the Bureau’s cost estimation documents. The campaign will include paid advertising and the development of promotional materials. According to Bureau officials, they will initiate much of this spending in May 2019. This larger figure includes about $152 million for additional contracted services still being planned, but provisionally allocated for various advertising support efforts with the balance for various partnership materials not included in other contracts. The Bureau does not plan to repeat its “2010 Census Road Tour” involving a large mobile display and over a dozen cargo vans that were driven to promotional events around the country at a cost of about $16.6 million after adjusting for inflation. While the Bureau did not conduct a formal evaluation of the initiative’s effectiveness at encouraging response during the 2010 Census, Bureau officials told us that they do not believe it was as effective a use of resources compared to the other options they are planning for 2020. 
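The Bureau's actual methodology for its low response score and its planned predictive advertising models is not detailed in this report. The sketch below is only a minimal, hypothetical illustration, using made-up data, feature names, and coefficients, of how a predicted likelihood of not self-responding could be estimated from area-level characteristics and then used to rank areas for partnership and advertising outreach.

```python
# Illustrative sketch only: not the Bureau's actual low response score model.
# All data, feature names, and coefficients below are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_tracts = 500

# Hypothetical tract-level characteristics (shares of households).
features = np.column_stack([
    rng.uniform(0.0, 1.0, n_tracts),   # share of renter-occupied housing units
    rng.uniform(0.0, 0.3, n_tracts),   # share of limited-English-speaking households
    rng.uniform(0.0, 0.2, n_tracts),   # share of households with children under 5
])

# Hypothetical observed non-self-response rates from a prior census or test.
nonresponse_rate = (
    0.15
    + 0.20 * features[:, 0]
    + 0.30 * features[:, 1]
    + 0.10 * features[:, 2]
    + rng.normal(0.0, 0.02, n_tracts)
)

# Fit a simple model that predicts non-self-response from the characteristics.
model = LinearRegression().fit(features, nonresponse_rate)
predicted_low_response = model.predict(features)  # higher = less likely to self-respond

# Rank tracts so partnership and advertising resources can be targeted to the
# areas predicted to be hardest to count.
priority_order = np.argsort(predicted_low_response)[::-1]
print("Top 10 tracts prioritized for outreach:", priority_order[:10])
```

In practice, as described above, the Bureau pairs this type of predicted-response data with its survey and focus group results and other third-party data, and it publishes the scores through its online mapping tool for partnership staff and other users.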
The Bureau Started Partnership Hiring Earlier for the 2020 Census Than for 2010 An evaluation conducted by the Bureau of its 2010 partnership efforts recommended that, for the 2020 Census, the Bureau hire at least a core group of partnership staff 3 years prior to Census Day instead of the 2 years prior as was done for the 2010 Census. Consistent with that recommendation, according to Bureau officials, the Bureau hired five partnership specialists for the 2020 Census in October 2015—more than 2 years earlier in the decennial cycle than its first hiring of partnership specialists in January 2008 for the 2010 Census, as shown in figure 4. Bureau officials told us that this hiring helped the Bureau complete tribal consultations earlier than it had for the 2010 Census. Moreover, the Bureau continued its early hiring with 39 more partnership specialists in fiscal year 2017. Bureau officials said that, with the additional year of preparation, these staff initiated outreach to the highest level of government in each of the 50 states, the District of Columbia, and Puerto Rico, resulting in, as of April 2018, partnership staff having obtained commitments or statements of interest from all but two state governments to form State Complete Count Commissions/Committees. These Commissions/Committees are intended to help form partnerships at the highest levels of government within each state and leverage each state’s vested interest in a timely and complete count of its population. Bureau officials said they also recently further accelerated the Bureau’s planned time frames for hiring partnership specialists. These officials said that with the funds made available in the Fiscal Year 2018 Consolidated Appropriations Omnibus, the Bureau began posting job announcements for about 70 partnership specialists in April 2018 and hopes to begin hiring in July 2018—3 months earlier than October 2018, as had otherwise been planned. In addition, with 2018 funds, Bureau officials said they are working to identify elements of the communications campaign to begin earlier than the planned start date of October 1, 2018. The Bureau and the lead communications contractor identified possible efforts to start months earlier. According to Bureau officials, they are finalizing how to accelerate these efforts, including the Statistics in Schools initiative, media planning, and hosting a creative development workshop with the communications contractors. The Bureau Faces Internal Management and External Workforce Challenges in Improving the Enumeration of Hard-to-Count Groups in 2020 The Bureau Faces an Internal Management Challenge Integrating Its Many Hard-to-Count Efforts According to the Bureau and as shown in figure 5, over one-third of its 35 operations (14 of 35) are designed, at least in part, to help improve the enumeration of hard-to-count groups. These efforts range from the earliest field data collection operations—such as address canvassing when the Bureau aims to identify all possible addresses where people live, including hidden housing units such as basement apartments or attics—to some of its later field operations, such as nonresponse follow-up when census enumerators visit each household that did not self-respond. Each of the 35 operations is implemented by a separate team that manages and controls its activities and, according to Bureau governance documents, is also responsible for reviewing and managing its risks, schedule, and scope, as well as developing needed capability requirements. 
Team leads are responsible for ensuring integration with other operation teams, escalating risks to management, and ensuring communication upward to the various governance bodies overseeing the decentralized structure. Operational decisions within the scope of plans that have been approved by the governance bodies are made at the team level, while ultimate responsibility rests with respective associate directors for the decennial, field, communications, and other directorates, whose staff largely comprise the teams, and the Director of the Census Bureau itself. The Bureau exercises change control over the scope, schedule, and documentation of its baseline program design, with a change control board comprising process and program managers with responsibility over the operational teams. Approved changes are formally communicated via e-mail to stakeholders in the change control process. Managing decentralized operations in such a way can be effective and provide an agency flexibility in responding to changing conditions on the ground, such as when adapting census methods in response to natural disasters as the Bureau had to do during the 2010 Census for areas affected by Hurricane Katrina. However, such decentralization also presents a challenge to management as it tries to ensure the integration of its efforts to improve enumeration of the hard-to-count groups. To help address the challenge of managing so many hard-to-count efforts that cut across the decentralized operations, during our review, the Bureau developed a draft operational design document. This document describes the major operations and initiatives that contribute, at least in part, to its goal to improve the enumeration of hard-to-count groups in the 2020 Census. This is the Bureau’s first comprehensive look at the hard-to-count goal for the 2020 Census. Bureau officials said that they developed the document because they realized that looking across the Bureau’s operations and how they relate to difficulties enumerating hard-to-count groups would provide them a useful perspective that could help identify any gaps or interdependencies in their various hard-to-count efforts. Bureau officials said they plan to refine and include this document as a chapter in the fall 2018 update of their broader 2020 Census Operational Plan. Although this is a good first step to elevate the visibility of the hard-to-count goal, we identified a number of other areas where additional steps or management focus may be needed in order to help ensure integration of certain hard-to-count related efforts, including the following: During exchanges of information between the Bureau and its National Advisory Committee in 2017 and 2018, the Bureau proposed using additional focus groups with certain population groups, census interviewers, and trusted community messengers. These focus groups are intended to identify root causes of, and ways to overcome, the confidentiality concerns increasingly raised by respondents in the Bureau’s earlier testing, and to help inform messaging and outreach plans as well as staff support documents and training materials. However, as of May 1, 2018, the Bureau reported that it had yet to identify the resources needed to conduct the additional focus groups it had proposed. If the Bureau is going to take this step, it would need to complete its analysis from these proposed focus groups with interviewers and others before starting to develop its 2020 messaging, currently scheduled to begin in October 2018.
Any delays in scheduling these activities could have an effect on activities intended to help improve enumeration of the hard-to-count in other related operations. The detailed operational plans for 10 of the Bureau’s 14 hard-to-count-related operations have been documented and released publicly. However, we found that several of the detailed plans already released—while self-described as being updated over time to reflect changes in strategies based on ongoing planning, research, and testing—are nearly two years old and may not reflect more recent decisions made. Attention by Bureau management to the details of these operational plans as they are updated will be critical to ensure that their interdependencies with other efforts are accounted for. Similarly, as of May 2018, little detail is available about what interdependencies the other 4 hard-to-count-related efforts will have on the overall 2020 Census Operational Plan and on the Bureau’s efforts to improve the enumeration of the hard-to-count in particular. For example, the Bureau’s operation to enumerate persons at transitory locations—key to counting mobile persons, including those living at motels or with traveling carnivals—is one of the 4 efforts without a detailed operational plan yet. Because the Bureau is not scheduled to test the integration of this enumeration with other systems before the 2020 Census, it remains to be seen how its forthcoming design may interact with other related operations and systems. While Bureau officials stated that procedures likely to be used for this operation are well established from prior censuses, they also stated that there may be significant changes from the past in the process the Bureau uses to determine where to count persons in this operation and may rely on changes in the non-ID processing operation—helping enumerate persons not having a pre-assigned census identification number. With less than 2 years to go until Census Day (April 1, 2020), there is little room for delay in considering how forthcoming details on hard-to-count efforts yet to be finalized—or changed based on ongoing testing or other decisions—may have consequences on other related efforts. According to the Project Management Institute’s A Guide to the Project Management Body of Knowledge, integrated change control can help address overall risk to related efforts, which often arises from changes made without consideration of the overall goals or plans. A significant amount of hard-to-count-related planning for the 2020 Census is currently underway, and in the less than 2 years remaining before Census Day, it will be important for Bureau management to maintain a focus that helps ensure that hard-to-count-related decisions yet to be made as well as any changes to those already made are integrated with other related efforts. Focused attention on these efforts will also help ensure that any interdependencies, synergies, or gaps are identified and included in the change-control processes the Bureau already has in place. Hiring Partnership Staff with Critical Skills in a Tight Labor Market Creates a Workforce Challenge for the Bureau and It Lacks Data from 2010 to Guide Its Efforts As noted previously, a key component of the 2010 Census was the hiring of partnership staff to help build relationships with and obtain commitments from local organizations to help encourage census participation, particularly among hard-to-count groups.
For the 2020 Census, in addition to the core relationship-building skills, Bureau officials said they are working to identify specialized skills needed to operate partnership initiatives in a 2020 environment, such as advanced knowledge of digital media. However, the Bureau faces a significant challenge in hiring these kinds of staff because it is operating in a much tighter labor market than it did prior to the 2010 Census. As a result, it may not be able to hire the partnership staff with the skills it now needs as easily as it did in the past. According to Bureau of Labor Statistics data, the unemployment rate in January 2008, when the Bureau first hired partnership staff for the 2010 Census, was 5 percent. That number increased to more than 7 percent by December 2008, and then ranged from more than 7.5 percent to 10 percent in 2009 and through Census Day in April 2010. During this time, the Bureau hired nearly 3,000 partnership staff, many of whom the Bureau hired in a few short months after receiving additional funding from the Recovery Act. The unemployment rate is substantially lower now as we approach the comparable part of the decade for the 2020 Census. Specifically, the rate has ranged from 4.9 percent in October 2016, when the Bureau started hiring for an early round of about 40 partnership staff, to less than 4 percent in May 2018. Bureau officials reported experiencing challenges during these early hiring efforts for partnership staff, although they were ultimately able to fill the nearly 40 positions the Bureau sought to fill across its six census regions. Bureau officials in the regional field offices reported observing smaller applicant pools, declined job offers, and early turnover because the pay rate the Bureau offered was low compared to the local economy. Moreover, these officials reported seeing fewer applicants through local job markets, which had been successful recruiting mechanisms in the prior census. According to the Bureau’s planning documents, the Bureau plans to ramp up its hiring of partnership specialists between July 2018 and 2019. If the unemployment rate generally holds steady at around its May 2018 level of roughly 4 percent, the Bureau will likely face challenges recruiting and retaining partnership staff with the critical skills needed. Bureau officials said that they will develop customized recruiting strategies to fill specific needs as they identify and refine the mix of partnership skills needed to support their 2020 efforts. For example, Bureau officials acknowledged the need to more effectively use USAJobs, the federal recruiting website, and targeted job announcements. They also identified the possibility of hiring additional partnership staff for short-term assignments closer to Census Day to help meet specific needs, such as assisting with non-English language enumeration and connecting with faith-based or immigrant communities in areas with low participation. Following through on its plans to identify an optimal mix of skill sets and tailored recruiting strategies, in accordance with leading practices, will be important for the Bureau as it operates in a tight labor market because delays or shortfalls in hiring partnership staff could put the Bureau’s plans for building support for the census at risk.
Although the Bureau has decided to rely more heavily on partnership specialists as part of its outreach and promotion strategy to reach hard-to-count groups and still faces decisions about where to staff them, it has done so without the benefit of data on its actual hiring of partnership staff from the 2010 Census. During our review, the Bureau was unable to readily provide us with data on the actual number or timing of partnership specialists and assistants hired to support the 2010 Census, and instead, we had to use detailed Bureau planning documents for our analysis. Bureau officials reported that their records in 2010 did not clearly link the positions and grades recorded in the payroll system for individual staff who were hired to support a different operation to the roles they subsequently played in carrying out the partnership efforts. Standards for Internal Control in the Federal Government state that management should use quality information to achieve the entity’s objectives. Bureau officials recognize the importance of having such data readily available both for evaluating implemented efforts and for future planning, and said they will take steps to better record these types of data for the 2020 Census. Doing so will better position the Bureau to evaluate the effectiveness of its hiring strategy and tradeoffs in alternative approaches, to learn lessons from the 2020 implementation, and to optimize related staffing strategies in the future. Conclusions Many of the Bureau’s planning efforts to help address the longstanding challenge of enumerating hard-to-count groups in the 2020 Census are underway. Importantly, the various operations and initiatives related to the hard-to-count are in either the planning or early implementation stages. While the Bureau has taken some steps to better understand the scope of these efforts, going forward, it will be important for the Bureau to ensure that management maintains a focus on forthcoming changes and decisions on hard-to-count related efforts to ensure they are integrated with other hard-to-count related efforts across the Bureau’s decentralized operations. Doing so will help the Bureau identify possible synergies, interdependencies, or gaps and how they might affect the Bureau’s ability to improve the census, and help address overall risk to related efforts. In addition, information about related efforts in prior censuses can help inform management and its ongoing planning. However, the Bureau’s lack of complete and reliable data on hiring partnership staff for the 2010 Census—such as numbers, dates, and positions filled—affects its ability to fully consider tradeoffs it is making among types of staff it plans to hire for the 2020 Census. As the Bureau continues to ramp up its hiring of partnership specialists and other staff to support enumeration of the hard-to-count, improved recording of hiring numbers, dates, and positions filled—particularly for staff supporting multiple operations—can help position the Bureau to evaluate the effectiveness of its hiring strategy and support efforts to optimize any related hiring in future censuses. Recommendations for Executive Action We are making the following two recommendations to the Department of Commerce and the Census Bureau: The Secretary of Commerce should ensure the Director of the U.S.
Census Bureau takes steps to ensure that forthcoming changes and decisions on hard-to-count related efforts are integrated with other hard-to-count related efforts across the Bureau’s decentralized operations. (Recommendation 1) The Secretary of Commerce should ensure the Director of the U.S. Census Bureau takes steps to ensure, for the purposes of evaluation and future planning, that information is recorded and available on partnership hiring numbers, dates, positions filled, and which part of the 2020 Census the hiring supports. (Recommendation 2) Agency Comments and Our Evaluation We provided a draft of this report to the Department of Commerce. In its written comments, reproduced in appendix I, the Department of Commerce agreed with our findings and recommendations and said it would develop an action plan to address them. The Census Bureau also provided technical comments that we incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Commerce, the Undersecretary of Economic Affairs, the Acting Director of the U.S. Census Bureau, and the appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at https://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2757 or goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The GAO staff who made major contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Commerce Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Robert Goldenkoff, (202) 512-2757 or goldenkoffr@gao.gov. Staff Acknowledgments In addition to the contact named above, Ty Mitchell, Assistant Director; Chris Falcone, Analyst-in-Charge; Mark Abraham, Ann Czapiewski, Kayla Robinson, Cynthia Saunders, and Stewart Small made key contributions to this report.
Why GAO Did This Study A goal for the 2020 Census is to count everyone once, only once, and in the right place. Achieving a complete and accurate census is becoming an increasingly complex task, in part because the nation's population is growing larger, more diverse, and more reluctant to participate. When the census misses a person who should have been included, it results in an undercount. Historically, certain sociodemographic groups have been undercounted in the census, which is particularly problematic given the many uses of census data. GAO was asked to review the Bureau's plans for enumerating hard-to-count groups in the 2020 Census. This report examines (1) the Bureau's plans for improving the enumeration of the hard-to-count in 2020, and how that compares with 2010; and (2) the challenges the Bureau faces in improving the enumeration of the hard-to-count in 2020. GAO reviewed Bureau planning, budget, operational, and evaluation documents as well as documents of the hard-to-count related working groups of the Bureau's National Advisory Committee; and interviewed Bureau officials. What GAO Found The Census Bureau's (Bureau) plans for enumerating groups considered hard-to-count, such as minorities, renters, and young children, in the 2020 Census include the use of both traditional and enhanced initiatives. For example, the Bureau plans to continue using certain outreach efforts used in 2010, such as a communications campaign with paid advertising, partnerships with local organizations, and targeted outreach to immigrant and faith-based organizations. The Bureau also plans enhancements to its outreach efforts compared to 2010. For example, to help address the undercount of young children, the Bureau revised the census questionnaire and instructions to enumerators to more explicitly include grandchildren in counts. Other planned changes include: Expanded languages: The Bureau plans to offer more non-English language response options and instructional materials than for 2010. More partnership specialists: The Bureau plans to hire nearly twice as many partnership specialists as it had planned for the 2010 Census to recruit partner organizations in local communities. Earlier partnership hiring: The Bureau started hiring a small number of partnership staff in October 2015—2 years earlier than it did for 2010. Even with these efforts, enumerating hard-to-count persons in 2020 will not be easy. Aside from the inherent difficulties of counting such individuals, the Bureau faces certain management challenges related to its hard-to-count efforts. First, the Bureau's hard-to-count efforts are distributed across over one-third of its 35 operations supporting the 2020 Census. While decentralized operations can provide flexibility, the Bureau recently developed a draft operational document to enhance visibility over these hard-to-count efforts. However, the Bureau will continue to face challenges in ensuring its hard-to-count efforts integrate with each other. For example, some of the detailed plans for 10 of the hard-to-count efforts were released in 2016 and are awaiting updates, while 4 plans have yet to be released. With less than 2 years until Census Day (April 1, 2020), there is little room for delay. Therefore, to ensure that emerging plans related to the hard-to-count efforts integrate with existing plans, Bureau management will need to continue its focus on control of the changes in hard-to-count efforts moving forward.
Second, the Bureau faces a tighter labor market than existed prior to 2010, which could create shortfalls or delays in its hiring of partnership staff who are needed to reach small and hard-to-count communities. In early hiring for 2020, Bureau officials reported smaller-than-expected applicant pools, declined offers, and turnover. Although it has plans to identify critical skills for 2020 and for tailored recruiting, collecting data on its hiring efforts will also be important. Currently, the Bureau lacks data from its 2010 Census that could have helped inform its partnership-staff hiring efforts for 2020. What GAO Recommends GAO recommends that the Bureau take steps to ensure that forthcoming changes and decisions on its hard-to-count related efforts are integrated with other operational efforts and that it collect data on its 2020 partnership hiring efforts. The Department of Commerce agreed with GAO's recommendations, and the Bureau provided technical comments that were incorporated, as appropriate.
CMS Delegates Monitoring of Beneficiaries who Receive Opioid Prescriptions to Plan Sponsors, but Does Not Have Sufficient Information on Those Most at Risk for Harm CMS Delegates Monitoring of Individual Beneficiaries’ Opioid Prescriptions to Plan Sponsors Our October 2017 report found that CMS provides guidance to Medicare Part D plan sponsors on how the plan sponsors should monitor opioid overutilization problems among Part D beneficiaries. The agency includes this guidance in its annual letters to plan sponsors, known as call letters; it also provided a supplemental memo to plan sponsors in 2012. Among other things, these guidance documents instructed plan sponsors to implement a retrospective drug utilization review (DUR) system to monitor beneficiary utilization starting in 2013. As part of the DUR systems, CMS requires plan sponsors to have methods to identify beneficiaries who are potentially overusing specific drugs or groups of drugs, including opioids. Also in 2013, CMS created the Overutilization Monitoring System (OMS), which outlines criteria to identify beneficiaries with high-risk use of opioids and to oversee sponsors’ compliance with CMS’s opioid overutilization policy. Plan sponsors may use the OMS criteria for their DUR systems, but they have some flexibility to develop their own targeting criteria within CMS guidance. At the time of our review, the OMS considered beneficiaries to be at a high risk of opioid overuse when they met all three of the following criteria: 1. received a total daily morphine equivalent dose (MED) greater than 120 mg for 90 consecutive days, 2. received opioid prescriptions from four or more providers in the previous 12 months, and 3. received opioids from four or more pharmacies in the previous 12 months. The criteria excluded beneficiaries with a cancer diagnosis and those in hospice care, for whom higher doses of opioids may be appropriate. Through the OMS, CMS generates quarterly reports that list beneficiaries who meet all of the criteria and who are identified as high-risk, and then distributes the reports to the plan sponsors. Plan sponsors are expected to review the list of identified beneficiaries, determine appropriate action, and then respond to CMS with information on their actions within 30 days. According to CMS officials, the agency also expects that plan sponsors will share any information with CMS on beneficiaries that they identify through their own DUR systems. We found that some actions plan sponsors may take include the following: Case management. Case management may include an attempt to address coordination issues, and often involves provider outreach, whereby the plan sponsor will contact the providers associated with the beneficiary to let them know that the beneficiary is receiving high levels of opioids and may be at risk of harm. Beneficiary-specific point-of-sale (POS) edits. Beneficiary-specific POS edits are restrictions that limit these beneficiaries to certain opioids and amounts. Pharmacists receive a message when a beneficiary attempts to fill a prescription that exceeds the limit in place for that beneficiary. Formulary-level POS edits. These edits alert providers who may not have been aware that their patients are receiving high levels of opioids from other doctors. Referrals for investigation.
According to the six plan sponsors we interviewed, the referrals can be made to CMS’s National Benefit Integrity Medicare Drug Integrity Contractor (NBI MEDIC), which is responsible for identifying and investigating potential Part D fraud, waste, and abuse, or to the plan sponsor’s own internal investigative unit, if it has one. After investigating a particular case, the investigating entity may refer the case to the HHS-OIG or a law enforcement agency, according to CMS, NBI MEDIC, and one plan sponsor. Based on CMS’s use of the OMS and the actions taken by plan sponsors, CMS reported a 61 percent decrease from calendar years 2011 through 2016 in the number of beneficiaries meeting the OMS criteria for high risk—from 29,404 to 11,594 beneficiaries—which agency officials consider an indication of success toward the agency’s goal of decreasing opioid use disorder. In addition, we found that CMS relies on separate patient safety measures developed and maintained by the Pharmacy Quality Alliance to assess how well Part D plan sponsors are monitoring beneficiaries and taking appropriate actions. In 2016, CMS started tracking plan sponsors’ performance on three patient safety measures that are directly related to opioids. The three measures are similar to the OMS criteria in that they identify beneficiaries with high dosages of opioids (120 mg MED), beneficiaries who use opioids from multiple providers and pharmacies, and beneficiaries who do both. However, one difference between these approaches is that the patient safety measures separately identify beneficiaries who fulfill each criterion individually. CMS Does Not Have Sufficient Information on Most Beneficiaries Potentially at Risk for Harm Our October 2017 report also found that while CMS tracks the total number of beneficiaries who meet all three OMS criteria as part of its opioid overutilization oversight across the Part D program, it does not have comparable information on most beneficiaries who receive high doses of opioids—regardless of the number of providers and pharmacies used—and who therefore may be at risk for harm, according to CDC guidelines. These guidelines note that long-term use of high doses of opioids—those above a MED of 90 mg per day—is associated with significant risk of harm and should be avoided if possible. Based on the CDC guidelines, outreach to Part D plan sponsors, and CMS analyses of Part D data, CMS has revised its current OMS criteria to include more at-risk beneficiaries beginning in 2018. The new OMS criteria define a high user as a beneficiary who has an average daily MED greater than 90 mg for any duration and who receives opioids from four or more providers and four or more pharmacies, or from six or more providers regardless of the number of pharmacies, during the prior 6 months. Based on 2015 data, CMS found that 33,223 beneficiaries would have met these revised criteria. While the revised criteria will help identify beneficiaries whom CMS determined are at the highest risk of opioid misuse and therefore may need case management by plan sponsors, OMS will not provide information on the total number of Part D beneficiaries who may also be at risk of harm. In developing the revised criteria, CMS conducted a one-time analysis that estimated there were 727,016 beneficiaries with an average MED of 90 mg or more, for any length of time during a 6-month measurement period in 2015, regardless of the number of providers or pharmacies used.
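To make the two sets of OMS criteria described above concrete, the Python sketch below shows how a plan sponsor might flag beneficiaries under the original criteria (a daily MED above 120 mg for 90 consecutive days, plus opioids from four or more prescribers and four or more pharmacies over 12 months) and under the revised 2018 criteria (an average daily MED above 90 mg during the prior 6 months, plus either four or more prescribers and four or more pharmacies or six or more prescribers). This is a simplified illustration only: the record layout and field names are assumptions, not CMS's actual OMS implementation or data model, and the cancer and hospice exclusions are reduced to a single flag.

from dataclasses import dataclass

@dataclass
class BeneficiarySummary:
    # Hypothetical pre-aggregated fields for illustration.
    consecutive_days_over_120mg_med: int  # longest run of days above 120 mg daily MED
    avg_daily_med_6_months: float         # average daily MED over the prior 6 months
    prescribers_12_months: int            # distinct opioid prescribers, prior 12 months
    pharmacies_12_months: int             # distinct dispensing pharmacies, prior 12 months
    prescribers_6_months: int             # distinct opioid prescribers, prior 6 months
    pharmacies_6_months: int              # distinct dispensing pharmacies, prior 6 months
    excluded: bool                        # cancer diagnosis or hospice care

def meets_original_oms_criteria(b: BeneficiarySummary) -> bool:
    # Original criteria: all three conditions must hold, absent an exclusion.
    return (not b.excluded
            and b.consecutive_days_over_120mg_med >= 90
            and b.prescribers_12_months >= 4
            and b.pharmacies_12_months >= 4)

def meets_revised_oms_criteria(b: BeneficiarySummary) -> bool:
    # Revised 2018 criteria: high average dose plus a multi-prescriber/pharmacy pattern.
    multi_source = ((b.prescribers_6_months >= 4 and b.pharmacies_6_months >= 4)
                    or b.prescribers_6_months >= 6)
    return not b.excluded and b.avg_daily_med_6_months > 90 and multi_source

A beneficiary with a high average MED but only one prescriber, for example, would be counted in the 727,016 estimate above yet flagged by neither rule.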
These beneficiaries may be at risk of harm from opioids, according to CDC guidelines, and therefore tracking the total number of these beneficiaries over time could help CMS to determine whether it is making progress toward meeting the goals specified in its Opioid Misuse Strategy to reduce the risk of opioid use disorders, overdoses, inappropriate prescribing, and drug diversion. However, CMS officials told us that the agency does not keep track of the total number of these beneficiaries, and does not have plans to do so as part of OMS. (See fig. 1.) We also found that in 2016, CMS began to gather information from its patient safety measures on the number of beneficiaries who use more than 120 mg MED of opioids for 90 days or longer, regardless of the number of providers and pharmacies. The patient safety measures identified 285,119 such beneficiaries—counted as member-years—in 2016. However, this information does not include all at-risk beneficiaries, because the threshold is more lenient than indicated in CDC guidelines and CMS’s new OMS criteria. Because neither the OMS criteria nor the patient safety measures include all beneficiaries potentially at risk of harm from high opioid doses, we recommended that CMS should gather information over time on the total number of beneficiaries who receive high opioid morphine equivalent doses regardless of the number of pharmacies or providers, as part of assessing progress over time in reaching the agency’s goals related to reducing opioid use. HHS concurred with our recommendation. CMS Oversees Providers through its Contractor and Plan Sponsors, but Efforts Do Not Specifically Monitor Opioid Prescriptions Our October 2017 report found that CMS oversees providers who prescribe opioids to Medicare Part D beneficiaries through its contractor, NBI MEDIC, and the Part D plan sponsors. NBI MEDIC’s data analyses to identify outlier providers. CMS requires NBI MEDIC to identify providers who prescribe high amounts of Schedule II drugs, which include but are not limited to opioids. Using prescription drug data, NBI MEDIC conducts a peer comparison of providers’ prescribing practices to identify outlier providers—the highest prescribers of Schedule II drugs. NBI MEDIC reports the results to CMS. NBI MEDIC’s other projects. NBI MEDIC gathers and analyzes data on Medicare Part C and Part D, including projects using the Predictive Learning Analytics Tracking Outcome (PLATO) system. According to NBI MEDIC officials, these PLATO projects seek to identify potential fraud by examining data on provider behaviors. NBI MEDIC’s investigations to identify fraud, waste, and abuse. NBI MEDIC officials conduct investigations to assist CMS in identifying cases of potential fraud, waste, and abuse among providers for Medicare Part C and Part D. The investigations are prompted by complaints from plan sponsors; suspected fraud, waste, or abuse reported to NBI MEDIC’s call center; NBI MEDIC’s analysis of outlier providers; or from one of its other data analysis projects. NBI MEDIC’s referrals. After identifying providers engaged in potential fraudulent overprescribing, NBI MEDIC officials said they may refer cases to law enforcement agencies or the HHS-OIG for further investigation and potential prosecution. Plan sponsors’ monitoring of providers. CMS requires all plan sponsors to adopt and implement an effective compliance program, which must include measures to prevent, detect, and correct Part C or Part D program noncompliance, as well as fraud, waste, and abuse. 
CMS’s guidance focuses broadly on prescription drugs, and does not specifically address opioids. Our report concluded that although these efforts provide valuable information, CMS does not have all the information necessary to adequately oversee opioid prescribing. CMS’s oversight actions focus broadly on Schedule II drugs rather than specifically on opioids. For example, NBI MEDIC’s analyses to identify outlier providers do not indicate the extent to which those providers may be overprescribing opioids specifically. According to CMS officials, they direct NBI MEDIC to focus on Schedule II drugs because these drugs have a high potential for abuse, whether they are opioids or other drugs. However, without specifically identifying opioids in these analyses—or an alternate source of data—CMS lacks data on providers who prescribe high amounts of opioids, and therefore cannot assess progress toward meeting its goals related to reducing opioid use, which would be consistent with federal internal control standards. Federal internal control standards require agencies to conduct monitoring activities and to use quality information to achieve objectives and address risks. As a result, we recommended that CMS require NBI MEDIC to gather separate data on providers who prescribe high amounts of opioids. This would allow CMS to better identify those providers who are inappropriately and potentially fraudulently overprescribing opioids. HHS agreed, and noted that it intends to work with NBI MEDIC to identify trends in outlier prescribers of opioids. Our report also found that CMS lacks key information necessary for oversight of opioid prescribing, because it does not require plan sponsors to report to NBI MEDIC or CMS cases of fraud, waste, and abuse; cases of overprescribing; or any actions taken against providers. Plan sponsors collect information on cases of fraud, waste, and abuse, and can choose to report this information to NBI MEDIC or CMS. While CMS receives information from plan sponsors who voluntarily report their actions, it does not know the full extent to which plan sponsors have identified providers who prescribe high amounts of opioids, or the full extent to which sponsors have taken action to reduce overprescribing. We concluded that without this information, it is difficult for CMS to assess progress in this area, which would be consistent with federal internal control standards. In our report, we recommended that CMS require plan sponsors to report on investigations and other actions taken related to providers who prescribe high amounts of opioids. HHS did not concur with this recommendation. HHS noted that plan sponsors have the responsibility to detect and prevent fraud, waste, and abuse, and that CMS reviews cases when it conducts audits. HHS also stated that it seeks to balance requirements on plan sponsors when considering new regulatory requirements. However, without complete reporting—such as reporting from all plan sponsors on the actions they take to reduce overprescribing—we believe that CMS is missing key information that could help assess progress in this area. Due to the importance of this information for achieving the agency’s goals, we continue to believe that CMS should require plan sponsors to report on the actions they take to reduce overprescribing.
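For illustration only, the sketch below shows one simple way an opioid-specific peer comparison could flag outlier prescribers, in the spirit of the analysis we recommended; it is not NBI MEDIC's actual methodology, and the threshold, peer grouping, and input format are assumptions.

import statistics

def flag_outlier_opioid_prescribers(opioid_claims_by_prescriber, multiple=5.0):
    # Flag prescribers whose opioid claim count exceeds a multiple of the
    # peer-group median; the multiple of 5 is an arbitrary placeholder.
    median = statistics.median(opioid_claims_by_prescriber.values())
    if median == 0:
        return []
    return [prescriber for prescriber, claims in opioid_claims_by_prescriber.items()
            if claims > multiple * median]

# Hypothetical example: prescriber "D" is far above the peer median and is flagged.
claims_by_prescriber = {"A": 120, "B": 95, "C": 150, "D": 2400, "E": 110}
print(flag_outlier_opioid_prescribers(claims_by_prescriber))  # ['D']

In practice, such a screen would likely define peer groups by specialty and geography and adjust claim volume for the size of each prescriber's patient panel.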
- - - - - In conclusion, a large number of Medicare Part D beneficiaries use potentially harmful levels of prescription opioids, and reducing the inappropriate prescribing of these drugs is a key part of CMS’s strategy to decrease the risk of opioid use disorder, overdoses, and deaths. Despite working to identify and decrease egregious opioid use behavior—such as doctor shopping—among Medicare Part D beneficiaries, CMS lacks the necessary information to effectively determine the full number of beneficiaries at risk of harm, as well as other information that could help CMS assess whether its efforts to reduce opioid overprescribing are effective. It is important that health care providers help patients to receive appropriate pain treatment, including opioids, based on the consideration of benefits and risks. Access to information on the risks that Medicare patients face from inappropriate or poorly monitored prescriptions, as well as information on providers who may be inappropriately prescribing opioids, could help CMS as it works to improve care. Chairman Jenkins, Ranking Member Lewis, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contacts and Staff Acknowledgements If you or your staff members have any questions concerning this testimony, please contact me at (202) 512-7114 or CurdaE@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions to this testimony include Will Simerl (Assistant Director), Carolyn Feis Korman (Analyst-in-Charge), Amy Andresen, Drew Long, Samantha Pawlak, Vikki Porter, and Emily Wilson. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Misuse of prescription opioids can lead to overdose and death. In 2016, over 14 million Medicare Part D beneficiaries received opioid prescriptions, and spending for opioids was almost $4.1 billion. GAO and others have reported on inappropriate activities and risks associated with these prescriptions. This statement is based on GAO's October 2017 report (GAO-18-15) and discusses (1) CMS oversight of beneficiaries who receive opioid prescriptions under Part D, and (2) CMS oversight of providers who prescribe opioids to Medicare Part D beneficiaries. For the October 2017 report, GAO reviewed CMS opioid utilization and prescriber data, CMS guidance for plan sponsors, and CMS's strategy to prevent opioid misuse. GAO also interviewed CMS officials, the six largest Part D plan sponsors, and 12 national associations selected to represent insurance plans, pharmacy benefit managers, physicians, patients, and regulatory and law enforcement authorities. What GAO Found The Centers for Medicare & Medicaid Services (CMS), within the Department of Health and Human Services (HHS), provides guidance on the monitoring of Medicare beneficiaries who receive opioid prescriptions to plan sponsors—private organizations that implement the Medicare drug benefit, Part D—but lacks information on most beneficiaries at risk of harm from opioid use. CMS provides guidance to plan sponsors on how they should monitor opioid overutilization among Medicare Part D beneficiaries, and requires them to implement drug utilization review systems that use criteria similar to CMS's. CMS's criteria focused on beneficiaries who do all the following: (1) receive prescriptions of high doses of opioids, (2) receive prescriptions from four or more providers, and (3) fill prescriptions at four or more pharmacies. According to CMS, this approach focused actions on beneficiaries the agency determined to have the highest risk of harm. CMS's criteria, including recent revisions, do not provide sufficient information about the larger population of potentially at-risk beneficiaries. CMS estimates that while 33,223 beneficiaries would have met the revised criteria in 2015, 727,016 would have received high doses of opioids regardless of the number of providers or pharmacies. In 2016, CMS began to collect information on some of these beneficiaries using a higher dosage threshold for opioid use. This approach misses some who could be at risk of harm, based on Centers for Disease Control and Prevention guidelines. As a result, CMS is limited in its ability to assess progress toward meeting the broader goals of its Opioid Misuse Strategy for the Medicare and Medicaid programs, which includes activities to reduce the risk of harm to beneficiaries from opioid use. CMS oversees the prescribing of drugs at high risk of abuse through a variety of projects, but does not analyze data specifically on opioids. According to CMS officials, CMS and plan sponsors identify providers who prescribe large amounts of drugs with a high risk of abuse, and those suspected of fraud or abuse may be referred to law enforcement. However, GAO found that CMS does not identify providers who may be inappropriately prescribing large amounts of opioids separately from other drugs, and does not require plan sponsors to report actions they take when they identify such providers. As a result, CMS is lacking information that it could use to assess how opioid prescribing patterns are changing over time, and whether its efforts to reduce harm are effective. 
What GAO Recommends In the October 2017 report, GAO made three recommendations that CMS (1) gather information on the full number of at-risk beneficiaries receiving high doses of opioids, (2) identify providers who prescribe high amounts of opioids, and (3) require plan sponsors to report to CMS on actions related to providers who inappropriately prescribe opioids. HHS concurred with the first two recommendations, but not with the third. GAO continues to believe the recommendation is valid, as discussed in the report and in this statement.
Agencies Can Better Ensure Effectiveness of Guidance through Consistent Adherence with OMB Requirements and Internal Controls First, I will discuss our 2015 report on guidance processes at USDA, Education, HHS, and DOL, specifically (1) how these agencies decide whether to issue regulations or guidance and (2) the extent to which they adhere to OMB requirements and internal controls when developing guidance. Agency guidance documents, even though they are not generally legally binding as regulations or statutes are, can have a significant effect, both because of their volume and because of their potential to prompt changes in the behavior of regulated parties and the general public. Guidance generally serves different purposes than those of regulations. Agencies also issue regulatory guidance that sets forth a policy on a statutory, regulatory, or technical issue, or an interpretation of a statutory or regulatory issue—as illustrated in figure 1 below. The processes by which agencies issue guidance and regulations are governed by statutes, executive orders, and agencies’ policies and procedures, with the aim of greater transparency and public participation, enhanced oversight, and reduced regulatory burdens. Agencies Weighed Various Factors When Deciding Whether to Issue Regulations or Guidance Agency officials considered a number of factors before deciding whether to issue guidance or undertake rulemaking. Among these factors at the four agencies included in our analysis, a key criterion was whether officials intended for the document to be binding (in which case they issued a regulation). OMB’s Office of Information and Regulatory Affairs (OIRA) staff concurred that agencies understood what types of direction to regulated entities must go through the regulatory process. Officials from all four agencies also told us that they understood when guidance was inappropriate and when regulation was necessary. They said that they consulted with legal counsel when deciding whether to initiate rulemaking or issue guidance. For example, HHS’s Administration for Community Living officials told us that they considered a number of factors, including whether the instructions to be disseminated were enforceable or merely good practice. Specifically, when Administration for Community Living officials noticed that states were applying issued guidance related to technical assistance and compliance for the state long-term care ombudsman program differently, they decided it would be best to clarify program actions through a regulation. Officials believed that a regulation would ensure consistent application of program requirements and allow them to enforce those actions. They issued the proposed rule in June 2013 and the final rule in February 2015. In another example, officials at USDA’s Food and Nutrition Service told us that the decision to issue guidance or undertake rulemaking depended on (1) the extent to which the proposed document was anticipated to affect stakeholders and the public, and (2) what the subagency was trying to accomplish with the issued document. The agencies used guidance for multiple purposes and differed in the amount of guidance they issued. The purposes of guidance included explaining or interpreting regulations, clarifying policies in response to questions or compliance findings, disseminating suggested practices or leadership priorities, and providing grant administration information. 
Guidance documents provide agencies valuable flexibility to help regulated entities comply with agency regulations, and address new issues and circumstances more quickly than may be possible using rulemaking. Guidance documents that meet OMB’s definition of “significant” are subject to the regulatory practices and requirements established by OMB. OMB defines a significant guidance document as guidance with a broad and substantial impact on regulated entities. An economically significant guidance document is a significant guidance document that may reasonably be anticipated to lead to an annual effect on the economy of $100 million or more, among other factors. Guidance that does not fall under the definition of “significant” is not subject to the OMB Bulletin, and those guidance procedures are left to agency discretion. The four agencies we reviewed considered few of their guidance documents to be significant. As of February 2015, agencies listed the following numbers of significant guidance documents on their websites: Education, 139; DOL, 36; and USDA, 34. We were unable to determine the number of significant guidance documents issued by HHS. All four agencies told us that they did not issue any economically significant guidance. OIRA staff told us they accepted departments’ determinations of which types of guidance meet the definition of significant guidance. Agencies also varied in the amount of guidance they issued, ranging from 10 to more than 100 documents issued in a single year. Agency officials said that mission or the types of programs administered can affect the number of guidance documents issued. For example, officials from DOL’s Bureau of Labor Statistics told us they rarely issue guidance—about 10 routine administrative memorandums each year related to the operation of two cooperative agreement statistical programs. In contrast, DOL’s Occupational Safety and Health Administration officials told us they have regularly issued guidance to assist with regulatory compliance, and could easily produce 100 new or updated products each year to provide guidance to regulated entities. Agencies Should Increase Adherence with OMB Requirements and Internal Controls We found opportunities for agencies to improve regulatory guidance processes by strengthening compliance with OMB requirements for significant guidance and the use of management controls for producing their guidance documents. In 2015, we made 11 recommendations to USDA, HHS, DOL, and Education to better ensure adherence to OMB requirements for approval and public access of regulatory guidance, to strengthen the use of internal controls in guidance processes, and to improve the usability of websites with online guidance; three of these recommendations remain open. USDA, DOL, and Education have addressed recommendations concerning strengthening the application of management controls—internal controls—and improving their websites to ensure the public can easily find, access, and comment on online guidance. For HHS, these recommendations remain open, as does an additional recommendation concerning developing written procedures for agency approval of written guidance. These actions would help ensure appropriate review and use of these documents and could also facilitate opportunities for affected parties and stakeholders to provide feedback on them. Adherence to OMB Requirements for Significant Guidance We found that agencies did not always adhere to OMB requirements for significant guidance.
The OMB Final Bulletin for Agency Good Guidance Practices establishes standard elements that must be included in significant guidance documents and directs agencies to (1) develop written procedures for the approval of significant guidance, (2) maintain a website to assist the public in locating significant guidance documents, and (3) provide a means for the public to submit comments on significant guidance through their websites. Education and USDA had written procedures for the approval of significant guidance as directed by OMB. While DOL had written approval procedures, they were not available to the appropriate officials, and DOL officials noted that they required updating. HHS did not have any written procedures. We found that Education, USDA, and DOL consistently applied OMB’s public access and feedback requirements for significant guidance, while HHS did not. We also found opportunities for agencies to improve access to their guidance. In April 2015, we found that subagencies used different strategies to disseminate guidance and all relied primarily on posting the guidance on their websites. USDA, DOL, and Education posted their significant guidance on a departmental website as directed by OMB; at that time HHS did not, but has since posted such a page on its website in response to our recommendation. On their websites, agencies used several approaches —including organizing guidance by audience or topic and highlighting new or outdated guidance—to facilitate access. However, we identified factors that hindered online access, including long lists of guidance and documents dispersed among multiple web pages. Opportunities also exist for agencies to use the web metrics they already collect to improve how guidance can be accessed. All agencies and their subagencies that we studied collected web metrics, and many used them to evaluate online guidance dissemination. However, many of these subagencies did not use metrics to improve how they disseminated guidance through their websites. Beyond their websites, subagencies found other ways to disseminate and obtain feedback on issued guidance, including focus groups, surveys, and direct feedback from the public at conferences, webinars, and from monitoring visits. Application of Internal Controls for Guidance Processes For guidance that does not meet OMB’s definition of significant, we found opportunities for agencies to improve guidance development, review, evaluation, and dissemination processes by strengthening their adherence to internal controls. Wider adoption of these practices could better ensure that agencies have internal controls in place to promote quality and consistency of their guidance development processes, and to ensure that guidance policies, processes, and practices achieve desired results, and prevent and detect errors. We recommended that agencies strengthen their application of internal controls to guidance practices by adopting practices, such as: Determining Appropriate Level of Review to Manage Risk: Most subagencies in our study managed risk by determining appropriate levels of review. Agencies face multiple risks when going through the guidance production process, such as legal challenges that issued guidance is asserting binding requirements without having gone through the rulemaking process. Agencies can manage risk by involving agency management in decisions to initiate guidance, prioritize among proposed guidance, and determine the appropriate level of review prior to issuance. 
Maintaining Written Policies and Procedures for the Production of Nonsignificant Guidance: Most subagencies we reviewed did not have written procedures for the production of non-significant guidance. Written procedures for guidance initiation, development, and review help ensure that actions are taken to address risks and enforce management’s directives when an agency is developing regulatory guidance. Documented procedures are an important internal control activity to help ensure that officials understand how to adequately review guidance before issuance. Ensuring Communication during the Guidance Development and Review Process: Most subagencies we reviewed had methods to ensure communication during the guidance development and review process. Communication procedures provide an opportunity for subagencies to get feedback from agency management, other federal agencies, and the public before the guidance issues. For example, officials told us that they conferred with other affected subagencies or federal departments to ensure consistency of their guidance during the development of guidance. Regularly Evaluating Whether Issued Guidance is Effective and Up to Date: Almost half of the subagencies we reviewed regularly evaluated whether issued guidance was effective and up-to-date. Agencies benefit from procedures to continually reassess and improve guidance processes. Without a regular review of issued guidance, agencies can miss the opportunity to revisit whether current guidance could be improved and thereby provide better assistance to regulated entities and grantees. Compliance with the Congressional Review Act Could Be Strengthened Prior studies have indicated that agencies typically issue a larger number of regulations during the transition from the end of one presidential administration to the beginning of the next administration, relative to comparable periods earlier in the administration, a phenomenon often referred to as “midnight rulemaking.” The Edward “Ted” Kaufman and Michael Leavitt Presidential Transitions Improvements Act of 2015 included a provision requiring us to review final significant regulations promulgated by executive departments during the 120-day presidential transition periods (September 23 through January 20) at the end of Presidents Clinton, Bush, and Obama’s administrations and compare them to each other and to regulations issued during the same 120-day period in nontransition years since 1996. Among other objectives, we assessed the extent to which there was variation in (1) the number of regulations and their characteristics, such as the types of rulemaking procedures agencies used; and (2) agencies’ reported compliance with procedural requirements for promulgating the regulations, such as requirements in the Congressional Review Act (CRA). CRA was enacted to better ensure that Congress has an opportunity to review and possibly disapprove regulations, in certain cases, before they take effect. Agencies Published More Economically Significant and Significant Final Regulations and Provided More Opportunity for Public Participation During the transition periods at the end of each of the three administrations we reviewed, agencies published more economically significant and significant final regulations relative to comparable time periods earlier in each administration (see figures 2 and 3). 
In particular, the Clinton, Bush, and Obama administrations published on average roughly 2.5 times more economically significant regulations during transition periods than during nontransition periods. But agencies more often, relative to nontransition periods, provided the public an opportunity to influence the development of the transition-period regulations by providing advance notice of their issuance in the Unified Agenda, and opportunities to comment on proposed regulations before they were finalized. Some Regulations Did Not Comply with the Congressional Review Act In their published regulations, agencies generally reported complying with four of five procedural requirements for promulgating regulations during both transition and nontransition periods: the Regulatory Flexibility Act (RFA), the Small Business Regulatory Enforcement Fairness Act (SBREFA), the Paperwork Reduction Act (PRA), and the Unfunded Mandates Reform Act of 1995 (UMRA). Respectively, these laws require agencies to consider the impact of regulations on small entities; impose additional requirements on the Environmental Protection Agency and the Occupational Safety and Health Administration to obtain input from small entities for rulemaking efforts that are expected to have a significant economic impact on a substantial number of small entities; require all agencies to minimize the burden of information collections on the public; and require agencies to prepare an assessment of the anticipated costs and benefits for any regulation that includes a federal mandate requiring nonfederal parties to expend resources without being provided funding to cover the costs. Agencies reported complying with these four laws for nearly all economically significant regulations and the majority of significant regulations. Agencies less often complied with one or more CRA requirements. Over 25 percent of economically significant regulations did not comply with the CRA (see figure 4). We estimated that 15 percent of significant regulations published across all periods reviewed failed to meet at least one of the CRA requirements we reviewed. The most common CRA deficiency for economically significant regulations was agencies’ failure to provide Congress the required time to review and possibly disapprove regulations, which we had also identified as a deficiency in previous work. Among the most active regulatory agencies for economically significant regulations, the Departments of Health and Human Services and Transportation had higher rates of noncompliance than the government-wide percentages for both the transition and nontransition periods we reviewed. However, noncompliance was not limited to these two agencies; 17 of the 23 agencies that published economically significant regulations during the periods we reviewed had at least one noncompliant regulation. Though agencies are responsible for complying with CRA, OMB is responsible under Executive Order 12866 for oversight of agencies’ rulemaking, consistent with law, and reviews regulations before publication, which provides an opportunity to identify and help agencies avoid potential noncompliance. Economically significant regulations for which OMB completed its review within 3 months before the planned effective date were at high risk of not complying with CRA, thus increasing the risk that agencies would not provide Congress with the required time for its reviews.
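As a simplified illustration of the kind of screening that could catch these timing problems before publication, the Python sketch below computes the earliest effective date that would satisfy the CRA's delayed-effective-date requirement for major rules (generally 60 days after the later of congressional receipt or Federal Register publication, setting aside statutory exceptions) and flags a planned effective date that falls short. The dates are hypothetical, and using the CRA's "major rule" category as a stand-in for economically significant regulations is an illustrative simplification.

from datetime import date, timedelta

def earliest_cra_effective_date(received_by_congress: date,
                                published_in_fr: date,
                                major_rule: bool) -> date:
    # Simplified reading of the CRA: a major rule generally may not take effect
    # until 60 days after the later of congressional receipt or publication;
    # exceptions (e.g., good cause) are ignored in this sketch.
    latest = max(received_by_congress, published_in_fr)
    return latest + timedelta(days=60) if major_rule else latest

# Hypothetical example: a major rule published December 15 and received by
# Congress December 20, with a planned effective date of January 20.
planned = date(2017, 1, 20)
earliest = earliest_cra_effective_date(date(2016, 12, 20), date(2016, 12, 15), True)
if planned < earliest:
    print(f"At risk: earliest CRA-compliant effective date is {earliest}")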
We recommended that OMB, as part of its regulatory review process, identify economically significant regulations at potential risk of not complying with CRA and work with agencies to ensure compliance. OMB staff did not take a position agreeing or disagreeing with the recommendation.

One of the common themes in our work over several decades is the need for transparency in the regulatory review process and for opportunities to increase public participation and congressional oversight. The potential effects of guidance underscore the need for consistent and well-understood processes for the development, review, dissemination, and evaluation of guidance. Further, we found that while there were increased opportunities for public participation for regulations promulgated at the end of Presidents' terms, instances of noncompliance with delay requirements under the Congressional Review Act also increased. Ensuring that agencies consistently provide Congress with the required time to review, and possibly disapprove, regulations is important throughout a President's term, and particularly following a presidential transition, when Congress typically has a larger number of regulations to potentially review. Improvements in the transparency of the rulemaking process benefit not only the public but also congressional oversight.

Chairman Gowdy, Ranking Member Cummings, and Members of the Committee, this concludes my prepared statement. Once again, I appreciate the opportunity to testify on these important issues. I would be pleased to address any questions you or other members of the Committee might have at this time.

GAO Contact and Staff Acknowledgments

For questions about this statement, please contact me at (202) 512-2660 or nguyentt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony were Tim Bober, Tara Carter, Colleen Corcoran, Robert Cramer, Alix Edwards, Shirley A. Jones, Heather Krause, Barbara Lancaster, Michael O'Neill, and Andrew J. Stephens.

Related GAO Products

Federal Rulemaking: OMB Should Work with Agencies to Improve Congressional Review Act Compliance during and at the End of Presidents' Terms. GAO-18-183. March 13, 2018.

Regulatory Guidance Processes: Treasury and OMB Need to Reevaluate Long-standing Exemptions of Tax Regulations and Guidance. GAO-16-720. September 6, 2016.

Regulatory Guidance Processes: Agencies Could Benefit from Stronger Internal Control Practices. GAO-15-834T. September 23, 2015.

Regulatory Guidance Processes: Selected Departments Could Strengthen Internal Control and Dissemination Practices. GAO-15-368. April 16, 2015.

Federal Rulemaking: Agencies Could Take Additional Steps to Respond to Public Comments. GAO-13-21. December 20, 2012.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

Congress has often asked GAO to evaluate the implementation of procedural and analytical requirements that apply to agencies' rulemaking and guidance processes. The importance of improving the transparency of those processes, including providing for public participation and sufficient oversight, is a common theme throughout GAO's body of work on federal regulation. Based on GAO's prior work, this testimony addresses (1) the extent to which USDA, Education, HHS, and DOL adhered to OMB requirements and internal controls when developing regulatory guidance, and (2) agencies' compliance with the CRA for regulations promulgated during presidential transitions.

What GAO Found

The agencies GAO reviewed, the Departments of Agriculture (USDA), Education (Education), Health and Human Services (HHS), and Labor (DOL), did not consistently adhere to Office of Management and Budget (OMB) requirements and internal controls when developing regulatory guidance, as GAO reported in 2015. Unlike regulations, regulatory guidance is generally not legally binding and is subject to different requirements for regulatory oversight. Agencies weighed various factors when they determined whether to issue guidance. The agencies GAO reviewed issued different amounts of guidance for various purposes, such as explaining plans for implementing regulations. Agencies found few of their guidance documents to be "significant," that is, guidance with a broad and substantial impact on regulated entities. USDA and Education had written procedures for the approval of significant guidance as directed by OMB; DOL's procedures needed to be updated and distributed to appropriate agency officials; HHS did not have any. GAO found that USDA, Education, and DOL consistently applied OMB's requirements for public feedback and access, such as providing public access to guidance through websites, while HHS did not. Agencies can better ensure consistent application of review processes and public access to significant guidance through better adherence to OMB requirements. GAO also found opportunities for agencies to improve adherence to internal controls for guidance that did not meet OMB's definition of "significant." For example, most subagencies GAO reviewed did not have written procedures for the production of guidance, and about half did not regularly evaluate whether issued guidance was effective and up to date. Adherence to these internal controls could promote quality and consistency in guidance development processes.

GAO also found that agencies did not consistently comply with the Congressional Review Act (CRA) for regulations promulgated during the 120-day presidential transition periods (September 23 through January 20), as defined by the Presidential Transitions Improvements Act of 2015. GAO reported that the Clinton, Bush, and Obama administrations published, on average, roughly 2.5 times more economically significant regulations during the transition periods at the end of their administrations than during nontransition periods; such increases are typical of transition periods. For these regulations, agencies more frequently provided advance notice, giving the public opportunities to influence the development of transition-period regulations before they were finalized. In their published regulations, agencies generally reported complying with four of five procedural requirements for promulgating regulations during both transition and nontransition periods.
Agencies are required to (1) assess the impact of regulations on small entities, (2) minimize the burden that information collections impose on the public, (3) assess the costs and benefits of regulations that include federal mandates, and (4) for certain agencies, obtain direct input from small entities during rulemaking. A fifth requirement is that agencies comply with CRA, which provides Congress an opportunity to review and possibly disapprove regulations before they take effect. Agencies complied with CRA less often during both transition and nontransition periods. The most common deficiency was agencies' failure to provide Congress the required time to review regulations, which GAO has also identified as a deficiency in previous work.

What GAO Recommends

In the April 2015 report on regulatory guidance, GAO made eleven recommendations to USDA, Education, HHS, and DOL to ensure adherence to OMB requirements and applicable elements of internal controls. Three of these recommendations to HHS remain open: (1) develop written procedures for the approval of significant guidance, (2) strengthen the application of internal controls over guidance processes, and (3) improve its website. In the March 2018 report on rulemaking at the end of presidents' terms, GAO recommended that OMB, as part of its regulatory review process, identify economically significant regulations at risk of not complying with the CRA and work with agencies to ensure compliance. OMB staff did not agree or disagree with the recommendation.